
On Human–Machine Collaboration

Petri Rikkinen • Emerging business & culture initiatives

Familiarizing yourself with mathematical models prepares you to work with artificial intelligence as your new colleague.

In the fourth part of his blog series on future-proofing decision-making, Petri Rikkinen explains how familiarizing yourself with mathematical models prepares you to work with artificial intelligence as your new colleague.

In the previous parts of this series, I have covered three ideas. The first was a suggestion to use mathematical modeling to future-proof decision-making – to see future opportunities instead of simply being dissatisfied with how things are now. The second part was all about using mathematical modeling to uncover your decision-making biases. And finally, the third part highlighted that there is a business case for modeling vague information about the future.

For the remaining two entries, I will expand my perspective from business decisions towards slightly larger themes. This part will be all about AI and human decision-making, and the final part will focus on social impact.

My original background is in psychotherapy (art and family therapy, to be precise), so I’ll start off by discussing the psychotherapy of AI. In his 1920 work Beyond the Pleasure Principle, Sigmund Freud introduced a theory of two drives that motivate us – life and death, Eros and Thanatos. Freud posited that we gravitate towards pleasure while wanting to preserve ourselves, even using aggression or violence as a means. Freud also famously introduced his three-level model of the human mind. Without delving too deep into those theories, I believe we are currently in a similar situation with this thing we like to call ‘AI’: we want to understand the motives and preferences of artificial intelligence, what it is willing to do to preserve itself, and how its mind is layered.

Today, most intelligent machines are not making decisions by themselves – instead, the decisions most AIs make were already made in advance by their developers. And those developers are clever. Numerous machines help our daily life operate smoothly, such as the systems that get trains and planes to take us where we want to go. Here, the AI solutions make ‘continuous decisions’ by drawing on large amounts of existing data and comparing new information to previous cases on record. These machines help us have Eros in our lives – to enjoy the pleasures of life and to preserve ourselves.
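To make that idea of comparing new information to previous cases on record a bit more concrete, here is a minimal sketch in Python. The scenario, feature values and decisions are hypothetical stand-ins invented for illustration; a real system would learn from millions of records rather than three.

```python
# A minimal sketch of 'continuous decisions' by comparison to past cases:
# a nearest-neighbour lookup over previous records.
# The features and records below are hypothetical.
import math

# Past cases on record: (feature vector, decision that was made)
past_cases = [
    ((0.9, 0.1), "depart on schedule"),
    ((0.4, 0.7), "hold for connection"),
    ((0.1, 0.9), "re-route"),
]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def decide(new_case):
    """Repeat the decision attached to the most similar previous case."""
    _, decision = min(
        (distance(features, new_case), past_decision)
        for features, past_decision in past_cases
    )
    return decision

print(decide((0.5, 0.6)))  # -> hold for connection
```

The point is not the algorithm but the shape of the decision: nothing here is invented on the spot – every recommendation is an echo of choices the developers recorded in advance.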

As machines become more capable of helping us in situations that neither the user nor the program can foresee, we have also started to think more about aggressive AI – case in point: drones that decide whom they kill. Discussions around ethical AI are often what systems thinker Gregory Bateson would call double binds. In a double bind scenario, both options you must choose from are wrong, and the question only serves to create conflict: “Should an autonomous car rather kill a young person or an old one?” An alternative question without the bind could be, “What level of safety is good enough for autonomous cars?”

Building AIs that make decisions requires that we know ourselves better when creating these machines. And when working with intelligent machines, it helps to understand how they work. At their core, algorithms and machines run on mathematical theories. When you use, say, the Facebook app, the math is simply covered with a skin (hardware and software), a pleasurable appearance (design) and some mechanisms for interacting with it (features, buttons, and so on).

The mathematical models we use are not nearly as polished. There is a mathematical representation, code and, usually, some non-designed output such as a matrix or a table of results. Yet with the help of these models we can, for example, simulate and evaluate thousands of action combinations – a task far greater than anyone could manage alone. The benefit of these explorative systems is obvious. As with a simple car or bike, it is much faster to repair when necessary, and much easier to understand which part affects what. And regardless of its simplicity, it will still take you where you want to go.
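As a rough illustration of what evaluating thousands of action combinations can look like, here is a minimal sketch. The action names and the scoring function are hypothetical placeholders; a real model would simulate outcomes rather than sum up weights.

```python
# A minimal sketch of exhaustive exploration: enumerate every combination
# of candidate actions and score each one with a simple model.
# The actions and the scoring function are hypothetical.
from itertools import product

actions = {
    "pricing":   ["keep", "raise", "lower"],
    "marketing": ["none", "campaign A", "campaign B"],
    "hiring":    ["freeze", "grow"],
}

def score(combination):
    """Toy evaluation model; a real one would simulate outcomes."""
    weights = {"raise": 2, "lower": -1, "campaign A": 3,
               "campaign B": 1, "grow": 2}
    return sum(weights.get(choice, 0) for choice in combination.values())

# Enumerate all combinations (3 * 3 * 2 = 18 here; real models handle thousands)
combinations = [
    dict(zip(actions, choice)) for choice in product(*actions.values())
]
best = max(combinations, key=score)
print(best, score(best))
```

Even this toy version enumerates every combination instantly; scaling the same loop to thousands of alternatives is trivial for a machine and impossible by hand.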

When management gets involved with the kind of modeling we’ve suggested, it’s important that they also start asking questions about technology, decision-making and how the two are related. Their focus should be on understanding why we get certain recommendations, whether those recommendations reflect the experience of the participants, why certain theories and technologies are used, and what their limitations and strengths are. If you are familiar with math-supported decision-making, you are also better able to interpret the results and how we arrived at them. In the process, you will become more mindful and critical of the AI’s recommendation logic.

When the asking ends, the thinking ends as well. So asking better questions before making THE decision never hurts. Practice questioning by downloading and reading our Future Forces 2019 report. We also have tools to help you interpret these forces and what they mean for your business.

Additionally, you might want to have a look at what we think about peace technology, what it is, and how it will change the world: http://www.understandingpeace.net/
