When we discuss the future, we need to find a way to tap into human creativity while assessing actions and their consequences, argues Petri Rikkinen in the fifth and last part of his blog series on future-proofing decision-making.
This is the final instalment of a 5-part series. You can find the previous posts here:
“The use of knowledge in politics is (and should be, to some degree) purpose driven.” Sitra’s Knowledge in decision-making in Finland report puts it well. Even though I’m writing about the use of mathematical modeling to support strategic decision-making, my interests are in concrete actions and using our knowledge in practice. To learn more about that, I recommend that you start from the top and take a look at the four parts previously published in this series.
In its simplest form, the decision-making process in an organization has three main steps:
- What’s the problem at hand?
- What are the possible actions?
- Which one is the best action?
The issue here is that, depending on the person, the problem will invariably look different. That's due to what each of us takes for granted, and to our personal desires and intentions (unconscious bias). It means that both actions and problems are intertwined with what we consider our "purpose". Finding answers to the three simple questions above is incredibly complicated.
Herbert Simon, a Nobel laureate and Turing Award winner, devoted his life to understanding human decision-making. Today, he is also considered a pioneer of AI decision-making. He suggested the following steps for making decisions:
- The identification of all the possible actions (or alternatives)
- The determination of the consequences of all possible actions
- The evaluation of the consequences of each possible action
He believed that information constrains decisions, and he was very much interested in the process of decision-making. Note that his list doesn't include a single mention of the best course of action. After all, the best course of action depends on whose perspective you take – best for whom? You may have read what happened when Boston Public Schools tried for equity with an algorithm. Even though the algorithm did exactly what was asked, it failed miserably.
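The "best for whom?" point can be made concrete with a small sketch of Simon's three steps. Everything below is illustrative: the actions, consequence scores, and stakeholder weights are invented for the example, not taken from any real model. The point is that steps 1 and 2 (actions and consequences) can be shared, while step 3 (evaluation) depends on whose weights you apply.

```python
# A minimal sketch of Simon's three steps. The actions, criteria and
# numbers are hypothetical; they only illustrate that "best" depends
# on the evaluator's weights.

# Step 1: possible actions, and Step 2: their assumed consequences
# on two criteria (cost saved and equity, both scored 0..1).
consequences = {
    "route_by_distance": {"cost_saved": 0.9, "equity": 0.2},
    "route_by_lottery":  {"cost_saved": 0.3, "equity": 0.9},
    "hybrid_routing":    {"cost_saved": 0.6, "equity": 0.6},
}

# Step 3: evaluation is perspective-dependent -- each stakeholder
# weights the same criteria differently.
stakeholders = {
    "budget_office": {"cost_saved": 0.8, "equity": 0.2},
    "families":      {"cost_saved": 0.2, "equity": 0.8},
}

def best_action(weights):
    """Return the action with the highest weighted score."""
    def score(action):
        return sum(weights[c] * v for c, v in consequences[action].items())
    return max(consequences, key=score)

for who, weights in stakeholders.items():
    print(who, "->", best_action(weights))
# Different weights, same facts, different "best" action.
```

With these made-up numbers, the budget office's weights favour `route_by_distance` while the families' weights favour `route_by_lottery` – the same consequence data, two defensible answers.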
Dialogue, mutual understanding and transparency are crucial for decision-making.
I'm interested in the mathematical modeling of human decision-making because it also makes decisions more transparent and easier to communicate. In the EU scenario case I've mentioned earlier in the series, we used a multidisciplinary approach that brought together people from various backgrounds in business, non-profits, academia and policy-making. A model can also be wrong – mathematical models are never complete representations of reality – which is why we need people to give feedback.
Diversity creates surprising new connections and insights that math, in turn, helps us identify. One expert might speak about openness but then opt for actions that create protectionism. Another expert might think that protectionism is actually good for the internal market, though it conflicts with our values, and so on. Seeing the analysis results is a starting point to understanding each decision-maker’s agenda and how they fit together – or don’t.
The analysis behind our model indicated that values don't have an impact on the well-being of EU citizens. A controversial result for many – haven't we, for example, chosen to support everyone equally in the Nordics?
When tracing back this insight about values, we could see that experts do indeed consider values in their evaluations, but the importance of values is diluted along the way as new decisions are made. This sparked a lively discussion about whether this value erosion in decision-making also happens in real life.
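This dilution mechanism can be illustrated with a toy calculation. To be clear, this is not the project's actual model: it is a hypothetical sketch assuming that each decision stage re-weights earlier considerations against new, stage-specific criteria, so only a fraction of a criterion's weight carries forward at each step.

```python
# Toy illustration of value erosion in chained decisions.
# Assumption (hypothetical): each stage keeps only a fraction
# `carry_over` of the previous stage's weighting for a criterion.

def effective_weight(initial_weight, carry_over, stages):
    """Effective weight of a criterion after `stages` chained
    decisions, with geometric dilution at every stage."""
    return initial_weight * carry_over ** stages

# Say "values" start at 40% importance and 60% carries forward
# at each stage: the influence fades quickly.
for n in range(4):
    print(f"after {n} stage(s): {effective_weight(0.4, 0.6, n):.3f}")
```

Even though values are genuinely considered at every single stage, their effective influence on the final outcome shrinks geometrically – which matches the pattern we saw when tracing the result back.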
Unlike in the movies, there isn't a hero who will come and save the world. You can't just go ahead and solve gargantuan problems such as climate change single-handedly, but they are something to work on regardless. Strategic issues in companies are also complicated and can only be tackled by a group of experts. They require close collaboration across different areas of expertise, thinking together, and bringing each participant's certainties, desires and intentions into the same discourse.
What I propose here is that using better knowledge as the basis for decision-making also requires greater dialogue. And that’s what Sitra’s report calls for, as well.
I’m sorry to say I don’t have any ready answers at this stage – but I’m excited to hear what ideas, if any, these posts on the value of modeling expert opinions about the future, using math to challenge your bias and AI psychotherapy have given you. What do you agree or disagree on, and what new thoughts would you like to share? Please get in touch.