
AI needs responsible adults

Minna Mustakallio • Lead Designer, Data & Ethics

I have been in the digital industry since 1999. For almost twenty years it has been like one big children’s birthday party: we’ve run around the place, gleefully experimenting with the internet, mobile, digital transformation… until now. Now AI is ruining the party.

According to the hype, AI is bringing on the post-industrial revolution. At the moment it is still evolving – machine learning technologies are increasingly being used to profile customers, make predictions, match people, replace or augment humans and, perhaps most importantly, mediate decisions. The results are mostly impressive and, at the same time, troubling. A growing amount of historical data is fed to algorithms that try to predict the future – scaling up the work that statistical analysts have been doing for a long time. We humans are already struggling to keep up and to understand who should decide what in this new equation. It’s less about superintelligence and more about how to work intelligently together – and what the implications are for the decisions we make.

Unpredictable AI agents

We are entering the era of unpredictable AI agents, and the ethical questions are anything but trivial. It’s about black box algorithms – complex systems with huge numbers of data points, learning from what we think is a sufficient amount and quality of historical data. It’s about the agency problem – who is accountable for algorithmic decisions. Most of all, it’s about understanding that learned systems are not only technical: diverse viewpoints and skillsets are needed to address their ethical implications.

There is a plethora of ethical issues to consider in algorithmic decision-making:

  • Intelligent systems have the potential to be used to manipulate us (Cambridge Analytica)
  • They can be used to impose decisions on individuals based on probabilities instead of their actual behavior
  • They may be riddled with bias, like the infamous COMPAS recidivism tool (see the sketch after this list)
  • They can be made inscrutable – hiding the why in their black box
  • They can challenge the autonomy of their users by treading a fine line between supporting and controlling our decisions
  • They can be used as an ultimate surveillance machine (like China’s social credit system)
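
To make the bias point concrete: one common check is to compare a model’s error rates across groups, which is essentially how the COMPAS disparity was surfaced – the tool produced false positives for one group far more often than for another. The Python sketch below is a minimal, hypothetical illustration of such an audit; the data, group labels and numbers are made up, not drawn from any real system.

```python
# A minimal sketch of a bias audit on a binary classifier's output.
# All data here is hypothetical and purely illustrative.

def false_positive_rate(y_true, y_pred):
    """Share of actual negatives that the model wrongly flags as positive."""
    negatives = [p for t, p in zip(y_true, y_pred) if t == 0]
    return sum(negatives) / len(negatives) if negatives else 0.0

# Hypothetical audit data: true outcomes, model predictions, group membership.
y_true = [0, 0, 1, 0, 1, 0, 0, 1]
y_pred = [1, 0, 1, 0, 1, 1, 0, 1]
group  = ["a", "a", "a", "a", "b", "b", "b", "b"]

for g in sorted(set(group)):
    idx = [i for i, x in enumerate(group) if x == g]
    fpr = false_positive_rate([y_true[i] for i in idx],
                              [y_pred[i] for i in idx])
    print(f"group {g}: false positive rate = {fpr:.2f}")

# A large gap between groups is the kind of disparity found in COMPAS:
# the model errs against one group far more often than the other.
```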

This potential for unethical use and harmful consequences is now widely recognized, even by the companies that benefit the most:

“How will they affect employment across different sectors? How can we understand what they are doing under the hood? What about measures of fairness? How might they manipulate people? Are they safe?” – Sergey Brin, April 2018

Grownups take responsibility for their actions

We, the people who work directly on developing intelligent, learned systems, have a responsibility to keep in mind what questions we ask of these systems, how we ask them and what the answers are used for.

There’s plenty we can do to make AI fair and safe, but as yet we – the humans in the machine – are not used to mixed human-machine decision-making. There is no tradition and there are no best practices. The gap lies in understanding the consequences of digital and digitally assisted decision-making, and it will only get wider if we don’t start figuring out what to do about it.

We must work on closing the gap on several levels:

  • Practical level: Best practices, tools and awareness of ethical questions. This should happen both top-down and bottom-up – ensuring the commitment of top management while concentrating on the guidance and knowledge employees need in challenging, concrete situations.
  • Strategic and cultural level: A company needs ethical principles and responsible data strategies, new ways of working together with machines, and an understanding of what this means for organisational change and for trust within its ecosystem. It is time for every organisation to discover the principles by which it wants to be guided.
  • Societal level: Society needs a vision of what kind of intelligence-augmented society we want to build, and a regulatory environment that supports this goal – starting with an open dialogue between companies, citizens, researchers and regulators.

One example of an already ongoing effort is Finland’s national working group for ethical AI. The group’s first action is a challenge for all Finnish companies to commit to creating their own principles of ethical AI. Futurice has been part of this work and we are already committed – how about your company? And globally, AI is being applied in practically every industry. If you’re thinking about what AI can do for your business – get in touch!
