I have been in the digital industry since 1999. For almost twenty years, it's been like one big children's birthday party. We've run around the place, gleefully experimenting with the internet, mobile, digital transformation… until now. Now AI is ruining the party.
According to the hype, AI is bringing on the next industrial revolution. At the moment it's still evolving – machine learning technologies are increasingly being used to profile customers, help with predictions, do matchmaking, replace or augment humans and, perhaps most importantly, mediate decisions. The results are mostly impressive, and at the same time troubling. A growing amount of historical data is used to feed algorithms that try to predict the future – scaling up the work that statistical analysts have been doing for a long time. We humans are already struggling to keep up and to understand who should decide what in the new equation. It's less about superintelligence and more about how to work intelligently together – and what the implications are for the decisions we make.
We are entering the era of unpredictable AI agents, and the ethical questions are anything but trivial. It's about black-box algorithms – complex systems with huge numbers of data points learning from what we assume is a sufficient amount and quality of historical data. It's about the agency problem – who is accountable for algorithmic decisions. Most of all, it's about understanding that learned systems are not only technical: diverse viewpoints and skillsets are needed to address their ethical implications.
There is a plethora of ethical issues to consider in algorithmic decision making.
This potential for unethical use and consequences is now widely recognized, even by the companies who benefit the most:
“How will they affect employment across different sectors? How can we understand what they are doing under the hood? What about measures of fairness? How might they manipulate people? Are they safe?” - Sergey Brin, April 2018
We, the people who work directly on developing intelligent, learned systems, have a responsibility to keep in mind what questions we ask of these systems, how we ask them, and what the answers are used for.
There's plenty we can do to make AI fair and safe, but as yet we – the humans in the machine – are not used to mixed human-machine decision-making. There is no tradition, and there are no best practices. The gap lies in understanding the consequences of digital and digitally assisted decision-making, and it will only get wider if we don't start figuring out what to do about it.
We must work on closing this gap on several different levels:
One example of already ongoing efforts is Finland's national working group for ethical AI. The group's first action is launching a challenge for all Finnish companies to commit to creating their own principles of ethical AI. Futurice has been part of this work and we are already committed – how about your company? And globally, AI is being applied in practically every industry. If you're thinking about what AI can do for your business – get in touch!