Wise Leadership and AI: Can We Trust AI to Tame Complexity?

A runaway streetcar is hurtling towards five people lying on the track. You can redirect it to a side track, killing the one person lying there but saving the five, or you can do nothing. What happens when a self-driving car, facing an unavoidable collision, has to algorithmically choose whether to sacrifice its occupants or risk fatally harming others?

This article offers a deeper dive into the workings of AI and its ethical implications. We’ll look at its darker side, showing how AI will need a moral framework if it is to be a force for good. And we’ll prepare the ground for Article 3 in our series, The Road To Singularity, which examines where AI could take us next.


Bringing ethics into machine learning raises a host of ‘trolley problems’

Business ethicist Edward Freeman has posed this modern version of a classic ethical thought experiment, the ‘trolley problem’. How should ‘robo-ethics’ deal with situations like this? How should developers be guided in writing the code? Facts are distinct from values.

At what number of people at risk of injury or death should the computer decide to redirect a car from one potential victim to another? Judgments about moral and ethical choices are as important as they ever were.

Increasingly, big data is influencing, even driving, our world. Twenty years ago, about 25% of data were digitized. Today, 97% are. Data really are becoming the new oil, and we are seeing a move from financial capitalism to a form of data capitalism. Several factors underlie this: the growth in internet traffic and network effects, the accumulation of massive data sets, and the data capacity and analytical power of computers.

In the first article of this Amrop series exploring Wise Leadership and AI, we argued that while human and artificial intelligence will compete for jobs in a world driven by Industry 4.0 (and beyond), they will increasingly collaborate, complementing each other.

Cognitive systems can perform specific tasks, becoming more intelligent by the minute via feedback loops. But entire jobs, and envisioning a future, remain beyond their scope.

We also argued that wise, not just smart, leadership will be needed to bring out the best in AI — the most transformational technology of our age. Without a change in leadership, AI may even pose some risks. The problems and dilemmas of business cannot be solved by algorithms alone. And as the trolley problem demonstrates, a host of ethical issues awaits us.

Main messages

Business and social dilemmas cannot be solved by algorithms alone:

A host of ethical issues awaits us.

AI is vulnerable to human flaws:

Machine learning, designed to emulate the human brain, is also subject to bias.

AI is a black box — trust needs building:

In the 2017 PwC CEO Pulse survey, 76% of respondents cited a lack of transparency as impeding AI adoption in their enterprise.

AI is far smarter than us - in specialized intelligence:

It performs tasks faster and better when limited to a specific domain, such as IBM’s current Watson applications.

AI is solving problems not previously seen as prediction problems: 

Autonomous driving is just one example: the task is recast as answering a host of ‘if/then’ prediction questions.

Consider yourself a conditioning event: 

AI uses conditional probability, ‘if/then’ logic based on past data, to predict what you’ll do, or want, next (see the sketch after this list).

Boards are still paying lip service to digital technology: 

Digital leaders tell Amrop that boards still lack digital literacy, or an awareness of how digital technology can transform business models.

If the app is free, you’re the product: 

More data means less privacy, more speed, more autonomy, less control. These are the current trade-offs.

A few quasi-monopolies will likely dominate the scene: 

The ethical implications, as well as the risk of systemic failure, cannot be ignored.

As ‘hackable animals’, we face a philosophical crisis:

Leading observers and digital entrepreneurs see an existential threat. Society has been built on ideas that pre-dated technology.

‘Amplified bias’ is creating echo chambers: 

AI determines what information to show us, on the basis of choices we have already made.

Historical data doesn’t make for a fair future: 

Algorithms base their predictions on past actions, reinforcing old habits and prejudices, and stigmatizing social groups.

Privacy policies need more work: 

Despite data protection, research shows that when (for example) a Netflix customer rates 6 obscure movies (outside the top 500), s/he can be identified 84% of the time.

We have every chance of getting it right: 

Research suggests that humans are essentially collaborative and fair. By doing good, we stay in the game.
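To make the ‘conditioning event’ idea concrete, here is a minimal, hypothetical sketch of that ‘if/then’ logic: estimating the conditional probability of your next action from past behavior. The data, names and structure are invented for illustration; this shows the principle, not any vendor’s actual system.

```python
from collections import Counter, defaultdict

# Hypothetical event log: for each user, the sequence of content they interacted with.
history = {
    "user_1": ["news", "sports", "news", "finance"],
    "user_2": ["sports", "news", "sports", "sports"],
    "user_3": ["news", "finance", "finance", "news"],
}

# Count transitions: how often item B follows item A across all users.
transitions = defaultdict(Counter)
for events in history.values():
    for current_item, next_item in zip(events, events[1:]):
        transitions[current_item][next_item] += 1

def predict_next(current_item):
    """Estimate P(next | current) from past data - the 'if/then' logic."""
    counts = transitions[current_item]
    total = sum(counts.values())
    return {item: count / total for item, count in counts.items()} if total else {}

# 'You' are the conditioning event: given that you just read the news,
# what does the past data suggest you will want next?
print(predict_next("news"))  # {'sports': 0.5, 'finance': 0.5}
```

The point of the sketch is the conditioning: the prediction is not about what people do in general, but about what people have done after doing exactly what you just did.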

Handle With Care: 4 Leadership Implications

Even if AIs (when properly designed and deployed) are better predictors than humans, we’re still often reluctant to hand them the reins. Understandably so. Where AI outperforms people, companies must carefully consider its operating conditions, including when to empower humans to exercise their discretion and override it.

1 - Sharpen the mission

Corporations that benefit most from AI will clearly specify their objectives, sharpening fuzzy mission statements. They will remember that AI is at its best when serving specific domains of expertise, assistance or automation, and that the methods used to train AI work best when tied to clear goals. What makes AI so powerful is its ability to learn, under the right conditions.

2 - Manage the learning loop

We tend to think of labor as learning, and capital as fixed. Now, with AI, we have capital that learns. So companies must ensure that information flows into decisions, decisions are pursued to an outcome, and learning flows back into the decision-making system. Managing the learning loop will be more valuable than ever before.
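To make the loop concrete, here is an illustrative Python toy, not a prescription; the campaign names, success rates and epsilon-greedy choice rule are all invented for the example. A model informs a decision, the decision is pursued to an outcome, and the outcome flows back to update the model.

```python
import random

# Toy model: the estimated success rate of each possible action, learned from outcomes.
model = {"campaign_a": 0.5, "campaign_b": 0.5}
LEARNING_RATE = 0.1

def decide(model, epsilon=0.1):
    """Information flows into a decision: usually pick the best-rated action,
    occasionally try another one to keep learning."""
    if random.random() < epsilon:
        return random.choice(list(model))
    return max(model, key=model.get)

def observe_outcome(action):
    """Placeholder for the real world: a noisy success/failure signal per action."""
    true_rates = {"campaign_a": 0.4, "campaign_b": 0.7}
    return random.random() < true_rates[action]

def update(model, action, success):
    """Learning flows back into the decision-making system:
    nudge the estimate for the chosen action toward the observed outcome."""
    model[action] += LEARNING_RATE * ((1.0 if success else 0.0) - model[action])

# The managed learning loop: decide, act, observe, learn - then repeat.
for _ in range(500):
    action = decide(model)
    success = observe_outcome(action)
    update(model, action, success)

print(model)  # Estimates move toward each action's underlying success rate.
```

Capital that learns is capital wrapped in exactly this kind of loop; the managerial task is to make sure none of the three flows (into the decision, to the outcome, back into the system) is broken.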

3 - Address the ethics

Data-driven markets offer compelling advantages. Innovation and progress should not be stifled by irrational fears or over-regulation. But it is critical to address the shortcomings and ethical challenges, especially regarding the concentration of data and possible systemic failure.

4 - Take responsibility

A meaningful future requires that corporate leadership takes responsibility. Taking responsibility is a socio-economic act that only a conscious mind can perform. It implies a social contract between humans: aiming to progress in a commercially effective manner whilst holding a clear, broader and inspiring social purpose in mind.
