
Every Leader’s Guide to the Ethics of AI

Until regulations catch up, AI-oriented companies must establish their own ethical frameworks.

by Thomas H. Davenport and Vivek Katyal

As artificial intelligence-enabled products and services enter our everyday consumer and business lives, there’s a big gap between how AI can be used and how it should be used. Until the regulatory environment catches up with technology (if it ever does), leaders of all companies are on the hook for making ethical decisions about their use of AI applications and products.

Ethical issues with AI can have a broad impact. They can affect a company's brand and reputation, as well as the lives of employees, customers, and other stakeholders. One might argue that it's still early to address AI ethical issues, but our surveys and others suggest that about 30% of large companies in the U.S. have undertaken multiple AI projects, with smaller percentages outside the U.S., and there are now more than 2,000 AI startups. These companies are already building and deploying AI applications that could have ethical consequences.

Many executives are beginning to realize the ethical dimension of AI. A 2018 survey by Deloitte of 1,400 U.S. executives knowledgeable about AI found that 32% ranked ethical issues as one of the top three risks of AI. However, most organizations don’t yet have specific approaches to deal with AI ethics. We’ve identified seven actions that leaders of AI-oriented companies — regardless of their industry — should consider taking as they walk the fine line between can and should.

Make AI Ethics a Board-Level Issue

Since an AI ethical mishap can have a significant impact on a company’s reputation and value, we contend that AI ethics is a board-level issue. For example, Equivant (formerly Northpointe), a company that produces software and machine learning-based solutions for courts, faced considerable public debate and criticism over whether its COMPAS system for parole recommendations involved racially oriented algorithmic bias. Ideally, consideration of such issues would fall under a board committee with a technology or data focus. Unfortunately, such committees are relatively rare; where none exists, the entire board should be engaged.