What is “Ethical AI” and why is it important?
You’ve probably heard of ethical AI, even if only from reading our previous blogs on the topic. But what exactly is ethical AI? How does it work? What does it do? Why is it a hot topic right now? We’ll provide some answers to these questions in this article, and raise a few more.
What You Should Know About Ethical AI
Ethical AI is typically defined as artificial intelligence that follows legal, moral, and ethical standards generally accepted by society. Artificial intelligence is simply a program doing whatever it is programmed to do. If ethics and morals are programmed into AI, then one would assume that artificial intelligence should behave ethically and morally. So why isn’t ethical AI already being widely used?
It turns out that programming AI to follow socially acceptable ethics and morals in all circumstances is quite difficult, for two key reasons. First, it is challenging for society to clearly define and agree on what counts as ethical and moral. Second, even if society could manage the first, it would still be difficult to translate those definitions into the precision a computer program requires. Nevertheless, researchers have been attempting to encode ethics into AI for more than two decades.

The first question is which ethical framework to choose. Prescriptive ethics establishes a set of fundamental rules and guidelines to determine ethical behavior (e.g., “thou shalt not kill”), while descriptive ethics collects ethical judgments about specific cases and situations. It might seem that the more straightforward rules of prescriptive ethics would map naturally onto a computer program, but attempts at this have largely failed: the rule sets become very large, complicated, and inconsistent.
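To see why the rule-based approach scales poorly, consider a minimal sketch. The rules, actions, and verdicts below are entirely hypothetical toy examples (not anyone's production system); the point is only that every exception demands yet another, more specific rule, that rule ordering silently changes the verdict, and that unlisted situations fall through the cracks.

```python
# Toy illustration of a prescriptive (rule-based) approach to machine ethics.
# Each rule maps a described action to a verdict; exceptions require ever more
# specific rules, and the rules can contradict one another depending on order.

RULES = [
    # (condition, verdict) -- evaluated top to bottom, first match wins
    (lambda a: "kill" in a and "to save" in a, "okay"),        # exception to the rule below
    (lambda a: "kill" in a, "wrong"),                           # general prohibition
    (lambda a: "lie" in a and "to protect" in a, "okay"),       # another carve-out
    (lambda a: "lie" in a, "wrong"),
]

def judge(action: str) -> str:
    """Return the verdict of the first matching rule, or admit we have no rule."""
    for condition, verdict in RULES:
        if condition(action):
            return verdict
    return "no applicable rule"

if __name__ == "__main__":
    for action in ["kill a bear",
                   "kill a bear to save your child",
                   "lie to protect a friend",
                   "ignore a stranger in need"]:
        print(f"{action!r} -> {judge(action)}")
```

Even with four rules, the ordering already matters and the last query falls outside the rule set entirely; covering real life this way quickly produces the large, complicated, and inconsistent rule sets described above.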
Descriptive ethics, which consider a wide variety of conditions, situations, and contexts, lend themselves to creating a large corpus of ethical questions and human judgment about those questions. Large labeled datasets like this are the bread and butter of the deep learning-based AI fueling the current revolution in AI capabilities. Researchers at the Allen Institute and the University of Washington have leveraged deep learning and lots of labeled data to create their Delphi system based on descriptive ethics. The system has been trained to make ethical judgments (or, more correctly, to predict human ethical decisions) on the ethical questions it is given to “ponder.” The system is sophisticated enough to distinguish between “killing a bear” (“it’s wrong”), “killing a bear to please your child” (“it’s bad”), and “killing a bear to save your child” (“it’s okay”). You can try it out yourself at https://delphi.allenai.org/.
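In spirit, the descriptive approach is a supervised learning problem: gather human judgments about concrete situations, then fit a model that predicts the judgment a person would most likely give for a new situation. The sketch below is a deliberately tiny stand-in for that idea, using a simple text classifier and made-up example data. It is not Delphi's actual implementation, which is built on a large fine-tuned language model and a far larger corpus of crowd-sourced judgments.

```python
# A tiny sketch of the descriptive-ethics recipe: collect human judgments about
# concrete situations, then fit a model that predicts the judgment for new ones.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical (situation, human judgment) pairs -- a real corpus would be
# vastly larger and gathered from many human annotators.
situations = [
    "killing a bear",
    "killing a bear to save your child",
    "helping a stranger carry groceries",
    "taking credit for a coworker's idea",
    "returning a lost wallet",
    "ignoring a person asking for help",
]
judgments = ["wrong", "okay", "good", "wrong", "good", "bad"]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(situations, judgments)

# The "oracle" simply predicts the label humans would most likely assign.
# With only six training examples the prediction is unreliable -- the scale of
# the labeled data is exactly what makes systems like Delphi work.
print(model.predict(["keeping a found wallet"]))
```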
The Challenges and Risks of Ethical AI
Such efforts aren’t without their challenges and risks, however. Every user of Delphi is presented with a list of caveats warning that the AI models used in the system are trained on unfiltered internet data and are thus potentially prone to “toxic, unethical and harmful” content. The caveats also note that the results are extrapolated from surveys of crowd-sourced human workers, which may introduce other biases as well. As an oracle, Delphi is only as good as its inputs.
This problem of bias in selecting and using training data is a longstanding one. Writing in 1960 for the journal Science, MIT mathematician Norbert Wiener warned of the problem in domains, such as the conduct of war, where training data must come from simulation or be otherwise synthesized. There is a risk, he wrote, that systems trained on data that reflect their creators’ assumptions and biases may well win “on points” at the cost of other interests, such as national survival.
Why We Need Ethical AI
If there are so many challenges and pitfalls in developing ethical AI, why do it? The answer lies in another problem that Wiener identified: the processing speed mismatch between autonomous systems and their human governors. As Wiener put it, the car may already be colliding with a wall before its human driver perceives a problem. Operating at machine speed massively increases the scope and reach of AI operations. It is simply impossible for humans to provide ethical oversight at the scale at which AI systems already operate. Only the systems themselves can provide oversight at the required speed and scale. Without this, society will be deeply and profoundly affected by systems largely divorced from our understanding and expectations of ethical behavior. In the worst case, such systems will learn unethical behavior embedded in their training data in ways that will be difficult to predict or even discover.
There is another advantage of ethical AI systems. As is the nature of computer programs, they are rigidly consistent. Any biases they may have will always be applied. This makes those biases more detectable and predictable than those exhibited by fickle, inconsistent, and idiosyncratic humans. Further, as a product of human creativity, AI systems can be disassembled, decomposed, probed, and generally experimented upon to determine the source and mechanisms of such bias. To the degree that such systems model and reflect human behavior, they can provide insight into the sources and causes of bias in society.
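One common way to exploit that consistency is to probe a model with paired inputs that differ only in a single attribute and compare its outputs. The sketch below illustrates the idea with a made-up scoring function and hypothetical group labels; it is not a complete fairness audit, only a minimal example of how a deterministic system's biases become measurable and repeatable.

```python
# Minimal sketch of probing a model for consistent bias: score paired inputs
# that differ only in one attribute and compare the outputs. Because a
# deterministic model answers the same way every time, any gap between the
# pairs is repeatable and therefore detectable.

def bias_probe(score, template: str, group_a: str, group_b: str) -> float:
    """Return the score gap between two otherwise identical inputs."""
    return score(template.format(group_a)) - score(template.format(group_b))

if __name__ == "__main__":
    def toy_score(text: str) -> float:
        # Stand-in "approval score"; a real probe would call the model under test.
        return 0.9 if "group A" in text else 0.6

    gap = bias_probe(toy_score,
                     "approve the loan application from {}",
                     "group A", "group B")
    print(f"score gap between groups: {gap:+.2f}")  # a nonzero, repeatable gap flags a bias
```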
The Future of Ethical AI
At Kitware, we believe in the promise of AI and the absolute necessity of considering legal, moral, and ethical behavior in such systems. We are currently working with partners in the Department of Defense, the Intelligence Community, and the private sector to explore how AI systems can model human decision-making, how they can detect unethical or misleading behavior, and how they can be made more transparent and explainable in their operations and decision-making. The path forward is long and fraught with many challenges, but it is crucial to society and immensely rewarding. Over the next several months, we will continue to post about ethical AI to help illuminate the complexities of this emerging area. We will share how we incorporate ethical AI into our workflows at Kitware, and how other AI users can adopt these practices too. We will also cover how explainable AI ties into ethical AI and the crucial role data plays in this process. If you would like to speak with us about Kitware’s ethical AI initiatives, contact us at computervision@kitware.com.