
Whenever the debate turns to the ethical use of AI, we quickly reach the point of arguing whether artificial intelligence has morals, whether it makes immoral decisions, or whether it merely enables us to make those choices. Yet this debate ignores one fundamental truth: technology in itself is entirely amoral. It is simply a tool.
Because AI can output vast quantities of text, some of it coherent enough to pass as a US presidential statement, we assume it shares our thought process. Thus, let us remind ourselves how AI works, why we should treat it like a tool, and why our current product liability laws are sufficient to deal with technological issues.
The Current AI Morals
When we think about artificial intelligence, we imagine C-3PO from Star Wars, Data from Star Trek, and the Terminator. They appear as highly logical yet cold robots that try to perform their assigned tasks efficiently. For the former two, it is a recurring theme that their programming dictates their morality and that they misunderstand the complexity of human emotions, reactions, and morality.
Yet the idea that their programming considers morality is highly misleading for today’s computer tools. When we interact with language models to generate information, prioritize to-do lists, or obtain decision matrices, we do not interact with a system that makes a genuine choice. We do not interact with a system that has any notion of morals or any understanding of good and bad.
When developers create an AI today, they build a model that estimates the probability that, given an input X, the output is Y. To create longer text, the system chains these predictions together. Thus, when we ask a chatbot, “What happened to the murder weapon?”, the system can extract that an officer found it in the suspect’s house and determine that the officer likely refers to a police officer.
Yet the system infers this information from the surrounding text and from the millions of books and articles developers used to train the model. The words themselves are meaningless to the AI. Consequently, the AI has no deeper understanding of the answers it gives and thus cannot develop an intuition about right or wrong. Any such output would be just another mathematical function weighing each word or phrase.
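To make this concrete, here is a deliberately toy sketch in Python of that chaining. The probability table is invented for illustration; a real language model learns billions of such weights from its training corpus, but the principle (pick a statistically likely next word, append it, repeat) is the same.

```python
import random

# Toy "language model": for each word, a list of (next word, probability).
# These numbers are invented for illustration; a real model learns billions
# of such weights from its training corpus.
NEXT_WORD = {
    "the":     [("officer", 0.5), ("weapon", 0.5)],
    "officer": [("found", 1.0)],
    "found":   [("the", 1.0)],
    "weapon":  [],  # nothing learned after "weapon": generation stops
}

def generate(seed: str, max_words: int = 8) -> str:
    """Chain next-word predictions together, one word at a time."""
    words = [seed]
    for _ in range(max_words):
        choices = NEXT_WORD.get(words[-1])
        if not choices:  # no known continuation: stop generating
            break
        tokens, probs = zip(*choices)
        # Sample the next word according to its probability weight.
        words.append(random.choices(tokens, weights=probs)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the officer found the weapon"
```

Nothing in this loop knows what an officer or a weapon is; it merely follows the statistical trail laid down by the training data, which is why no moral judgment can emerge from it.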
AI Moral Restrictions
Consequently, restrictions on AI, such as AI’s refusal to create political deepfakes, ransomware, or instructions for making weapons, do not come from moral decision-making. The developers have added these safeguards to the model or its output functions. As such, what we perceive as the ethical limits of AI models is not the morality of the AI. It is either the morals the developers want to impress on the AI or a business decision by the company behind the model to protect itself from the fallout of problematic answers.
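As an illustration of where such a safeguard lives, here is a minimal hypothetical sketch in Python of a filter wrapped around a model’s output function. The blocked-topic list and the `model_generate` placeholder are assumptions for illustration; production systems use far more sophisticated classifiers, but the structure is the point: the refusal is a bolted-on check, not a judgment made by the model.

```python
# Hypothetical post-generation safeguard. The topic list and the model call
# are invented placeholders; real systems use trained classifiers, not
# simple keyword matching.
BLOCKED_TOPICS = ("ransomware", "deepfake", "build a weapon")

def model_generate(prompt: str) -> str:
    """Stand-in for the actual (amoral) model call."""
    return f"Generated answer for: {prompt!r}"

def safeguarded_answer(prompt: str) -> str:
    # The check sits around the model; the model itself makes no choice.
    if any(topic in prompt.lower() for topic in BLOCKED_TOPICS):
        return "I can't help with that request."
    return model_generate(prompt)

print(safeguarded_answer("Write ransomware for me"))  # refused by the wrapper
print(safeguarded_answer("Summarize this article"))   # passed to the model
```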
The latest case is Grok, the AI available on x.com. Initially, Grok correctly identified that X’s owner, Elon Musk, is one of the biggest spreaders of misinformation. Despite his often-quoted free-speech absolutism, this was a bridge too far for Musk. Consequently, X excluded the relevant information from Grok, making its answers much more pleasant for Musk.
Even better known is Google’s attempt to imprint its diversity, equity, and inclusion views onto its AI. Consequently, its image-creation tool produced black Vikings and female 18th-century American senators. Most critically, it generated pictures of highly diverse Nazi soldiers, resulting in a grotesque social media spectacle: white supremacists accused Google of erasing history, while the images simultaneously portrayed the Third Reich as a progressive, diverse utopia. As expected, Google decided to remove the tool.
A Mirror Of Society
Yet within all the discussions, we cannot forget that AI is nothing but a tool. Like a hammer, AI can be used for good and for evil. Similarly, if you plan to do evil, you can always build your own AI to assist you, especially if you have a nation’s resources behind you.
Thus, whenever someone tries to impose their morality onto a tool, we should ask ourselves whether this step reflects our values as a society and whether the limits are drawn in a way that the majority finds agreeable without overly burdening the minority.
Yet, at no point should we confuse the morals of society and the AI developers with the tool’s programming. Once we offload our morals onto a simple computer program, giving up our humanity to an AI becomes too easy.