From the West to the East, many countries and communities are considering AI regulation. Yet, in their zeal to ban the use of AI, they often overlook how ineffective regulation will be at controlling its excesses. Let us dive into why we shouldn’t believe that a Big Brother regulator will solve AI’s problems for us.
Can Someone Make The Problems Go Away?
AI has tremendous potential to harm our society. From disinformation and privacy violations to personalized ransomware, we don’t have to dive into the worst Matrix-like doom-and-gloom scenarios to find problems with this emerging technology. These issues seem impossible for any of us to overcome alone. Thus, we wish for someone else to take care of the problems AI presents to us. Preferably, the solution costs us nothing and doesn’t require us to change our lifestyles.
Thus, suddenly, AI regulations appeared in all corners of the world. China focuses on preventing disruptions to the societal order and on preserving party control. The EU is trying to make AI more “trustworthy.” In the USA, the courts will likely settle the questions of copyright and liability before politicians create any regulation.
Yet, in all these cases, regulation will help the most powerful defend their vision of the future, and their bank accounts, from the disruption AI might bring. While they might believe they are acting for the good of society, lobbying and hidden agendas play a significant role.
Idea Regulation
The fundamental issue with AI regulation is that it tries to regulate the idea behind AI. Yet, a year ago, you could already modify an AI model to run on a Raspberry Pi. Even an AI video generator requires only a $5,000 investment. Thus, while we can push harmful AI off public servers, we cannot stop the harmful ideas behind it.
Criminals who want to harm or profit don’t care whether something is illegal. Otherwise, outlawing extortion would have swiftly ended ransomware attacks. Consequently, we cannot rely on AI regulation to outlaw ideas and keep our society safe from “bad ideas.” Even worse, we may disagree on what a bad idea is. Today, there are more controversial topics than ever before. From stances on illegal immigration to DEI, our politics are full of them. If an AI generates content in favor of or against DEI, will either output be considered harmful, or is it just providing the user with content and thoughts? Who gets to make that determination?
The Military – An Exception to Regulations
Yet, the issue with regulating AI is even more cynical when you consider the exceptions. In the EU, oversight does not apply to national defense applications and policing. China likewise excludes its governing party and state organizations from its stringent requirements.
Thus, regulation stemming from the overblown fear that AI will help people build nuclear weapons won’t apply to the very organizations with the budget and know-how to create such weapons. Yet, as Stuxnet showed, even top-secret technology used by the intelligence community can leak and cause harm on a broader scale.
The Downfall of AI Regulations
Yet, the downfall of any AI regulation is tightly interwoven with these military exceptions. Today, state-aligned hackers create security problems without any fear of repercussions. To bring the war between Israel and Hamas to the US, Iran attacked US critical and financial infrastructure. Even if the US requested the extradition and prosecution of the criminal hackers involved, Iran would undoubtedly deny the request.
It is only a matter of time until adversarial countries use AI to harm other countries, companies, and individuals. Once that happens, they will extend the same protection to their AI companies and developers that they currently extend to their hackers and military members. Thus, AI regulation will not effectively protect us from the worst of AI.
Trust and Education instead of Regulation
With regulation ineffective at protecting us from the worst of AI, we have to turn to trust and education to help solve the upcoming crisis. We must prepare the next generation to face misinformation and a changing cybercrime landscape.
Without investing in quality education, we will soon be unable to differentiate between what humans created and what AI created. Thus, we might lose the last bit of trust that holds society together. No AI regulation will change this development. It will only make us more transparent to our administrative Big Brother. Instead, let us focus on more productive solutions.