If you have read any recent news about AI, you could not have escaped the inexplicable worry about people using AI to build nuclear weapons. Not a day goes by without someone publishing an article about it. In contrast, wrong results, overblown political influence, and privacy concerns attract far less attention, even though they will have a far more consequential impact on our lives.
Thus, join me in exploring why we should stop worrying about nuclear weapons and focus on the advantages of open-source AI.
The Idiocy of AI and Nukes
The Guardian provides a good overview of the process of making nukes. The article is from 2003 and still better than most descriptions produced by unrestricted AIs. If you want the numbers and the physics and engineering details, head to your local university bookstore and ask for the second-year (US universities) or first-year (rest of the world) physics textbooks. The science and engineering theory behind a modern fusion weapon is relatively straightforward. It is the practice and the precision required that keep hobbyists and most countries from obtaining nuclear weapons. Fun fact: it was precisely this need for precision that made the Stuxnet attack against Iranian nuclear facilities successful; subtly manipulating centrifuge speeds was enough to wreck the enrichment process.
Consequently, attacking open-source AI over its supposed ability to design nukes, killer viruses, or super-ransomware is deeply disingenuous. These projects require technical skills and financial resources far beyond anything an AI can provide. Moreover, nation-states and their proxies, who do have the resources for such projects, can build an AI outside the public view and free of any restrictions. The debate about EU restrictions on AI shows that governments are well aware of the potential for military applications.
The Genuine Risks of AI
When evaluating AI for business use, there are far more significant risks than someone building nuclear weapons. Boards, management, and IT departments need to focus on these instead of getting sidetracked by phantom policy debates.
Hallucinations, misplaced DEI initiatives, conspiracy theories, and plain wrong answers are the more significant risks today. AI companies and regulators should focus on these issues instead; they present an acute danger to our society right now.
Likewise, users should focus their concerns on actual issues. The recent ruling against Air Canada, in particular, shows that the legal risks need more scrutiny and risk management than they currently receive. An AI bot misrepresenting company policies can burden a company with significant costs. Additionally, most suppliers have tight terms and conditions, making it nearly impossible to obtain indemnification beyond what the suppliers choose to cover.
The Open-Source Solution
Given the current risks of AI, it seems strange that companies do not see open source as a solution. Issues like the biases exhibited by Gemini are impossible to hide in open systems. Likewise, if the training material is known, it becomes possible to identify false or outdated information used to train the model. Especially in scenarios like Air Canada's, where the base of a derivative model might have been the culprit, insight into the model's origins could have helped prevent the incident in the first place.
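To illustrate: once a training corpus is published, checking it for stale or false claims becomes a simple scripting task. Here is a minimal sketch in Python, assuming a JSONL corpus with a "text" field; the file name and the flagged phrase are purely illustrative.

```python
import json

# Hypothetical list of claims known to be outdated, e.g. a superseded policy.
OUTDATED_PHRASES = [
    "refunds can be requested retroactively",
]

def find_outdated(corpus_path: str) -> list[tuple[int, str]]:
    """Return (line number, phrase) pairs wherever a stale claim appears."""
    hits = []
    with open(corpus_path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, start=1):
            text = json.loads(line)["text"].lower()
            for phrase in OUTDATED_PHRASES:
                if phrase in text:
                    hits.append((lineno, phrase))
    return hits

if __name__ == "__main__":
    # "training_corpus.jsonl" is a placeholder for the published corpus.
    for lineno, phrase in find_outdated("training_corpus.jsonl"):
        print(f"line {lineno}: contains outdated claim '{phrase}'")
```

A real audit would use fuzzier matching than exact phrases, but the point stands: none of this is possible when the training data is a trade secret.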
Open-source AI models can also help alleviate privacy concerns. With many current models, it is unclear whether the data users enter is used for further training. That may matter little for simple chatbots, but AI is becoming ubiquitous, and we may soon see it in areas like relationship and financial coaching. In neither of these would we want companies to use identifiable data for training. Otherwise, an AI might regurgitate our net worth or social security numbers if prompted correctly.
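One concrete advantage of open-weights models is that they can run entirely on your own hardware, so sensitive prompts never leave the machine. A minimal sketch, assuming the Hugging Face transformers library is installed; the model name is just one example of an open-weights model, and any sufficiently capable local model would do.

```python
# Runs entirely locally: the prompt, including sensitive financial details,
# is never sent to a third-party service. Requires the transformers library
# and enough hardware for the chosen model.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",
)

prompt = "Given a salary of 4200/month, rent of 1300, and a savings goal of 500, suggest a budget."
result = generator(prompt, max_new_tokens=150)
print(result[0]["generated_text"])
```

Because nothing is transmitted, the question of whether the vendor trains on your data simply never arises.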
Lastly, open source can help us avoid the representation issues that have plagued the debate for the last year. With deep insight into a model, we can avoid either extreme. We can ensure that a hiring AI doesn't select candidates on irrelevant criteria, such as deducing that the best candidate is named Steve and played water polo in college. Yet we can also confirm that an image generator doesn't falsify the past by creating diverse Vikings or Nazi soldiers.
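On the hiring side, openness enables simple counterfactual audits: swap an irrelevant attribute and verify the output does not move. A sketch of the idea, with score_candidate standing in for a hypothetical hiring model under review:

```python
# Counterfactual audit: the score must not depend on the candidate's name.
# score_candidate is a stand-in; a real audit would call the open model.
def score_candidate(name: str, degree: str, years_experience: int) -> float:
    # Placeholder scoring logic for illustration only.
    return 0.5 + 0.05 * years_experience + (0.2 if degree == "CS" else 0.0)

def name_invariant(degree: str, years: int, names: list[str]) -> bool:
    """Return True if every name yields the identical score."""
    scores = {score_candidate(name, degree, years) for name in names}
    return len(scores) == 1

print(name_invariant("CS", 4, ["Steve", "Aisha", "Mei", "Carlos"]))  # True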
Trust and Honesty
To sell AI and AI products successfully, customers, politicians, and society at large must trust them. The recent debates about open-source AI are disingenuous and harmful to an honest discussion of the risks and rewards of AI. Unless we put the debate back on a factual basis, everyone will lose. Open-source AI will not enable any country to build nuclear weapons, much less any individual.
Open source, however, will allow us to investigate any AI more deeply, to understand where its recommendations come from and whether an offering violates our principles. Transparency fosters trust, and after the latest missteps, trust is exactly what AI needs to succeed.