AI was the overwhelming tech topic of 2023. We saw everything from a race to build better Large Language Models to the reckoning that businesses must deal with the disruption AI brings. Yet the sector is still in its infancy. From practical applications to the battle between open and closed AI models, there are many areas where we need to see great strides in 2024.
Join me in exploring five ways AI might break our collective humanity and how we can avoid them.
Timewaster: Better AI Spam
AI has promised us a better life with more time for the things that are meaningful to us. Secretary services, document summarization, and AI editors have helped improve our performance and learning. Yet spam, scams, and intrusive ads have benefitted just as much from the availability of natural language processing. The ad business will see even more AI usage and targeted ads in the coming year.
To counter this, we should become more aware of what we share and which data should be available for anyone to train their AI on. Thankfully, controversies like the change in Zoom’s Terms and Conditions will keep awareness of the issue alive.
There is also a sliver of hope that, when done well, companies like Market Intend can help salespeople narrow down opportunities and reduce the number of cold messages going around.
The Layoff Excuse
We have seen numerous doom-and-gloom predictions about layoffs. So far, companies have announced hiring freezes or changed practices to account for AI. However, the capabilities of the tools available today don’t warrant any changes in hiring. How many of us had secretaries summarizing meetings in 2022? How many of us look back at airline or telco customer service interactions and say, “They solved my problem far too quickly and efficiently”?
AI is a great scapegoat at the moment. The technology isn’t there yet to warrant changes in hiring or to justify layoffs. Business leaders, however, use it as an excuse.
If this continues, we might see politicians respond with AI regulations that hurt technical development and stop us from making even greater leaps and bounds.
Closed Source AI: A Lack of Accountability
Hand in hand with accountability for the stories businesses tell goes responsibility for AI results. We have seen the errors LLMs can spew out, and self-generating AI that learns from the cesspools of the internet might not be any better. The closed nature of many of the leading LLMs hides potential conflicts of interest and questionable data sources. Yet the marketing and financial power of big tech companies has catapulted closed-source LLMs to the forefront of the debate.
We need to ensure that open-source alternatives remain competitive. Regulations must account for the different development and distribution models of open and closed-source AI. Users must stand against source-available licenses and undisclosed training data.
Only if we know about the conflicts of interest behind an AI can we tell whether someone is manipulating us.
The Messiah is Coming
2023 saw several “crypto bros” arrested for and charged with financial crimes. A cult of personality, often based on effective altruism or accelerationism, was part of these crooks’ meteoric rise and hard fall.
The firing and re-hiring of Sam Altman should serve as a warning that personality cults and the notion of irreplaceable leaders are coming to AI as well.
Couple that with the fact that the same group of individuals who tried to steer the conversation around crypto is now moving to AI, and the warning grows even stronger: personality cults might replace sound decision-making.
We should take a strong stance against this. Otherwise, it will only be a matter of time until an FTX-sized event occurs in AI. Instead of being blinded by questions about the greater good of AI, let us focus on practical applications and technological feasibility.
Unequal Customers: AI Weapons and Regulations
While most personality cults in IT focus on altruism or the greater good, regulations march to a different tune. With cyberattacks having reached the status of weapons of war, the fight over AI’s place in combat is in full swing. The upcoming EU regulation on AI stalled because lawmakers couldn’t agree on exceptions for military applications.
There is no stopping AI’s use in weapons. However, we should demand that humanitarian applications receive as much funding and attention from politicians as weapons do. After all, if we focus on improving lives, we are less likely to need weapons.
AI won’t Destroy Humanity – But Humans Might
Ultimately, even if AI disappeared today, humans would find new technologies and pursuits with which to end humanity. Consequently, it is on each of us to ensure that technological advances serve a better purpose than making us miserable.
Over the past few years, technology has helped us connect better with our friends and families, build skills, and receive coaching. Let us unite to ensure that AI helps us advance rather than destroy each other.