Who here would like to share the content of their last therapy session, doctor appointment, or the last time they were in a lawyer’s office? There’s no need to feel shy or hide anything. We are friends. We are in a safe space where we can openly discuss our experiences. Remember, if you didn’t do anything wrong, there’s nothing to hide.
Have you ever noticed how we’re often more comfortable sharing personal information with a computer than with a stranger? We tend to view computers as impartial, unbiased entities, but is this perception accurate? Let’s delve into this intriguing question and explore the reality of our relationship with computers.
The Creator
Consider a calculator, whether a physical device or an app on your phone or computer. It can swiftly handle complex calculations for you. But you can also take pen and paper, redo the calculations, verify them, and hold the system accountable. Because mathematics is a realm of clear right and wrong, establishing accountability is straightforward, and there is no room for a hidden agenda.
Today, AI is giving our community new challenges. Whenever software assists humans in making decisions based on shared data and inputs, we must ensure that the human is still in the driver’s seat. Transparency, portability, and community will increase in importance as we decide to hand over menial tasks to computers. Only if we know what is happening within a system can we ensure that AI works for the user’s benefit, not its creator’s hidden agenda.
In the real world, we have seen the errors LLMs can spew, and generative AI that learns from the internet’s cesspools might not fare any better. After all, who can forget Google’s magnificent collection of diverse Nazi soldiers? Or the advice to glue the cheese onto your pizza? Or Grok, X’s chatbot, accusing NBA players of vandalism?
These are examples of agendas that weren’t obvious to users, from promoting diversity to moving fast and breaking things. Yet each promoted a different agenda and use case for the product: an agenda the AI’s creators had built into the system.
Terms and Conditions for Sharing
Another issue with sharing data is how quickly the agenda can change, as reflected in updated terms and conditions. At the beginning of the genAI hype, Zoom got into trouble because its terms suddenly allowed the company to record any meeting and use it for AI training. After the predictable user backlash, they backpedaled and restricted the clause to users of their AI products.
However, when Adobe made a similar move in 2024, even the backlash could not change the underlying agenda: the company had decided to transform itself from a provider of software and services for creators into a genAI company, no matter what. Like Zoom, Adobe backpedaled on its terms after the outcry, but the strategic direction remained.
Sharing as a Risk to our Businesses
Given the hidden agenda and changing circumstances, sharing data outside the company becomes a significant risk management problem. Management and boards must set specific boundaries and ensure that IT and legal departments keep up with any changes.
At the same time, human resources departments must ensure that training and education materials are up to the task. Given that most AI systems interact with employees and customers, technical solutions alone will not be sufficient. The human resources strategy must adapt to the current situation and ensure everyone is on the same page.
That doesn’t just include direct interactions with AI, such as using a chatbot to generate or refine content. It must also cover situations such as posting data online or in discussion groups.
A few years ago, no one gave much thought to what would happen to Reddit or Stack Overflow data. Today, those discussions are hot commodities for training language models on conversation and programming. For better or worse, we have to adapt to the situation: in the future, we cannot assume that services won’t use our data for purposes we never intended.
Stay in Control of Sharing
No matter where AI takes us, we have to get used to the fact that our data is no longer our own. Companies and service providers may treat anything we share with a computer as AI training material. Since we all have trade secrets, confidential data, and unfinished work, we must reconsider the trust we place in software. With hidden agendas and changing terms of service, it is becoming essential to verify that our faith in software and service providers is justified, and that our policies and their enforcement keep pace with the changing IT landscape.