Kids have always been susceptible to influence and advertising. They look up to us adults and trust us to have their best interests at heart. Yet, with advances in AI, the barriers to influencing children are falling rapidly.
Many businesses are already using AI to improve their marketing and product analysis. From the first AI influencers to personalized messages, we adults already get bombarded with computer-generated content. It is only a matter of time until custom AI applications that target children and teens become available.
Let’s examine their potential and why businesses should be careful when utilizing them.
How AI Targets Kids
Before we jump into the dangers for businesses, let us first look at how businesses use AI today to target children.
Influencers are nothing new; kids unboxing the latest toys are a staple on YouTube. The risk with AI is that it lets us speed up the process and hyper-target an unaware audience. Fake images go further, captivating children by adding cartoon characters or forgoing the live human altogether. Consequently, businesses can cultivate a susceptible audience that doesn’t fully comprehend the idea behind influencers.
Beyond influencers, many kids-focused apps and websites live off the advertising revenue they bring in. After all, kids themselves cannot enter contracts or make credit card purchases. Consequently, the higher a child’s engagement with an app, the higher the revenue from ad providers.
Large Language Models can create content on the fly, increasing the replay value of games and the re-engagement potential of educational apps. After all, if you don’t know what to expect the next time you open an app, you have an incentive to reopen it. Yet, with hallucinations and errors all too common, it is hard to argue that sufficient safeguards are possible.
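To make the safeguard problem concrete, here is a minimal sketch. The `call_llm` function is a hypothetical stand-in for whatever model endpoint an app would use, and the blocklist filter mirrors the kind of safeguard many apps ship:

```python
# Minimal sketch of on-the-fly content generation with a naive safeguard.
# `call_llm` is a hypothetical stand-in for a real, hosted model endpoint.

BLOCKED_PHRASES = {"laundry pods", "knife", "medicine"}


def call_llm(prompt: str) -> str:
    # Placeholder: a real implementation would call a model here,
    # and its output would vary on every call.
    return f"Once upon a time... (generated from: {prompt})"


def is_safe(text: str) -> bool:
    # Blocklist check: only catches the exact phrases its authors anticipated.
    lowered = text.lower()
    return not any(phrase in lowered for phrase in BLOCKED_PHRASES)


def fresh_content(topic: str) -> str:
    # New content on every app open is exactly what drives re-engagement.
    draft = call_llm(f"Write a short kids' story about {topic}.")
    if not is_safe(draft):
        return "Fallback: a pre-approved story from the static library."
    return draft


print(fresh_content("a friendly dragon"))
```

The catch: a model that rephrases a blocked idea, say “colorful soap candies” instead of “laundry pods”, passes the filter unchanged, which is why blocklist-style safeguards tend to leak.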
Both advertising approaches rely heavily on behavior analytics. Yet, the analytics themselves are a use of kids’ data: predicting future interactions and revenue streams from past actions is a great way for companies to mitigate their risks.
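As a rough illustration, here is a minimal sketch of that kind of engagement prediction. The features and data are entirely synthetic; a production system would ingest far richer event streams from the app’s analytics pipeline:

```python
# Minimal sketch: predicting re-engagement from past session behavior.
# Feature names and data are hypothetical, for illustration only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic per-user features: [sessions last week, avg minutes per session,
# ads tapped]. The label marks whether the user reopened the app.
X = rng.uniform(0, 10, size=(500, 3))
y = (0.5 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 2, 500) > 4).astype(int)

model = LogisticRegression().fit(X, y)

# Score a new user: a high predicted probability of returning makes that
# child's attention more valuable to ad providers.
new_user = np.array([[7.0, 4.5, 2.0]])
print(f"Re-engagement probability: {model.predict_proba(new_user)[0, 1]:.2f}")
```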
Danger: Faulty Data Models
Remember the AI that decided the best hires were named Jared and played lacrosse? As the amount of data grows, we create ever more detailed profiles. We also create those profiles at a time when a child’s brain isn’t fully developed. Yet, trusting the machines, we will keep assuming there are no biases and that, with just enough data, computers can predict every developmental change.
Consequently, we will rely on faulty data models and predictions to make choices about college admissions and student loans. Just because some kid used an app in a particular way doesn’t guarantee success later in life. We have the data; why not use it and accept that there are always some outliers? At least until the issue gets too big to ignore.
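To see how a model picks up a trait like a name or a hobby in the first place, consider a toy sketch with entirely synthetic data: a trait that merely correlates with the real driver in a small sample ends up carrying real predictive weight.

```python
# Minimal sketch of how a model latches onto a spurious trait.
# All data is synthetic; the feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 200

# The true driver of the outcome...
study_hours = rng.uniform(0, 10, n)
# ...and an irrelevant trait that happens to correlate in this small sample.
used_app_daily = (study_hours + rng.normal(0, 1, n) > 5).astype(float)
outcome = (study_hours + rng.normal(0, 1, n) > 5).astype(int)

X = np.column_stack([study_hours, used_app_daily])
model = LogisticRegression().fit(X, outcome)

print("Learned weights [study_hours, used_app_daily]:", model.coef_[0])
# The second weight is nonzero: the model "believes" daily app usage
# predicts success, even though it was only a proxy in this sample.
```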
Danger: AI Regulation
“Think of the Children!” has already become a trope politicians use to push their policies. To protect our kids, they push policies like restricting access to books. Building AI that targets kids opens the whole sector to significantly more regulation. Just think of the following: a political organization creates a fake video of a governor misbehaving, and a kids’ app creates a fake video of a child eating laundry pods. Of course, the governor will call for a ban on all AI-created videos, ostensibly to protect children from eating laundry pods.
Danger: Consumer Trust
A significant danger comes from the loss of consumer trust. Pictures of kids in danger guarantee solid ratings for news media. Thus, any problem will be overblown and discussed ad absurdum by the talking heads on TV.
This discussion will ensure consumers lose trust in AI and the companies creating it. All the consumer fields where AI might be a force for good grind to a halt as users lose faith in the technology. The loss of confidence, in turn, leads consumers to alter their behavior and spend less with the offending companies. The back-and-forth about Bud Light and its cost to the brand is an excellent example of what can happen when a brand’s actions don’t align with its customers or mission. And that involved a product targeted at adults; a similar boycott of a children’s company would be significantly worse.
AI and Kids: An Explosive Future
AI has worsened many of the dangers information technology brought into the world. Influencers, privacy violations, and impersonations aren’t new phenomena, yet technological advances have exacerbated them, turning time-consuming projects into afternoon activities. With AI, it becomes possible to hyper-target individuals for 3 cents apiece. It is up to the creator whether to use the technology for good or nefarious purposes.
Yet, using AI to target children will open our businesses to significant moral criticism. As past debates about DEI have shown, this comes with a considerable risk of regulation and public outcry. We can easily avoid that risk by making the right decision and limiting the interaction between AI and children.