The past year has seen AI enter many different areas of our businesses. From copywriting to fitness coaching, there is almost no area AI hasn’t touched. Human resources and recruiting departments are no exception, and many now use AI as part of their talent strategies. The appeal plays into a common perception: in a world of biases and prejudice, machines are the neutral arbiters. Yet AI learns from our past and the behaviors ingrained in it, so we have to develop a new awareness of how our past decisions as a society will influence the future. Let us look at three common problems with AI in recruiting and why boards and HR departments should be wary of including it in their talent strategy.
Steve, Who Played Water Polo in College
One of the biggest problems with using AI is that we don’t know what makes an excellent entry-level employee. Someone who graduated from high school or college a month ago might have shown grit in the classroom, overcome problems, or been particularly studious, yet we don’t know how that translates into the workplace. Worse, every recruit comes with hundreds of associated data points that may or may not relate to future job performance. Thus, the data might very well show that an employee named Steve, who played water polo in college, is the best choice for any position.
However, it might also be a simple coincidence. As the saying goes: “Correlation doesn’t imply causation.” Yet, if we don’t understand the associations between data points and the model the AI builds from them, we might fall victim to these random artifacts. While they are fantastic to laugh about after work, they are less entertaining if the company is on the line for hiring mishaps.
Without an understanding of proper data sanitization and the random noise involved, any AI might as well run on last week’s horoscopes to predict performance.
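To see how quickly such artifacts appear, consider a minimal sketch in plain NumPy. The data here is entirely hypothetical, not a real recruiting set: with a few dozen hires and hundreds of recorded attributes, some attribute will correlate strongly with performance by pure chance.

```python
import numpy as np

rng = np.random.default_rng(42)

n_hires, n_features = 40, 500  # few hires, many recorded attributes

# Hypothetical data: every attribute is pure noise, unrelated to performance.
attributes = rng.normal(size=(n_hires, n_features))
performance = rng.normal(size=n_hires)

# Correlate each attribute with observed performance.
corr = np.array([np.corrcoef(attributes[:, j], performance)[0, 1]
                 for j in range(n_features)])

# With 500 random attributes and only 40 hires, the strongest
# correlation looks impressive despite meaning nothing.
best = int(np.argmax(np.abs(corr)))
print(f"Attribute #{best} correlates at r = {corr[best]:+.2f} "
      "with performance, despite being random noise.")
```

A model built naively on this data would latch onto that attribute, whether it is a first name or a college sport, even though it predicts nothing out of sample.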
Past, Present, and Future in Recruiting
We train artificial intelligence on past data. Yet, past data often includes all past inequalities, even if we don’t see them today.
Take the blatant sexism of the 1960s: it was almost impossible for women to start a career as a staff accountant. Consequently, the limited supply of female entry-level employees meant significantly fewer female CFOs in the late 1980s and 1990s, which in turn left the 2010s with fewer female board members.
The possibility of having a career 50 years ago still shapes the top levels of any company today. Yet, feeding this data into an AI might teach it a selection bias: that men are more qualified.
However, this is only the simplest form of the problem. As humans, we look for role models and mentors to emulate. Thus, the absence of female CFOs in the 1990s might also suppress the number of female entry-level accountants today. If we extrapolate training data without understanding it, AI might conclude that there shouldn’t be much growth in female finance staff.
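A minimal sketch with scikit-learn illustrates the mechanism. The data is synthetic and the column names are invented for illustration: a model trained on historical hiring decisions that favored men at equal skill will score a woman lower than an equally skilled man.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Hypothetical history: skill is distributed identically across genders,
# but past hiring decisions favored men at equal skill.
is_male = rng.integers(0, 2, size=n)
skill = rng.normal(size=n)
hired_historically = skill + 1.0 * is_male + rng.normal(size=n) > 1.0

X = np.column_stack([skill, is_male])
model = LogisticRegression().fit(X, hired_historically)

# Two equally skilled candidates, male vs. female, get different scores:
candidates = np.array([[1.0, 1], [1.0, 0]])
print(model.predict_proba(candidates)[:, 1])  # the male candidate scores higher
```

Dropping the gender column does not automatically fix this: proxies such as career gaps, hobbies, or even first names can carry the same signal, which is why understanding the training data matters more than trusting the model.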
As individuals, we carry similar associations in our minds and are often unaware of them. No one would assume that such a person is a perfectly fair judge of character. Yet, our individual and societal prejudices live on in the training data.
Insufficient Room for Growth
One hundred years of Toastmasters shows the importance of personal growth and professional development. Yet, in the past, most development opportunities supported a straight career trajectory. Switching careers between different fields and unrelated industries has only become a mainstay of our economy in the past 20 years.
At the same time, the Internet has opened up significantly more career development opportunities and tools. From online MBAs to video courses in obscure yet highly in-demand fields, the rise of easy content delivery has transformed the opportunities to learn.
So far, AI cannot model a person’s wishes and dreams from solid data. It is therefore hard to predict how much someone will invest in their personal growth and where they will direct that energy. That leaves little room for an AI to “take a chance” on an applicant.
Unwavering Belief in Unbiased Computer Recruiting
We often believe that there is no bias in computers: that the machine is cold and driven purely by logic. Yet, it is humans who design these systems, and their experience and worldviews influence the logic. With AI, it isn’t just the design; the model also internalizes all of our biases through the training data. While we might be able to remove the most direct faults, our world is still the result of the choices and prejudices of past centuries.
Unless we can build that awareness into the machine or create artificial training data free of biases, we should never see AI as an impartial judge of human character. Consequently, we shouldn’t rely solely on AI to decide whether a prospective employee is an excellent cultural fit for our companies. Otherwise, we might end up with a water polo team of Steves.