
Daily Sociology Blog: Getting a Job: Working with AI


Karen Sternheimer

I have been fortunate to have held my job for more than twenty years, which means I have never looked for a job in the twenty-first century. If I were looking now, the process would be much different from what it was in the 1990s. Monster.com, the first online resume database, did not launch until 1999, and while there may have been job postings online, old-fashioned mail remained the main way to apply for jobs for years after that.

In the twentieth century, writing a good resume was the most important part of applying for a job. That is still true today, but now an algorithm will most likely be the first to “see” your resume. In theory, this is meant to streamline the hiring process and perhaps even surface better candidates. Even a first interview might be conducted by video, analyzed by software that reads the candidate’s facial expressions and the keywords they use.

This practice represents what Max Weber might have considered a form of rationalization, a means of increasing efficiency in large bureaucratic organizations. Contrast this with the time-consuming process our department goes through when hiring: a committee of three or more faculty members reviews hundreds of applications for a single faculty position. It is a long process that draws on the time of several people, all of whom have many other responsibilities.

Some candidates may be immediately disqualified for lacking the minimum qualifications; having gone through the applications myself, I know that people occasionally apply for sociology professorships without a graduate degree, or even a bachelor’s degree. For the most part, though, we read each qualified candidate’s full packet of materials, including letters of recommendation and publications.

Although this work is time-consuming, we have never considered handing it over to an algorithm, even just to start the process. According to a Harvard Business Review report, hiring algorithms can be deeply problematic:

To attract applicants, many employers use algorithmic ad platforms and job boards to reach the most “relevant” job seekers. These systems, which promise employers more efficient use of recruiting budgets, often make highly superficial predictions: not who will succeed in the role, but who is most likely to click on that job ad.

These predictions can lead to job ads being delivered in ways that reinforce gender and racial stereotypes, even when employers have no such intent. In a recent study with colleagues at Northeastern University and USC, among others, we found that broadly targeted Facebook ads for supermarket cashier positions were shown to an audience that was 85% women, while ads for jobs with taxi companies went to an audience that was roughly 75% Black. This is a quintessential case of an algorithm reproducing bias from the real world, without human intervention.

The World Economic Forum has noted similar issues:

African-American names have been shown to be systematically discriminated against in the U.S. labor market, with white names receiving more callbacks for interviews. But bias arises not only from human error: increasingly, the algorithms recruiters rely on are not neutral either; rather, they reproduce the very human biases they are supposed to eliminate. For example, an algorithm that Amazon used between 2014 and 2017 to screen job applicants penalized resumes containing words like “women’s” or the names of women’s colleges.
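To make that mechanism concrete, here is a minimal, hypothetical sketch in Python of how a keyword-weighted screener can encode bias it learned from skewed historical data. The term lists, weights, and scoring function below are invented for illustration and do not represent Amazon’s actual system.

```python
# Hypothetical keyword-weighted resume screener.
# All terms and weights here are invented for illustration only;
# they do NOT represent any real company's system.

# Weights "learned" from a decade of mostly male hires: terms common on
# past hires' resumes score positively, other terms score negatively.
TERM_WEIGHTS = {
    "executed": 0.5,   # action verb frequent on past (male-dominated) hires' resumes
    "captured": 0.5,
    "women's": -1.0,   # e.g., "women's chess club captain"
}

def score_resume(text: str) -> float:
    """Sum the weights of all listed terms that appear in the resume."""
    text = text.lower()
    return sum(weight for term, weight in TERM_WEIGHTS.items() if term in text)

# Two equally qualified resumes receive different scores:
print(score_resume("Executed a data migration; captain, women's chess club"))  # -0.5
print(score_resume("Executed a data migration; captain, chess club"))          #  0.5

# No line of code says "penalize women," yet the effect is the same:
# the bias lives in the historical data the weights were learned from.
```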

These forms of artificial intelligence, or AI, reflect and even reinforce existing biases in the workforce. And they don’t end with the hiring process. As a recent Los Angeles Times column discussed, Uber and Lyft drivers have been “deactivated” (app-speak for fired), presumably by an algorithm:

A new survey of 810 Uber and Lyft drivers in California shows that two-thirds have been deactivated at least once. Of those, 40% of Uber drivers and 24% of Lyft drivers were permanently terminated. A third never received an explanation from the gig app companies.

Drivers of color saw higher deactivation rates than white drivers: 69% versus 57%, respectively. The vast majority of deactivated drivers (86%) experienced financial hardship afterward, and 12% lost their homes.

Deactivation hit even the most experienced drivers: the report by Rideshare Drivers United and the Asian Law Caucus found that deactivated drivers had driven an average of 4.5 years for Uber and four years for Lyft.

The World Economic Forum article concludes with suggestions for workers on how to create a resume that reads well to AI, but that means workers must somehow outsmart the algorithm. The title of the linked article, “AI-assisted hiring is biased: Here’s how to overcome it,” implies that it can be overcome. That seems unlikely. Beyond their sheer complexity, these algorithms are essentially proprietary; translation: job seekers usually don’t know exactly what they are screening for.

Of course, artificial intelligence is not going away, and it can improve our lives. New research into using AI for medical diagnoses could save lives, or it could reflect existing health care inequities. Rather than blaming artificial intelligence itself, the solution lies with humans: we need to examine systemic inequities and consider how they can be removed from algorithms. Maybe someone will create an algorithm for that.
