Hiring is an essential but time-consuming, expensive and often tedious process with which every company must contend. Looking for ways to cut down on exhausting searches, an increasing number of companies are turning to artificial intelligence (AI) systems to identify qualified candidates more quickly. This can prove especially beneficial for firms that need to cut through a huge influx of applications.
However, there is growing concern that such AI screening systems may be perpetuating some forms of discrimination, particularly on the basis of age, gender and race. This is despite tech companies' insistence that their systems are designed to root out long-standing human biases.
Last year, Amazon tested machine-learning techniques integrated into its recruiting platform. This was supposed to be a “smart tool” that could help managers pick ideal job candidates faster. However, after being fed a decade’s worth of resumes, the system began showing a clear bias toward male candidates. In troubleshooting, the company's engineers figured out that because most of the historical resumes came from men, the system inferred that male candidates were more desirable and downgraded the ratings of female applicants accordingly. Engineers addressed this by editing the programs to treat gendered terms neutrally. However, that doesn’t mean these systems won’t still prove discriminatory – now and in the future. (Amazon decided to ax the project before fully launching it, perhaps realizing the potential legal liability landmine.)
It’s no leap to surmise that similar discriminatory patterns could soon emerge elsewhere.
If, for instance, a firm has a general tendency to hire fresh-out-of-college candidates, these systems could easily begin trending toward a younger-workforce bias.