
Application of AI for Hiring – Should We Be Concerned?

www.economictimes.indiatimes.com | December 19, 2020

In October 2018, a Reuters report revealed that global e-commerce leader Amazon had quietly discarded an Artificial Intelligence (AI) algorithm it had developed in-house for hiring, after realising that the tool discriminated against women candidates, particularly for technology jobs. The Seattle-headquartered organisation, which had more than 575,000 employees across the globe when the report was published, had been using this contentious programme in its recruiting process for years. For a company renowned for setting benchmarks in the use of advanced technology in its operations, the evidence of gender bias in its AI application must have been uncomfortable, to say the least. While the exposure came as a nasty surprise to many, it lent credence to sceptical anti-AI voices on the other side. The report certainly intensified the existing debate about the effectiveness of AI in hiring.

In fact, the developments at Amazon did not surprise the many experts who had already warned about the grey (or is it dark?) side of the ever-expanding application of AI in hiring. Around the world, the application of AI in the staffing industry is now ubiquitous – and the practice is not limited to large global MNCs like Amazon. Undoubtedly, the human resource departments of organisations and staffing/executive search firms have increased their efficiency and saved a great deal of time in sourcing, screening, and shortlisting potential workers through the application of AI programmes. Organisations across industries and geographies first discovered the value added to their hiring process through advanced Applicant Tracking Systems (ATS), which not only cut significant transactional time from the hiring process but also helped build an effective database for future staffing needs. Within a decade, AI-enabled systems such as automated mail systems, chatbots, and social media scanners became commonplace as essential components of the hiring process. Today, AI-powered tools are even being used to evaluate a candidate’s performance in personal interviews – voice and tone, body language, facial expression, eye movement, and even smile!
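
To make the keyword-driven screening that underpins many ATS platforms concrete, here is a minimal, purely illustrative sketch in Python. The keyword list, sample resumes, and cut-off are hypothetical assumptions for illustration, not any vendor’s actual algorithm.

```python
# Minimal sketch of keyword-based ATS-style screening (illustrative only).
# The required keywords, sample resumes and cut-off are hypothetical.

REQUIRED_KEYWORDS = {"machine learning", "python", "sql"}

def keyword_score(resume_text: str) -> float:
    """Fraction of required keywords found verbatim in the resume text."""
    text = resume_text.lower()
    hits = sum(1 for kw in REQUIRED_KEYWORDS if kw in text)
    return hits / len(REQUIRED_KEYWORDS)

resumes = {
    "Candidate A": "Built machine learning pipelines in Python; wrote SQL reports.",
    "Candidate B": "Developed predictive models in R and maintained data warehouses.",
}

CUT_OFF = 0.5  # candidates at or above this score are shortlisted
for name, text in resumes.items():
    score = keyword_score(text)
    print(f"{name}: score={score:.2f}, shortlisted={score >= CUT_OFF}")
```

This kind of mechanical filtering is exactly what removes transactional time from the process: a few lines of matching logic can rank thousands of profiles in seconds. Note, though, that Candidate B may be just as capable as Candidate A but describes similar experience in different words – a limitation taken up below.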

So, with all these developments linked to AI in hiring – where do they leave recruiters? Acting as a critical link between an eligible candidate and an organisation looking to fill a position, a recruiter essentially attempts to achieve two objectives: (a) an optimum person-job fit, and (b) an ideal person-organisation fit, given the context and constraints. To achieve (a) and (b), the hiring process should ensure that organisations (within their limited resources) neither select the wrong candidate (who would underperform later) nor reject the right candidate (who could have performed well, if given a chance!). Essentially, recruiters are trying to predict performers vis-à-vis non-performers. Is AI necessarily making this prediction – and hence the hiring decision – better?

Undeniably, smart algorithms have enabled a much faster, easier and, in the long run, cheaper way of screening candidate profiles and communicating with potential hires. But are we certain that these predominantly keyword-driven programmes are not missing out on some excellent candidates? What about CVs/resumes not well customised for a specific AI platform? What about eliminating bias from the screening process? Since AI programmes essentially run on coded human input, learn from past data, and evolve through stored experience, they are not exactly free of bias, even though the companies selling them would like us to believe so. Plenty of evidence and findings from different corners of the world indicate that AI reinforces human bias. Critics have even argued that, in a way, some AI programmes used for hiring legitimise and effectively perpetuate bias and discrimination.
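
To see how bias can be learned rather than removed, consider this toy sketch. The historical decisions and CV tokens below are entirely hypothetical, and real systems are far more complex, but the mechanism is the same: a score fitted to past, human-made decisions simply imitates them.

```python
# Toy illustration (hypothetical data) of a screening score "trained" on past
# human decisions. If those decisions were biased, the learned score is too.

from collections import defaultdict

# Each record: (tokens appearing on the CV, past human decision: 1 = shortlisted)
history = [
    ({"coding", "robotics"}, 1),
    ({"coding", "chess"}, 1),
    ({"coding", "women's", "robotics"}, 0),  # comparable CVs, rejected in the past
    ({"coding", "women's", "chess"}, 0),
]

# "Training": the shortlist rate per token becomes a naive learned weight.
totals, hits = defaultdict(int), defaultdict(int)
for tokens, decision in history:
    for t in tokens:
        totals[t] += 1
        hits[t] += decision
weights = {t: hits[t] / totals[t] for t in totals}

def score(tokens: set) -> float:
    """Average learned weight of a CV's tokens (0.5 for unseen tokens)."""
    return sum(weights.get(t, 0.5) for t in tokens) / len(tokens)

print(score({"coding", "robotics"}))             # 0.50
print(score({"coding", "women's", "robotics"}))  # about 0.33, penalised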
 
Going beyond the question of in-built bias, crucial questions of data security and privacy gain prominence. Using AI to extract information from a candidate’s social media accounts for later use in staffing decisions – doesn’t that amount to snooping? Also disturbing is the more recent phenomenon of conducting and evaluating interviews through AI programmes. Possible legal and obvious ethical concerns aside, many job applicants feel uncomfortable with, even repulsed by, the idea of “being judged by a robot”. Most importantly, how can we be certain that AI-powered programmes make better hiring decisions than an expert recruiter?

Making the right call on hiring consistently is an exceedingly challenging task even for the most competent and experienced of recruiters. AI is certainly making the hiring process more efficient – but is it making it more effective too? In an important 2018 article, Huang and Rust classified AI in service into four categories – Mechanical, Analytical, Intuitive, and Empathetic – in increasing order of sophistication. As the pervasive application of AI in every organisational process looks inevitable, it is perhaps wise to proceed with caution and care when working with the higher (more sophisticated) orders in the hiring process.

-Prof. Diganta Chakrabarti, Associate Professor – Human Resources, FLAME University

(Source: https://hr.economictimes.indiatimes.com/news/trends/ai-in-hr/application-of-ai-for-hiring-should-we-be-concerned/79806588)