
The First Word: Uncovering bias in AI and cultivating inclusive hiring practices

The influence of technology on both enterprises and employees is growing more profound and pervasive by the day. As businesses race to adopt fast-evolving technologies such as generative AI, conversations about ethics, fairness, and equity in their use are often overlooked. The human resources (HR) and talent acquisition landscape is no different, as organisations pivot from traditional hiring practices to automated ones in pursuit of strategic improvements.

The question is no longer whether your company should be using AI in hiring: per Gartner, 81% of HR leaders have explored or implemented artificial intelligence (AI) solutions to improve process efficiency within their organisations. With such a vast majority of companies already on board, it is crucial to consider whether these technologies are reliable enough to instil the fairness and equity that employees consider non-negotiable today. One of the most revealing aspects of this technological introspection is that biases are inevitably encoded into AI systems at the programming level. This realisation prompts us to look closely at the interplay between technology and human psychology, particularly in hiring decisions.

Biases Embedded in AI: A Hidden Threat

AI programs have a proven track record of enabling faster, more accurate decision-making across business functions. Complications seep in, however, when complex factors such as pay, prejudice, equity, preference, and law are combined with data and algorithms to shape an employee’s future. Given global employment patterns, in which organisations have favoured men and/or white individuals for most roles across industries, training AI algorithms on such data can echo the erroneous hiring decisions of the past. Furthermore, most of these programs rely on natural language processing (NLP) models trained on the words or vocal sounds of native English speakers. As a result, poorly worded job descriptions can exclude qualified candidates, especially when an applicant’s own language differs from that of the core database or carries different biases.
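
To illustrate how wording alone can skew a pipeline, here is a minimal sketch that flags gender-coded terms in a job posting. The word lists are illustrative placeholders, not a validated lexicon.

```python
# Minimal sketch: flag gender-coded wording in a job description.
# The word lists below are illustrative placeholders, not a validated lexicon.

MASCULINE_CODED = {"aggressive", "dominant", "rockstar", "ninja", "competitive"}
FEMININE_CODED = {"nurturing", "supportive", "collaborative", "empathetic"}

def audit_job_description(text: str) -> dict:
    """Return the gender-coded terms found in the posting."""
    words = {w.strip(".,;:!?").lower() for w in text.split()}
    return {
        "masculine_coded": sorted(words & MASCULINE_CODED),
        "feminine_coded": sorted(words & FEMININE_CODED),
    }

if __name__ == "__main__":
    posting = "We want an aggressive, competitive rockstar to lead the market."
    print(audit_job_description(posting))
    # {'masculine_coded': ['aggressive', 'competitive', 'rockstar'], 'feminine_coded': []}
```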

Beyond encoding bias, AI programs fail to account for societal biases. For instance, tools that claim to measure tone of voice, expressions, and a candidate’s overall personality to gauge how culturally “normal” they are may exclude candidates with disabilities, or anyone who does not fit what the base algorithm deems a typical candidate. Disabled candidates face an added disadvantage with such tools: they may be forced to disclose their disabilities before the interview process in order to receive the necessary accommodations.

Any lack of appreciable diversity in the program’s data set, across any of these parameters, can therefore lead to flawed hiring results.

Navigating Bias Mitigation

As AI adoption picks up pace across HR and talent acquisition (TA) operations globally, removing bias from the equation will require a more hands-on approach from recruiters. Starting at the source is key: recruiters must thoroughly audit the data behind their AI algorithms and ensure it reflects a range of ages, genders, ethnicities, and experiences (a minimal sketch of such an audit appears below). Including a diverse pool of individuals at the algorithm development stage and instilling transparency through explainable AI (XAI) tools can drive better results as well. XAI tools provide insight into how and why an AI tool reached a particular decision, exposing the system’s underlying logic. This helps identify discriminatory patterns and enables employers to course-correct promptly.
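
To make the audit concrete, here is a minimal sketch that checks the demographic balance of a set of past-hire records. The field names, categories, and the 20% floor are assumptions for illustration, not a recommended standard.

```python
from collections import Counter

# Minimal sketch: audit the demographic balance of a hiring data set.
# Field names and the 20% floor are illustrative assumptions, not a standard.

def representation_report(records: list[dict], field: str) -> dict:
    """Share of records per category for one demographic field."""
    counts = Counter(r.get(field, "undisclosed") for r in records)
    total = sum(counts.values())
    return {category: count / total for category, count in counts.items()}

def flag_imbalance(report: dict, floor: float = 0.2) -> list[str]:
    """Return categories whose share falls below the floor."""
    return [category for category, share in report.items() if share < floor]

if __name__ == "__main__":
    past_hires = [
        {"gender": "female"}, {"gender": "male"}, {"gender": "male"},
        {"gender": "male"}, {"gender": "nonbinary"},
    ]
    report = representation_report(past_hires, "gender")
    print(report)                  # {'female': 0.2, 'male': 0.6, 'nonbinary': 0.2}
    print(flag_imbalance(report))  # [] here; shares below 0.2 would be flagged
```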

Striking a balance between AI and human involvement in hiring decisions is crucial to mitigating bias while keeping the process fast and efficient. While AI helps narrow the candidate pool against pre-decided criteria, the final hiring decision should involve the organisation’s HR managers. They can compensate for AI’s inability to exercise empathy and subtle judgement in the current social context, and offer a more holistic view of the candidate. Human intervention is also critical to building a diversity, equity, and inclusion (DEI) focused culture on office floors, one that ensures discriminatory practices are removed from the hiring process, especially the parts led by AI.

Fighting Fire with Fire

Cases of AI missteps in recruitment and hiring are rising at a concerning rate across organisations globally. That said, there are real benefits to involving AI if we proceed responsibly. Ethical AI plays a significant role in removing unconscious human biases during screening once it is trained to judge candidates solely on relevant skills and experience, rather than personal background or identity. Remaining loopholes can be closed by prioritising blind assessments that do not reveal the candidate’s name, gender, or other demographic information (sketched below). This generates value for both the job seeker and the employer.
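
One way to picture a blind assessment is the sketch below, which strips identifying fields from a candidate record before it reaches a screener. The field names are hypothetical; real applicant-tracking schemas will differ.

```python
# Minimal sketch: redact identifying fields so screeners see skills, not identity.
# The field names are hypothetical; real ATS schemas will differ.

IDENTIFYING_FIELDS = {"name", "gender", "age", "photo_url", "nationality"}

def blind_candidate(record: dict) -> dict:
    """Return a copy of the record with identifying fields removed."""
    return {k: v for k, v in record.items() if k not in IDENTIFYING_FIELDS}

if __name__ == "__main__":
    candidate = {
        "name": "Jane Doe",
        "gender": "female",
        "skills": ["python", "sql"],
        "years_experience": 7,
    }
    print(blind_candidate(candidate))
    # {'skills': ['python', 'sql'], 'years_experience': 7}
```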

AI can also benefit recruiters by taking the reins on transactional tasks with low value-add, freeing time to be reinvested in human interactions. For instance, traditional recruitment workflows rely on humans reviewing CVs and application forms before progressing shortlisted candidates to the next step. Automation-driven alternatives that leverage tech and AI reduce the time recruiters spend on these steps and enable candidates to complete multiple stages and receive real-time feedback. This approach, known as ‘always on’ recruiting, delivers a better experience for job seekers.

A Diverse and Inclusive Workforce through AI

The potential benefits of AI are clear and material. But as with any new technology, there are important considerations that cannot be ignored. Current AI models are “trained” on vast amounts of publicly available data, which means they can reproduce the biases in that data and undermine diversity and inclusion efforts.

Over the coming years, recruiters will need to consistently delve into the complex relationship between technology and human cognition to ensure equitable decision-making. Acknowledging the potential for bias to seep into AI systems is the first step towards creating solutions that ensure fair and unbiased hiring practices. Through deliberate and cautious AI use, organisations can foster an environment where every candidate is evaluated on their merits, and build a workforce that champions diversity and thrives on inclusion.


 

About the author

Over the last 20 years, Matt Jones, Chief Product Officer at Cielo, a provider of global talent acquisition and management solutions, has worked with some of the world’s leading organisations, helping them shape their talent acquisition strategy and ultimately illuminate their target talent. In his time with Cielo, Matt has driven operational, technology, and product strategy globally.
