Author: Gautam Mainkar | Posted on May 17, 2019 | 4 min read
The HR function in enterprises is in the middle of a revolution, with AI playing an increasingly important role in facilitating processes and decision-making. Many organizations now leverage AI to improve the efficiency of their recruitment, performance management and succession planning processes. AI is particularly well suited to this role because it can automate low-level decision-making (filtering CVs against a job posting, compiling language-based feedback into numeric ratings, discovering similarities between different profiles), allowing HR professionals to better utilize their time by applying their expertise to a smaller problem set.
However, organizations that adopt AI blindly, without being aware of its limitations, risk deploying a system that delivers poor or incorrect results. The impact can range from simply missing out on quality candidates to more serious issues, such as violating regulatory requirements, which can lead to legal trouble. Perhaps the most famous recent example is Amazon's AI-powered recruitment system, which was found to discriminate against women candidates. This example perfectly illustrates some of the issues to which AI-powered systems are susceptible.
The difference between AI and traditional programming paradigms is that AI systems are expected to generate results based on rules that they themselves discover from data. Because of this, AI can be used to generate results in scenarios where humans find it difficult to create a set of rules. For example, it would be extremely difficult for a programmer to write a set of rules to find the perfect candidate for a particular job — the number of different factors and variables makes this far too complex. On the other hand, AI systems can be designed to analyze historical hiring data and generate their own rules for identifying whether a particular candidate is fit for a job. However, because the AI is simply creating rules that most closely match the results in historical data, it has some limitations:
- It does not account for changes in context or user expectations; it can only extrapolate historical trends
- It does not account for regulatory compliance/best practice considerations
- It does not map its output to real-world reasoning, which makes it difficult to understand why a particular decision was taken
Organizations that set up AI systems need a plan to address these limitations, and to ensure that their AI systems operate in accordance with the organization’s strategy. Some basic techniques can help organizations control AI system operations:
- Hypothesis testing: Hypothesis testing is a process by which data scientists can investigate how the AI generates specific results. This can help identify scenarios in which the AI is non-compliant with organizational strategy. For example, hypothesis testing can be used to answer questions like ‘Does the algorithm prefer male candidates to female candidates, controlling for all other profile attributes?’
- Feature engineering: Feature engineering shapes the dataset on which the algorithm operates. It can be used to give additional weight to attributes that are important to the organization’s strategy and to attenuate the impact of attributes that are not. For example, a recruitment dataset can be tweaked to increase the importance of features like education and professional experience, while attenuating or completely eliminating the impact of features like age, gender and race on decision-making.
- Bias analysis: Bias analysis is a statistical analysis method that can help data scientists explore the results generated by an algorithm. This is especially important in the HR domain because bias in the algorithm can exist even in the absence of non-compliant attributes. For example, AI recruitment systems have been found to discriminate against women even when they are not given candidate gender as a data point. This is because men and women tend to speak differently about their achievements — men tend to emphasize personal achievements while women tend to emphasize collaboration and teamwork. Bias analysis can help identify and control such instances of ‘deep’ bias.
- Manual oversight: At the end of the day, AI cannot replace the judgement and expertise of an experienced HR professional. This is why it is important to ensure that the results of AI systems are regularly reviewed by humans, and that the decisions taken by AI are not blindly followed by people in the enterprise.

As AI begins to play an increasingly important role in the organization, it is important to remember that AI systems are not simple ‘plug-and-play’ systems and must be carefully integrated into the organizational tech stack. The use of AI offers organizations great benefits in efficiency, but also exposes them to significant risks, and organizations must have well-defined setup, tuning and review processes to ensure that their AI infrastructure adheres to their AI strategy.
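To make the hypothesis-testing idea above concrete, here is a minimal sketch in Python. The `score_candidate` function is a hypothetical stand-in for a trained screening model (with a gender bias deliberately injected so the test has something to detect, since no real model is available here); the counterfactual test scores each profile twice, flipping only gender, to answer the question ‘does the model prefer male candidates, all else held equal?’:

```python
import random

# Hypothetical stand-in for the organization's trained screening model.
# In practice this would be the real model's scoring function.
def score_candidate(profile):
    base = 0.5 * profile["experience_years"] + 2.0 * profile["education_level"]
    # Bias deliberately injected so the test below has something to detect.
    return base + (1.0 if profile["gender"] == "M" else 0.0)

def counterfactual_gender_gap(profiles, score_fn):
    """Score each profile twice, flipping only gender, and return the
    mean score difference (male minus female). Under the null hypothesis
    of a gender-blind model, the gap should be ~0 for every profile."""
    gaps = []
    for p in profiles:
        as_male = dict(p, gender="M")
        as_female = dict(p, gender="F")
        gaps.append(score_fn(as_male) - score_fn(as_female))
    return sum(gaps) / len(gaps)

random.seed(0)
profiles = [
    {"experience_years": random.randint(0, 20),
     "education_level": random.randint(1, 4),
     "gender": random.choice("MF")}
    for _ in range(100)
]

gap = counterfactual_gender_gap(profiles, score_candidate)
print(f"Mean counterfactual gender gap: {gap:.2f}")  # nonzero gap flags a gender preference
```

A consistently nonzero gap is evidence against the ‘gender-blind’ hypothesis and a signal to investigate the model further.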
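The feature-engineering technique above can be sketched as a simple preprocessing step. The weights in `FEATURE_WEIGHTS` below are illustrative assumptions, not values from any real system: strategic attributes are boosted, and protected attributes are weighted at zero so the model never sees them:

```python
# Hypothetical weights encoding organizational strategy: boost education
# and experience, exclude protected attributes entirely.
FEATURE_WEIGHTS = {
    "education_level": 1.5,
    "experience_years": 1.2,
    "age": 0.0,
    "gender": 0.0,
}

def engineer_features(raw):
    """Return a transformed feature dict: numeric features are scaled by
    their strategic weight, and any feature weighted at zero is dropped
    so it cannot influence the model at all."""
    out = {}
    for name, value in raw.items():
        weight = FEATURE_WEIGHTS.get(name, 1.0)
        if weight == 0.0:
            continue  # protected attribute: excluded from the dataset
        out[name] = value * weight if isinstance(value, (int, float)) else value
    return out

candidate = {"education_level": 3, "experience_years": 10, "age": 42, "gender": "F"}
print(engineer_features(candidate))
# age and gender are gone; the remaining features are re-weighted
```

Note that dropping protected attributes is necessary but not sufficient — proxy features can still leak them, which is why bias analysis on outcomes is also needed.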
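For bias analysis, one widely used outcome-level check is the disparate impact ratio, the basis of the ‘four-fifths’ guideline used in US employment-selection compliance. Because it looks only at decisions per group, it can surface the ‘deep’ bias described above even when gender is never a model input. A minimal sketch, with made-up decision data:

```python
from collections import defaultdict

def disparate_impact_ratio(decisions):
    """decisions: iterable of (group, selected_bool) pairs. Returns the
    ratio of the lowest group selection rate to the highest, plus the
    per-group rates. Under the common 'four-fifths' guideline, a ratio
    below 0.8 flags potential adverse impact for deeper investigation."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    rates = {g: selected[g] / totals[g] for g in totals}
    return min(rates.values()) / max(rates.values()), rates

# Made-up screening outcomes: 100 candidates per group.
decisions = ([("M", True)] * 60 + [("M", False)] * 40 +
             [("F", True)] * 30 + [("F", False)] * 70)
ratio, rates = disparate_impact_ratio(decisions)
print(rates)           # {'M': 0.6, 'F': 0.3}
print(f"{ratio:.2f}")  # 0.50 -> well below 0.8, flag for review
```

A failing ratio does not by itself prove discrimination, but it tells the data science team exactly where to dig with the hypothesis-testing and feature-engineering techniques above.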