5 Things to Think About When Using AI in Employment and Recruiting

by: Dave Radmore

Artificial Intelligence systems can be extremely powerful in streamlining an employer’s hiring and recruitment practices, and many HR service providers now offer a range of new AI products to their employer clients. But although the flood of AI laws and regulations in the United States over the past year has largely focused on consumer-side uses of AI, processing employee and candidate data with AI has not been immune to regulatory developments. In this post, we address five things for companies to consider to ensure that any AI system they use in their employment and hiring practices is used in a compliant way, keeping in mind that these laws are constantly evolving.

1.     Use AI as a tool, not a replacement for human decision-making.

One of the biggest factors in the level of risk that an AI system presents to a company is the extent to which human decisions are delegated to it. For example, a company’s obligations under New York City’s Automated Employment Decision Tool (“AEDT”) ordinance apply when an AEDT (which includes not only AI systems but also other forms of algorithmic tools) is used to substantially assist or replace human discretion in hiring or promotion decisions. This means that, generally, if a company is using an AEDT for hiring or promotion, for example to screen resumes early in the recruiting process, it should ensure that the final decision is made not by the computer but by a human qualified to make it; further, the human decision maker should weigh a range of factors and not rely solely or largely on the recommendation of the AEDT.

Similar principles appear in the Colorado AI Act and the draft AI regulations currently being deliberated by California’s CPPA, both of which incorporate, in differing ways, the concept of allowing a person adversely impacted by an AI-made decision to appeal that decision to a human arbiter. Consequently, to reduce the regulatory risks posed by processing workers’ and candidates’ personal data, companies are advised to keep human discretion central to every step of an AI-assisted decision flow.
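To make this concrete, below is a minimal, hypothetical sketch in Python of a screening workflow that keeps a human as the final arbiter. All names (Candidate, ScreeningRecord, aedt_screen, and so on) are illustrative stand-ins, not any vendor’s actual API; the point is only that the AEDT’s score enters as one documented factor among several, and no decision can be recorded without a human reviewer supplying additional factors.

```python
from dataclasses import dataclass, field


@dataclass
class Candidate:
    name: str
    resume_text: str


@dataclass
class ScreeningRecord:
    candidate: Candidate
    aedt_score: float                      # the tool's output, advisory only
    human_decision: str | None = None
    factors_considered: list[str] = field(default_factory=list)


def aedt_screen(candidate: Candidate) -> float:
    """Stand-in for a call to a vendor AEDT; returns an advisory score in [0, 1]."""
    return 0.72  # placeholder value


def record_human_decision(record: ScreeningRecord, decision: str, factors: list[str]) -> None:
    # Refuse to record a decision that rests solely on the AEDT's recommendation:
    # the human reviewer must document additional factors they weighed.
    if not [f for f in factors if f != "aedt_score"]:
        raise ValueError("Decision cannot rest on the AEDT score alone.")
    record.human_decision = decision
    record.factors_considered = factors


candidate = Candidate("Jane Doe", "resume text")
record = ScreeningRecord(candidate, aedt_score=aedt_screen(candidate))
record_human_decision(record, "advance", ["interview notes", "references", "aedt_score"])
```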

2.     Will AI be training on your data?

Beyond the regular processing activities of the AI system, companies need to think carefully about what data the AI system is being trained on or enhanced with. Additionally, when companies contract with an AI developer to obtain or use a third-party AI system, it is important to know whether the developer intends to train or enhance the AI system on any data that the company inputs into it.

Once a piece of data is included in the body of data that the AI system has trained on, it can be extremely difficult to unwind that piece of data from the underlying training model. Given the massive scale of data ingested to train large language models and the way data is stored within those models, it may take significant effort just to identify the responsive data to be removed; and even if the responsive data can be located, removing it has the potential to change the way the AI system processes data, potentially requiring retraining and, of course, revised output testing to ensure the system remains accurate and free from bias. As such, if the data to be deleted is personal data under an applicable privacy law, a company can be placed in the unenviable position of either bearing significant costs to delete the personal data and retrain the AI system or facing significant penalties for failing to properly comply with a data subject request.

In addition, when the AI system is provided by a third-party developer, as is likely to be the case for most employers, using personal data to train the AI system could constitute a “sale” or “share” under certain privacy laws. One way to avoid this may be to contractually ensure that the training data is essentially ring-fenced and usable only in connection with the company’s own instance of the third-party AI system; but, as discussed above, implementing such ring-fencing may be significantly costly after a period in which the company’s data was used without it. It is therefore critical that companies carefully review their agreements with third-party AI developers, including to make sure they fully understand how the developer will use their workers’ personal data in connection with the AI system and what permissions to that personal data are being granted to the developer, amongst additional considerations imposed by applicable AI laws.

3.     Are you prepared to handle opt-outs?

Except where sensitive data is in scope, no state or federal law currently requires companies using an AI system to process their workers’ personal data to offer those workers the right to opt out of such processing. However, the draft CCPA regulations currently being deliberated would include opt-out rights for employees, and the Biden administration’s Blueprint for an AI Bill of Rights calls for companies to afford opt-out rights to individuals whose personal data is processed by AI. Clearly, the legislative trend is towards opt-out rights for workers whose personal data is processed by AI. Therefore, a company that uses AI to process the personal data of its employees, other workers, or hiring candidates should start considering now how to implement opt-out rights so that it is ready when the regulations go into effect; this is especially the case for any processing that may impact hiring, promotion, termination, or even allocation of work under the draft California regulations.

Companies will need to carefully consider everywhere the AI system may encounter the personal data of the person opting out, and how to ensure that the opt-out is properly applied at every encounter. For example, a company might use the author information in document metadata to exclude from AI processing any documents authored by an opted-out employee; but if that employee’s personal data appears in the body of documents authored by others, the opt-out would not have been effectively applied. The good news is that there is still time to figure out how to practically implement opt-outs, as the California regulations are not even in the formal draft stage at the time of writing. But because of the complexity of the issue, the time to start thinking about these issues is now, not on the eve of the regulations going live.
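To illustrate the gap described above, here is a minimal, hypothetical Python sketch of an opt-out filter applied before documents reach an AI pipeline. The Document class, the OPTED_OUT set, and the substring check are all illustrative assumptions; a real implementation would likely need proper identity resolution or entity recognition rather than simple string matching.

```python
from dataclasses import dataclass


@dataclass
class Document:
    author: str
    body: str


OPTED_OUT = {"jane.doe"}  # illustrative identifiers for opted-out workers


def eligible_for_ai_processing(doc: Document) -> bool:
    # Check 1: exclude documents authored by an opted-out worker
    # (the metadata-based filter described above).
    if doc.author in OPTED_OUT:
        return False
    # Check 2: exclude documents that mention an opted-out worker in the body;
    # without this, the opt-out is not effectively applied to documents
    # authored by others.
    if any(person in doc.body for person in OPTED_OUT):
        return False
    return True


docs = [
    Document(author="john.smith", body="Meeting notes mentioning jane.doe's review."),
    Document(author="jane.doe", body="Quarterly report."),
    Document(author="john.smith", body="Budget summary."),
]
processable = [d for d in docs if eligible_for_ai_processing(d)]  # only the third document
```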

4.     Have you made the necessary disclosures that AI is being used?

A number of state and local laws passed in the past year require some form of notice to workers regarding the use of AI in the workplace. Colorado, Utah, New York City and Illinois have all passed AI laws that mandate, in varying ways, that workers be notified when an employer is using AI to process their personal data. And as mentioned, the regulations under deliberation by the CPPA mean that California will soon follow. Companies with workers located in these jurisdictions need to ensure that they have provided any necessary notices and disclosures to their workforce before processing those workers’ personal data through AI systems.

5.     Are you testing for bias in the AI processing and outputs?

Bias and discriminatory impact are currently the hottest areas of AI regulation when it comes to the use of AI in employment. Both federal and state governmental bodies have identified discriminatory AI processing as one of the key harms that can affect employees and candidates in the workplace. For example, New York City’s law requires annual bias audits when AI is used to make promotion or hiring decisions. Illinois recently passed an amendment to its Human Rights Act confirming that an employer violates that law when its use of AI in hiring, firing, discipline, and training results in a discriminatory impact on protected classes. And the federal EEOC has issued guidance stating that employers using AI to make hiring, promotion, and termination decisions are subject to federal Title VII anti-discrimination laws.

This means that an employer seeking to use AI in the employment context, whether to assist in the hiring process or to help make decisions about the workforce, must be careful to put controls in place to avoid bias in the AI’s processing and outputs. Many laws have made clear that it is not enough to rely on the AI developer’s representations that the system is free from bias; employers that contract for a third-party AI system would be well advised to conduct their own bias monitoring, independent of anything conducted by the developer.
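As a rough illustration of what such monitoring can involve, the Python sketch below computes per-category selection rates and impact ratios of the kind used in bias audits under New York City’s law: each category’s selection rate divided by the highest category’s rate. The data is invented, and the 0.8 flag threshold reflects the EEOC’s informal “four-fifths rule” rather than any requirement of the NYC ordinance itself.

```python
from collections import Counter

# (category, was_selected) pairs -- invented outcomes from an AI screening tool
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

totals = Counter(cat for cat, _ in outcomes)
selected = Counter(cat for cat, ok in outcomes if ok)
rates = {cat: selected[cat] / totals[cat] for cat in totals}

# Impact ratio: each category's selection rate relative to the highest rate.
top_rate = max(rates.values())
impact_ratios = {cat: rate / top_rate for cat, rate in rates.items()}

for cat, ratio in impact_ratios.items():
    # 0.8 threshold is the EEOC's informal four-fifths rule, used here only
    # as an illustrative screening flag.
    flag = "review" if ratio < 0.8 else "ok"
    print(f"{cat}: selection rate {rates[cat]:.2f}, impact ratio {ratio:.2f} ({flag})")
```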

Originally published by InfoLawGroup LLP.