On August 1, 2024, the EU Artificial Intelligence Act ("AI Act") entered into force. The AI Act introduces a risk-based legal framework for AI systems that fall into four main buckets: (i) prohibited AI systems, (ii) high-risk AI systems, (iii) AI systems subject to specific transparency requirements, and (iv) general-purpose AI models. The AI Act applies to companies located in the EU, but it also has extraterritorial reach. It applies to so-called AI "providers," i.e., companies that develop AI systems (including general-purpose AI models) and place them on the EU market, or that put AI systems into service in the EU under their own name or trademark, irrespective of where the provider is located. The Act further applies wherever the output of an AI system is used in the EU, regardless of where the provider or deployer of the concerned AI system is located.
The AI Act's obligations will become applicable in phases. The provisions with respect to prohibited AI systems and AI literacy (see below) will become applicable on February 2, 2025. Specific obligations for general-purpose AI models will become applicable on August 2, 2025. Most other obligations under the AI Act, including the rules applicable to high-risk AI systems and systems subject to specific transparency requirements, will become applicable on August 2, 2026. The remaining provisions will become applicable on August 2, 2027.
How the AI Act Applies in Recruitment and Employment
The AI Act introduces obligations for high-risk AI systems that require preparation, implementation and ongoing oversight. Under Article 6(2) and Annex III of the AI Act, high-risk AI systems include:
- AI systems intended to be used for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyze and filter job applications, and to evaluate candidates; and
- AI systems intended to be used to make decisions affecting the terms of work-related relationships, the promotion or termination of work-related contractual relationships, to allocate tasks based on individual behavior or personal traits or characteristics, or to monitor and evaluate the performance and behavior of persons in such relationships.
Therefore, employers deploying AI systems for candidate screening, employee evaluation, and other employment-related decision-making in the EU must take appropriate steps to comply with the AI Act's requirements related to the use of high-risk AI systems. There are, of course, other scenarios where the use of AI in the workplace could trigger certain obligations, but these are the most obvious and the most relevant to employers based on current use cases.
Key Obligations Regarding High-Risk AI Systems
Employers who deploy high-risk AI systems in their HR activities must comply with the following key deployer obligations under the EU AI Act:
- using the AI system in accordance with the provider's instructions for use and implementing appropriate technical and organizational measures to that effect;
- assigning human oversight of the AI system to natural persons who have the necessary competence, training and authority;
- ensuring, to the extent the employer exercises control over the input data, that such data is relevant and sufficiently representative in view of the intended purpose of the AI system;
- monitoring the operation of the AI system and informing the provider and, where relevant, the competent authorities of serious incidents or risks;
- retaining the logs automatically generated by the AI system, to the extent such logs are under the employer's control, for at least six months;
- informing workers' representatives and affected workers before putting a high-risk AI system into service in the workplace; and
- informing affected individuals that a high-risk AI system is being used to make, or assist in making, decisions concerning them.
If a company is a "provider" of high-risk AI systems for use by deployers (employers) in their HR activities, it will be subject to more stringent "provider" obligations under the EU AI Act. These include:
- conducting conformity assessments;
- establishing and implementing a comprehensive risk management system throughout the lifecycle of the AI system;
- implementing data quality, data governance and data management requirements;
- maintaining comprehensive technical documentation of the AI system;
- providing deployers with adequate information about how to operate the high-risk AI system safely (instructions for use); and
- implementing post-market monitoring.
Non-compliance with the AI Act can result in complaints, investigations, fines (up to EUR 35 million or 7% of worldwide annual turnover for engaging in prohibited AI practices, and up to EUR 15 million or 3% of worldwide annual turnover for most other violations), litigation, operational restrictions and damage to a company's reputation. The GDPR continues to apply where AI systems process personal data.
Conclusion
Companies that use AI systems in the context of their human resources activities should take proactive steps to review their AI practices in light of the new requirements under the EU AI Act. Such a review is necessary to comply with the new law and also helps build trust with candidates and employees. Employers should implement the necessary compliance measures, such as drafting or reviewing AI governance policies and procedures and ensuring human oversight and transparency. The requirement to ensure AI literacy of staff members will take effect sooner than most other obligations and should be prioritized, where possible.