AI in HR: What We Must Get Right Before It Gets Wrong


The risks of AI in hiring became evident years ago, when Amazon discovered that its recruitment algorithm consistently downgraded applicants who mentioned women’s colleges or female-led groups on their résumés. Trained on ten years of historical data that reflected past hiring patterns, the system had “learned” that successful candidates were mostly men. (To be fair, Amazon’s workforce today is close to balanced – 45.8% female and 54.2% male.)

This was just one example of what can go wrong when we rely on poor-quality or biased data – without human oversight.

Now the landscape is changing.

What was once an ethical debate has now become a matter of regulation. The European Union has introduced the Artificial Intelligence Act (Regulation (EU) 2024/1689) – the world’s first comprehensive, legally binding framework for AI systems.

This regulation is not just about controlling technology. It defines the legal and ethical boundaries for how AI can be integrated into the European economy and society – including in the workplace.


What the EU AI Act Means for HR Practice

For HR professionals, the AI Act changes the way technology can be used in workforce decisions. Whenever AI is involved in managing people – from hiring to performance reviews or workforce monitoring – it is likely to fall within the high-risk category under the regulation. Being classified as high-risk comes with specific legal obligations for employers.

The Act also has extra-territorial reach: companies based outside the EU are in scope when their AI systems make or influence decisions about workers in the EU. For example, a U.S.-based employer using a global performance tool that includes EU employees falls within the Act’s scope.

Worth noting: non-compliance in HR-related AI use can be costly.

  • Up to €15 million or 3% of global annual turnover – mainly for breaches related to data quality, transparency, or human oversight
  • Up to €35 million or 7% of the company’s total global annual turnover (whichever is higher) – for the most serious infringements, such as using prohibited AI systems (e.g. emotion recognition in the workplace, manipulative or discriminatory AI)


Which HR-Related AI Systems Are Considered High-Risk?

According to Annex III of the AI Act, the following applications are classified as high-risk:

  • Recruitment and selection: AI tools used to assess candidates’ suitability for a role (e.g., CV screening, video interview analysis, ranking algorithms)
  • Performance management and decision support: Systems that evaluate employee performance or make recommendations on promotion, termination, or compensation
  • Education and training: AI-based testing or assessment systems used to evaluate learning outcomes, training performance, or certification results – including those applied in corporate learning environments
  • Task allocation based on personal traits or behavior: AI that assigns work based on behavioral patterns, productivity data, or psychometric profiling

These systems fall under the highest level of regulatory scrutiny – and by August 2026, full compliance will be mandatory.


What HR Must Get Right to Stay Compliant

From February 2025, the use of AI systems designed to detect or interpret emotions in the workplace is explicitly prohibited.

By August 2026, transparency becomes essential: employees and candidates must be informed in advance whenever AI tools are used to make or support decisions about them.

Because most AI models learn from historical data, organisations will also need to ensure that the datasets used to train or operate these systems are accurate, representative, and free from bias. Otherwise, the risk of unintentional discrimination becomes significant.
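One common screening heuristic for the kind of bias described above is to compare selection rates across groups in historical data. The sketch below is illustrative only – the "four-fifths rule" threshold of 0.8 comes from U.S. employment-testing practice, not from the AI Act, and the data format is an assumption:

```python
# Illustrative only: a minimal adverse-impact check on historical
# hiring data. The 0.8 ("four-fifths rule") threshold is a common
# screening heuristic, not a requirement of the AI Act itself.

from collections import defaultdict

def selection_rates(records):
    """records: iterable of (group, hired) pairs, hired being True/False."""
    totals, hires = defaultdict(int), defaultdict(int)
    for group, hired in records:
        totals[group] += 1
        if hired:
            hires[group] += 1
    return {g: hires[g] / totals[g] for g in totals}

def adverse_impact_ratio(records):
    """Ratio of the lowest to the highest group selection rate.
    Values below ~0.8 are a common flag for possible disparate impact."""
    rates = selection_rates(records)
    return min(rates.values()) / max(rates.values())

# Example: a dataset where group B is selected half as often as group A
data = [("A", True)] * 40 + [("A", False)] * 60 \
     + [("B", True)] * 20 + [("B", False)] * 80
print(adverse_impact_ratio(data))  # 0.2 / 0.4 = 0.5 → below 0.8, worth reviewing
```

A check like this is a starting point, not a compliance guarantee: a dataset can pass a rate comparison and still encode bias through proxy variables, as the Amazon example shows.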

High-risk AI systems must remain under human control. HR professionals and managers should be able to monitor, override, or stop the system when necessary – and fully understand its limitations. Compliance also requires a strong documentation trail: every AI system used in HR must be supported by clear records showing how it functions, how data is handled, and how risks are managed.

To protect both employees and organizations, incident reporting procedures should be in place to capture and respond to AI errors, data breaches, or potential bias. The Act also introduces a new expectation around AI literacy – staff who use or oversee AI must receive appropriate training to understand how these systems work and the responsibilities that come with them.

Finally, it’s important to see the AI Act not in isolation but as part of a broader legal landscape. It builds on existing frameworks such as GDPR, anti-discrimination law, and collective labor regulations, reinforcing a central principle: technology in the workplace must always serve fairness, transparency, and accountability.


Five Steps to Build HR’s AI Readiness

The Act may take effect in stages, but preparation cannot wait. Here are five steps HR can take now to prepare:

1. Identify HR processes involving AI. Map where AI supports hiring, evaluation, learning, task allocation, or monitoring – whether explicitly (e.g., in recruitment software) or indirectly (e.g., analytics, learning tools, workflow automation).

2. Clarify your organization’s role and responsibilities. Most HR functions will fall under the category of deployers. However, an organization can become a provider if it modifies an existing AI tool, changes its purpose, or rebrands it under its own name. In such cases, the company assumes full provider-level obligations, including technical documentation, conformity assessment, and AI system registration in the EU database.

3. Build a structured compliance roadmap. Create an internal action plan that covers transparency, oversight, and governance. Include risk assessment, data quality assurance, employee communication, and incident reporting.

4. Strengthen AI literacy and leadership awareness. Train HR and leadership teams to understand AI’s logic, limitations, and responsibilities.

5. Review vendor contracts and external partnerships. Ensure that your suppliers comply with the AI Act, cooperate during audits, and commit to continuous improvement.
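Steps 1–3 above amount to building an internal inventory of AI use. A minimal sketch of such a register is shown below – the field names and categories are illustrative assumptions, not terms mandated by the AI Act:

```python
# A minimal sketch of an internal AI-use inventory, supporting the
# mapping (step 1), role clarification (step 2), and roadmap (step 3)
# described above. Field names are illustrative, not mandated terms.

from dataclasses import dataclass

@dataclass
class AIUseRecord:
    system: str            # tool name, e.g. a CV-screening product
    hr_process: str        # hiring, evaluation, learning, task allocation, ...
    risk_level: str        # "high-risk" if it falls under Annex III
    org_role: str          # "deployer" or "provider"
    human_overseer: str    # who can monitor, override, or stop the system

inventory = [
    AIUseRecord("resume screener", "recruitment", "high-risk",
                "deployer", "Head of Talent Acquisition"),
    AIUseRecord("shift scheduler", "task allocation", "high-risk",
                "deployer", "Operations Manager"),
]

# Flag every high-risk entry for the compliance roadmap
flagged = [r.system for r in inventory if r.risk_level == "high-risk"]
print(flagged)  # ['resume screener', 'shift scheduler']
```

Even a simple register like this makes the later obligations – transparency notices, documentation, incident reporting – much easier to assign to a named owner.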


AI Doesn’t Make Decisions – People Do

As AI becomes part of everyday HR practice, the question is no longer whether we will use it, but how we should. The EU AI Act sets the boundaries, but it’s HR that will define the standards.