AI in Hiring: Bias, Discrimination Risks and Emerging Liability Considerations

Artificial intelligence (AI) is playing an increasing role in recruitment across Australia, with many organisations adopting AI‑enabled tools to assist in screening résumés, ranking candidates, conducting online assessments or analysing video interviews. These technologies can support efficiency and consistency, particularly where employers manage high application volumes.

However, the use of AI in hiring also introduces evolving liability considerations, especially where recruitment outcomes are questioned on the basis of potential bias or discriminatory impact. For insurance brokers, this topic is becoming increasingly relevant in discussions with clients seeking to understand how AI‑assisted decision‑making may intersect with their risk profile.

Efficiency and Emerging Exposure

AI tools typically rely on historical data, predictive models and algorithmic pattern recognition. While such systems are often positioned as objective, the decisions they generate can still reflect limitations or gaps in the data they were trained on.

Where inputs are narrow, incomplete or shaped by past hiring practices, AI‑driven processes may inadvertently disadvantage some groups of candidates. This can occur even when no protected attributes are intentionally considered.

From a legal standpoint, this distinction is significant. Under existing Australian employment and anti‑discrimination laws, liability may arise based on outcomes, regardless of intent. If an automated process results in decisions that have an unfair adverse impact on individuals with protected attributes, an employer may face allegations of unlawful discrimination, depending on the circumstances.

Accountability Remains with Employers

A common misconception is that liability transfers to third‑party vendors when employers use AI hiring tools. In practice, employers generally remain responsible for recruitment decisions, whether these decisions are made by people, technology or a combination of both.

Claims or challenges may arise where:

  • A candidate is excluded due to an automated process
  • The employer cannot adequately explain how a decision was reached
  • There is limited human oversight or review
  • A pattern of bias or adverse effect is identified after implementation

Because AI systems can act at speed and scale, unintentional issues may affect many candidates before they are detected.

Insurance Considerations for Clients

AI‑assisted recruitment may intersect with several insurance lines. Examples include:

  • Employment Practices Liability (EPL)

Allegations relating to discrimination, unfair treatment or failure to provide equal opportunity may arise from certain hiring processes, including those that involve automation.

  • Management Liability / Directors & Officers (D&O)

Boards and senior leaders may face scrutiny regarding governance, oversight and risk management practices. Adoption of AI without appropriate processes, controls or documentation may raise questions around diligence.

Reputational Considerations

Hiring practices sit at the intersection of culture, compliance and brand. Public allegations relating to biased or opaque recruitment processes may pose reputational challenges for some organisations.

While Australia does not, at the time of writing, have AI‑specific legislation, existing laws relating to privacy, consumer protection, the workplace and directors' duties (amongst others) will be used to regulate AI risk and opportunity. However, from December 2026, entities required to have a Privacy Policy will need to disclose in that policy how they use personal information for automated decision‑making (for example, screening candidates as part of the recruitment process).

As AI becomes more embedded in business operations, scrutiny of governance, transparency and oversight is likely to grow.

Discussion Points Brokers Can Explore with Clients

Brokers can assist clients by helping them consider how AI‑enabled hiring fits within their governance, operational and insurance frameworks. Useful areas of discussion include:

  • Whether AI is used in any part of recruitment or candidate assessment
  • How automated decisions are reviewed, challenged or overridden by humans
  • Whether employment practices insurance reflects current hiring processes
  • How governance, documentation and controls support defensible decision‑making

For many organisations, the adoption of AI has evolved gradually, sometimes without a full assessment of the associated liability implications. Proactive discussion can assist in aligning insurance programs with operational reality. Brokers may support clients in considering insurance implications; however, governance and legal frameworks remain the responsibility of the client.

A Risk Landscape That Continues to Develop

AI has the potential to contribute to more consistent and efficient hiring decisions when implemented responsibly, transparently and with appropriate oversight. As automation plays a greater role in people‑related decisions, expectations around accountability and fairness are also evolving.

For brokers and insurers, understanding how AI in hiring may give rise to employment‑related exposures is increasingly important. While technology may change the process, responsibility and potential liability remain with the organisation.

The establishment and implementation of AI governance and compliance frameworks by businesses is critical to managing and mitigating potential liabilities.

The extent to which the use of AI in hiring practices and processes creates potential insurance exposures will depend not only on the particular circumstances and on the terms, conditions and exclusions of the relevant policy, but equally on how insurers respond: they may revise underwriting processes, require specific endorsements or amend the scope of coverage in the future to reflect the emerging and actual exposures driven by AI hiring activities.

Important Notice

Berkley Insurance Company (limited company incorporated in Delaware, USA) ABN 53 126 559 706 t/as Berkley Insurance Australia is an APRA authorised general insurer. Information provided is general only, intended for brokers and has been prepared without taking into account any person’s particular objectives, financial situation or needs. Insurance cover is subject to terms, conditions, limits, and exclusions. Underwriting criteria applies. When making a decision to buy or continue to hold a product, you should review the relevant policy documents.