AI in Employment Decisions: Legal Risks and How to Address Them in Vendor Contracts


Artificial intelligence has become commonplace in recruiting, screening, interviewing, testing, promotion, and employee monitoring. Properly designed and governed, AI can streamline processes and improve consistency. However, in employment decision-making, AI can introduce legal and operational risks for the employer, even when the AI tools are built and operated by third-party vendors. Businesses should understand where and when liabilities may arise and use vendor contracts to mitigate and allocate those risks before deploying AI as part of employment decisions.

Legal Risks in Using AI for Employment Decisions

A core legal risk of using AI in employment decisions is that AI tools can encode or amplify historical bias. Disparate treatment claims can arise where systems use or infer protected characteristics such as age, race, religion, sex, disability, or genetic information, whether directly or through proxies like geography or graduation dates. Disparate impact claims can follow when "neutral" criteria disproportionately affect protected groups and cannot be justified as job-related and consistent with business necessity, or when less discriminatory alternatives exist. Employers cannot avoid liability by pointing to a vendor's algorithm as the cause of the legal violation.

Defensibility of employment decisions is complicated when the AI tool is a "black box," i.e., when its workings are not visible or understandable to users. Employers may be unable to articulate legitimate, nondiscriminatory reasons for adverse actions when the only explanation is a score, and they may struggle to prove job-relatedness and business necessity without transparency into the tool's features, training data, and model logic. If employers cannot examine or reproduce how an AI tool functions and why it generated a given outcome, they risk failing the validation and recordkeeping expectations of the Uniform Guidelines on Employee Selection Procedures (UGESP).

Additionally, disability discrimination risks increase when AI tools impose barriers for individuals with disabilities, for example, video or voice analysis that penalizes speech, hearing, or neurological differences. The ADA requires reasonable accommodations, accessible platforms (for example, conformance with WCAG standards), and alternative processes. Failing to provide accommodations, or to offer meaningful human review and individualized assessment, can result in employer liability.

Businesses should also be mindful of complying with U.S. state and local AI laws. Illinois's Artificial Intelligence Video Interview Act requires notice and consent for AI analysis of interview videos. Maryland requires an applicant's consent before an employer may use facial recognition technology during an interview. New York City's Local Law 144 imposes annual bias audits for automated employment decision tools, mandates public disclosure of audit summaries, and requires pre-use notices to applicants and employees. Colorado's SB 24-205, effective February 1, 2026, regulates "high-risk" AI systems, including many employment uses, and imposes reasonable care obligations, impact assessments, risk management, and disclosure of algorithmic discrimination risks on both developers and deployers.
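To make the bias-audit requirement concrete: Local Law 144 audits report an "impact ratio" for each demographic category, meaning the category's selection rate divided by the selection rate of the most-selected category, and the UGESP "four-fifths rule" similarly treats a selection rate below 80% of the highest group's rate as evidence of adverse impact. The Python sketch below illustrates that arithmetic; the group names and counts are invented for illustration only.

```python
# Impact-ratio arithmetic behind NYC Local Law 144 bias audits and the
# UGESP "four-fifths rule". All counts below are hypothetical.

# Hypothetical applicant pools: (selected, total applicants) per category
counts = {
    "group_a": (48, 120),  # 40.0% selection rate
    "group_b": (30, 110),  # ~27.3% selection rate
    "group_c": (12, 60),   # 20.0% selection rate
}

# Selection rate = selected / total applicants for each category
rates = {group: sel / total for group, (sel, total) in counts.items()}

# Impact ratio = category's rate / rate of the most-selected category
top_rate = max(rates.values())
for group, rate in sorted(rates.items()):
    ratio = rate / top_rate
    flag = "below 0.8: potential adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: rate={rate:.1%}, impact_ratio={ratio:.2f} ({flag})")
```

Note that an actual Local Law 144 audit must be performed by an independent auditor, cover the demographic categories specified in the implementing rules (including intersectional categories), and be publicly summarized; the sketch shows only the core calculation.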

While this article focuses on U.S. law, employers with global operations should note that the GDPR imposes strict transparency, data minimization, and purpose limitation rules. The EU AI Act classifies many employment tools as "high risk," triggering obligations around risk management, data quality, human oversight, documentation, and post-deployment monitoring of the tool in use.

Addressing Legal Risks Through AI Vendor Contracts

Contracting with AI vendors can help businesses reduce, allocate, and manage risk. Agreements should be tailored to how the employer intends to use the AI tool, the jurisdictions in which it does business, where its employees and applicants reside, and the business's risk tolerance. Vendor contracts should include mechanisms for validation of the AI tool, explainability, accessibility, privacy, and governance expectations.

Elements to consider in vendor contracts for AI tools used in employment decisions include:

  • Vendor obligations to provide:
    • Model documentation (for example, model cards and data sheets),
    • Validation studies,
    • Bias audits and recurring bias testing, and
    • Results of risk and impact assessments;
  • The right to conduct third-party audits, or to retain a company to perform them;
  • Pre-deployment validation and pilot use of the AI tool;
  • If bias arises, obligations to take immediate mitigation steps, suspend use where appropriate, and cooperate in remediation plans;
  • Reproducibility of AI-assisted decisions;
  • Limits on the collection and use of personal information to what is necessary;
  • A prohibition on secondary use of, or training on, personal information supplied by the employer, absent written authorization;
  • Security controls;
  • Immutable decision logs capturing inputs, model versions, and outputs for each decision (see the sketch following this list);
  • Indemnities covering regulatory investigations and violations of applicable law by the AI tool; and
  • Assistance with transferring data upon termination of the agreement.
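On the decision-log element above, the following is a minimal Python sketch of one way a tamper-evident, hash-chained log might capture inputs, model version, and output for each decision. The field names and chaining scheme are assumptions for illustration, not a mandated format; in practice, the contract would specify the required fields, retention period, and export format.

```python
# Sketch of a tamper-evident decision log. Field names and the hash-chaining
# scheme are illustrative assumptions, not a required or standard format.
import hashlib
import json
from datetime import datetime, timezone

def append_decision(log: list, *, inputs: dict, model_version: str, output: dict) -> dict:
    """Append one decision record, chained to the previous record's hash."""
    prev_hash = log[-1]["record_hash"] if log else "0" * 64
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs": inputs,                # data supplied to the tool
        "model_version": model_version,  # exact model/version that decided
        "output": output,                # score or recommendation returned
        "prev_hash": prev_hash,          # link to the preceding record
    }
    # Hash the canonical JSON form; editing any past record breaks the chain.
    payload = json.dumps(record, sort_keys=True).encode()
    record["record_hash"] = hashlib.sha256(payload).hexdigest()
    log.append(record)
    return record

# Usage with hypothetical values
log = []
append_decision(log, inputs={"resume_id": "R-1001"}, model_version="screener-2.3.1",
                output={"score": 0.72, "recommendation": "advance"})
```

Hash chaining makes after-the-fact alteration detectable, which supports both UGESP-style recordkeeping and the ability to explain a specific decision if it is later challenged.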

These are just some of the elements that businesses should consider including in contracts with vendors whose AI tools are used in employment decisions. The key priorities are ensuring that AI-assisted employment decisions are fair and unbiased, that they can be explained if challenged, and that the necessary records are retained.
