US: New York City Regulates Artificial Intelligence in Employment Decisions
Employer Action Code: Monitor

The growing use of artificial intelligence (AI) in employment-related decisions has prompted the New York City government to regulate its use by employers, particularly due to concerns that the programming or operation of AI tools could result in unequal treatment of job candidates. New York City Local Law 144 (LL 144) will come into effect on January 1, 2023, and will require employers using automated employment decision tools (AEDTs) for hiring and promotion decisions to meet a bias audit requirement and to provide notices and information about the results of the audit and the use of the AEDT. Proposed rules were published in September, and a hearing was held on November 4, 2022. It is unclear whether the final regulations will be published before the end of 2022 or whether the effective date will be delayed. Other jurisdictions, in the United States and around the world, are also at various stages of regulating the use of AI in employment.

Key details

New York City’s LL 144 defines an AEDT as a “computational process, derived from machine learning, statistical modeling, data analytics, or artificial intelligence, that emits simplified output, including a score, classification or recommendation, which is used to assist or replace discretionary decision-making in making employment decisions that impact individuals”, but excludes tools that have no impact on the decision-making process (such as spam filters and anti-virus software). LL 144 prohibits the use of an AEDT unless:

  1. A bias audit is conducted within the year before its use
  2. The results are made public
  3. Notice is provided to job applicants regarding the use of AEDTs
  4. Applicants or employees are permitted to request an alternative assessment process as an accommodation

The proposed rules address several issues regarding compliance with LL 144, including clarifications regarding the definition of an AEDT, the purpose of the bias audit, the data that must be made public, and compliance with the notification and disclosure requirements. However, several questions remain unanswered, including (1) which entities are authorized to perform the bias audit, (2) whether the audit must be repeated annually, and (3) the definition of an alternative assessment process and the types of options that should be made available.

Several US states (e.g., Illinois and Maryland) and some cities have enacted or are considering legislation that could impact the use of AI in hiring and other employment decisions. In the European Union, the European Commission is drafting an artificial intelligence law to regulate the use of AI in general. The law would divide the use of AI into four broad categories of risk (to citizens’ rights):

  1. Unacceptable risks, such as the use of AI in social rating by governments.
  2. High-risk uses, such as education and vocational training, employment, worker management, and remote biometric identification systems.
  3. Limited-risk applications with specific transparency obligations (e.g., an obligation to inform users when they are interacting with AI, such as chatbots).
  4. Minimal-risk AI, such as spam filters. In the commission’s view, the vast majority of AI systems currently in use fall into this category.

The US federal government has also focused on the use of AI in employment decisions. The Equal Employment Opportunity Commission (EEOC) released guidance in May 2022 outlining how certain employment-related uses of AI could potentially violate the Americans with Disabilities Act (ADA). In October, the Biden administration released a blueprint for an AI bill of rights intended to guide the design, use, and deployment of automated systems. Brazil, Canada, and the UK are working on similar laws and frameworks (as are other governments).

Consequences for the employer

The application of AI in employment is already well ahead of the development of regulatory regimes governing its use. The EEOC has estimated that more than 80% of U.S. employers use some form of AI in their work and in employment decision-making. Employers should monitor evolving legal restrictions and requirements regarding the use of AI in employment-related decisions. For employers with employees in New York City, LL 144 is currently expected to take effect in 2023; it could serve as an early test case for how regulation affects the use of AI in employment-related decision-making.

