Washington State
House of Representatives
Office of Program Research
BILL ANALYSIS
Consumer Protection & Business Committee
HB 1951
Brief Description: Promoting ethical artificial intelligence by protecting against algorithmic discrimination.
Sponsors: Representatives Shavers, Ryu, Ramel, Gregerson, Macri, Duerr and Pollet.
Brief Summary of Bill
  • Requires developers and deployers of automated decision tools to complete and document annual impact assessments of those tools beginning January 1, 2025.
  • Requires developers to issue statements and documentation about the intended uses of automated decision tools.
  • Permits the Attorney General to bring an action on behalf of the state under the Consumer Protection Act.
  • Prohibits a deployer from using an automated decision tool that results in algorithmic discrimination, and makes this a violation under the Washington Law Against Discrimination.
Hearing Date: 1/19/24
Staff: Megan Mulvihill (786-7304).
Background:

Consumer Protection Act.

The Consumer Protection Act (CPA) declares a variety of business practices unlawful.  These unlawful practices include unfair methods of competition and unfair or deceptive acts or practices in the conduct of any trade or commerce; contracts, combinations, and conspiracies in restraint of trade or commerce; and monopolies.  The Attorney General may bring an action in the name of the state, or on behalf of persons residing in the state, against any person to enjoin violations of the CPA and to obtain restitution for persons injured by a violation.  The prevailing party may, at the discretion of the court, recover costs and attorney's fees.  The Attorney General may also seek civil penalties, up to statutorily authorized maximums, against any person who violates the CPA.

 

Washington Law Against Discrimination.

The Washington Law Against Discrimination (WLAD) prohibits discrimination in the contexts of credit, public accommodation, real estate, and employment, among others.  The law protects persons from discrimination based on race, creed, color, national origin, citizenship or immigration status, families with children status, sex, marital status, sexual orientation, age, or honorably discharged veteran or military status.  The law also protects persons from discrimination based on the presence of any sensory, mental, or physical disability or the use of a trained dog guide or service animal by a person with a disability.  The Washington Human Rights Commission enforces the WLAD.  The WLAD establishes that a person injured by any act in violation of the WLAD has a civil action in a court of competent jurisdiction.  With some exceptions, any unfair practice prohibited by the WLAD that is committed in the course of trade or commerce is a per se violation of the CPA.

 

Summary of Bill:

Definitions.

The following terms are defined: algorithmic discrimination, artificial intelligence, automated decision tool, consequential decision, deployer, developer, ethical artificial intelligence, impact assessment, sex, and significant update.

 

Impact Assessments.

By January 1, 2025, and annually thereafter, deployers with 50 or more employees that use automated decision tools, as well as developers that design, code, or produce automated decision tools, must complete and document an impact assessment for each automated decision tool the deployer uses or the developer designs, codes, or produces.  Both deployers and developers must include the following in their impact assessments:

  • a statement of the automated decision tool's purpose and intended benefits, uses, and deployment contexts;
  • a description of the automated decision tool's outputs and how they are used to make, or be a controlling factor in making, a consequential decision;
  • a summary of the types of data collected from natural persons and processed by the automated decision tool when it is used to make, or be a controlling factor in making, a consequential decision; and
  • a description of how the automated decision tool will be used by a natural person, or monitored when it is used, to make, or be a controlling factor in making, a consequential decision.

 

A deployer's impact assessment must also include:

  • a statement of the extent to which the deployer's use of the automated decision tool is consistent with or varies from the developer's statement regarding its intended use;
  • an assessment of the reasonably foreseeable risks of algorithmic discrimination arising from the use of the automated decision tool known to the deployer at the time of the impact assessment;
  • a description of the safeguards implemented, or that will be implemented, by the deployer to align use of the automated decision tool with principles of ethical artificial intelligence and to address any reasonably foreseeable risks of algorithmic discrimination arising from the use of the automated decision tool; and
  • a description of how the automated decision tool has been or will be evaluated for validity or relevance. 

 

A developer's impact assessment must also include:

  • an assessment of the reasonably foreseeable risks of algorithmic discrimination arising from the intended use or foreseeable misuse of the automated decision tool; and
  • a description of the measures taken by the developer to incorporate principles of ethical artificial intelligence and to mitigate the risks of algorithmic discrimination known to the developer that arise from the use of the automated decision tool.

 

If there is a significant update to an automated decision tool, the deployer or developer must perform an additional impact assessment as soon as feasible.  Upon request, a deployer or developer must provide to the Attorney General's Office (AGO) any impact assessment it has performed.  Impact assessments are confidential and exempt from disclosure under the Public Records Act.

 

Developer Statements and Policies.

A developer must provide a deployer with a statement regarding the intended uses of the automated decision tool and documentation regarding:

  • the known limitations of the automated decision tool, including any reasonably foreseeable risks of algorithmic discrimination arising from its intended use;
  • a description of the types of data used to program or train the automated decision tool; and
  • a description of how the automated decision tool was evaluated for validity and the ability to be explained before sale or licensing.

 

A developer must make publicly and readily available a clear policy that provides a summary of:  (1) the types of automated decision tools currently made available to others by the developer; and (2) how the developer manages the reasonably foreseeable risks of algorithmic discrimination that may arise from the use of the automated decision tools it currently makes available to others. 

 

Consumer Protection Act Violations.

Violations are considered unfair or deceptive acts in trade or commerce for the purposes of the Consumer Protection Act.  Only the AGO may bring an action in the name of the state to enforce any violations.  Before commencing an action under the CPA, the AGO must provide 45 days' written notice to the deployer or developer of the alleged violation.  The deployer or developer has an opportunity to cure the alleged violation within 45 days of receiving the notice.

 

Violations of the Washington Law Against Discrimination.

A deployer is prohibited from using an automated decision tool that results in algorithmic discrimination.  A violation is an unfair practice under the WLAD. 

Appropriation: None.
Fiscal Note: Available.
Effective Date: The bill takes effect 90 days after adjournment of the session in which the bill is passed.