S-0402.1

SENATE BILL 5356

State of Washington
68th Legislature
2023 Regular Session
By Senators Hasegawa, Hunt, Keiser, Lovelett, Saldaña, Stanford, Valdez, and J. Wilson
Read first time 01/13/23. Referred to Committee on Environment, Energy & Technology.
AN ACT Relating to establishing guidelines for government procurement and use of automated decision systems in order to protect consumers, improve transparency, and create more market predictability; adding a new section to chapter 49.60 RCW; adding a new chapter to Title 43 RCW; and declaring an emergency.
BE IT ENACTED BY THE LEGISLATURE OF THE STATE OF WASHINGTON:
NEW SECTION.  Sec. 1. The legislature finds that:
(1) Washington is a technology leader on a national and global level and holds a distinctive position in creating frameworks around technology that enhance innovation while protecting consumers and promoting fairness, accountability, and transparency for all Washingtonians.
(2) Automated decision systems are rapidly being adopted to make or assist in core decisions in a variety of government and business functions, including criminal justice, health care, education, employment, public benefits, insurance, and commerce.
(3) These automated decision systems are currently unregulated, may be deployed without public notice, and vendors selling the systems may require restrictive contractual provisions that undermine government transparency and accountability.
(4) The average Washington resident is unlikely to understand processes used by these automated decision systems, yet these systems are increasingly used to make core government and business decisions impacting the civil rights and liberties of Washingtonians, raising significant concerns around due process, fairness, accountability, and transparency.
(5) A growing body of research shows that reliance on automated decision systems without adequate transparency, oversight, or safeguards can undermine market predictability, harm consumers, and deny historically disadvantaged or vulnerable groups the full measure of their civil rights and liberties.
(6) Research has shown that even the most innocent-looking management tools often incorporate and compound the assumptions of institutional racism and other unfounded stereotypes. It is a matter of good governance to ensure that agencies consider whether the technologies they use improperly advantage or disadvantage Washington residents.
(7) In order to enhance innovation and ensure the use of these systems in ways that benefit Washington residents, the legislature intends to ensure the fair, transparent, and accountable use of automated decision systems.
NEW SECTION.  Sec. 2. The definitions in this section apply throughout this chapter unless the context clearly requires otherwise.
(1) "Agency" or "public agency" means any state executive office, agency, department, board, commission, committee, educational institution, or other state agency created by or pursuant to statute, other than courts and the legislature.
(2) "Algorithm" means a computerized procedure consisting of a set of steps to accomplish a determined task.
(3) "Algorithmic accountability report" means the report with content enumerated in section 5(4) of this act.
(4) "Algorithmic accountability review board" means the algorithmic accountability review board established under section 6 of this act.
(5)(a) "Automated decision system" means any algorithm, including one incorporating machine learning or other artificial intelligence techniques, that uses data-based analysis or calculations to make or support government decisions, judgments, or conclusions that cause a Washington resident or business to be treated differently than another Washington resident or business or results in statistically significant disparities with other classes of persons or businesses in the nature or amount of governmental interaction with that individual or business including, without limitation, benefits, protections, procurement processes, required payments, penalties, regulations, or timing, application, or process requirements.
(b) "Automated decision system" does not include tools that do not make or support governmental decisions, judgments, or conclusions that cause a Washington resident or business to be treated differently than another Washington resident or business in the nature or amount of government interaction with that individual or business including, without limitation, internal governmental computer server or electrical usage optimization, antivirus programs, and internal governmental space optimization programs.
(6) "Automated final decision system" means an automated decision system that makes final decisions, judgments, or conclusions without human intervention.
(7) "Automated support decision system" means an automated decision system that provides information to inform the final decision, judgment, or conclusion of a human decision maker.
(8) "Automation bias" means the tendency for humans to overestimate the accuracy of decision support and decision-making systems and ignore contradictory information made without automation.
(9) "Identified or identifiable natural persons" means a human being who can be readily identified, directly or indirectly.
(10) "Office" means the office of the state chief information officer established under RCW 43.105.205.
(11) "People" includes a natural person, corporation, limited liability company, limited liability partnership, partnership, or public or private organization or entity of any character, except where otherwise restricted.
(12) "Use" means to operate an automated decision system or to contract with a third party to operate an automated decision system to automate, aid, or replace any decision-making process that would otherwise be made by an agency.
NEW SECTION.  Sec. 3. By January 1, 2024, the office shall, in consultation with the office of equity:
(1) Adopt guidance for agencies regarding the development, procurement, and use of automated decision systems by a public agency. This guidance must incorporate the minimum standards and procedures set forth in sections 4 and 5 of this act with respect to automated decision systems. In adopting the guidance, the office must consult with representatives of communities whose rights are disproportionately impacted by automated decision systems as demonstrated by current studies; and
(2) Develop guidance for agencies to use when prioritizing analysis of automated decision systems. The guidance must include a prioritization framework or frameworks for identifying the order in which to examine existing and proposed automated decision systems. This prioritization framework may include criteria such as whether the system: Creates significant effects on identified or identifiable natural persons; affects many people; involves a high risk of error or bias; has been developed without transparency of the information used to develop the algorithm; or has not been independently tested for bias or inaccuracy. The prioritization framework must include identification of significantly high-risk systems according to the established criteria.
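To make the prioritization framework in subsection (2) concrete, its criteria can be read as inputs to a simple ordering rubric. The act does not prescribe any scoring method; the sketch below is one hypothetical encoding in Python, and every field name, weight, and cutoff is an illustrative assumption rather than anything taken from the bill.

```python
# Hypothetical rubric for ordering systems for review under section 3(2).
# The act lists criteria but no scoring method; weights and the high-risk
# threshold below are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class SystemProfile:
    name: str
    significant_effects_on_persons: bool  # affects identified/identifiable natural persons
    people_affected: int                  # rough count of people the system touches
    high_risk_of_error_or_bias: bool
    development_data_transparent: bool    # was the development data disclosed?
    independently_tested: bool            # independently tested for bias/inaccuracy?

def priority_score(s: SystemProfile) -> int:
    """Higher score = examine sooner under the prioritization framework."""
    score = 0
    score += 3 if s.significant_effects_on_persons else 0
    score += 2 if s.people_affected > 10_000 else 0       # "affects many people"
    score += 3 if s.high_risk_of_error_or_bias else 0
    score += 1 if not s.development_data_transparent else 0
    score += 1 if not s.independently_tested else 0
    return score

def is_significantly_high_risk(s: SystemProfile) -> bool:
    # Section 3(2) requires identifying significantly high-risk systems;
    # the cutoff of 7 is an arbitrary placeholder.
    return priority_score(s) >= 7

systems = [
    SystemProfile("benefits-eligibility-screen", True, 250_000, True, False, False),
    SystemProfile("mailroom-routing", False, 300, False, True, True),
]
for s in sorted(systems, key=priority_score, reverse=True):
    print(s.name, priority_score(s), is_significantly_high_risk(s))
```

Note that under section 5(1)(d) of this act the prioritization frameworks must be available to the public, so any weights and cutoffs an agency actually adopted would be published alongside its list of systems.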
NEW SECTION.  Sec. 4. Subject to the staged review provisions of this chapter and the responsibility of agencies to establish priorities and timelines for compliance, the legislature finds that the following minimum standards should apply to a public agency's development, procurement, or use of an automated decision system:
(1) Agencies and the office, in consultation with the office of equity, should adopt interim and then long-term prioritization frameworks for allocating resources to address existing and future automated decision systems and to address any deficiencies found in compliance with this section. The prioritization frameworks should be used in determining the level of resources to be devoted first to examining existing and proposed systems and then to meeting the other requirements of this section.
(2) As a part of the procurement process, agencies should assess new automated decision systems procured by them. The assessment should include evaluation of the potential impacts of the automated decision system on: (a) The risk to the rights and freedoms of an identified or identifiable natural person; (b) the existence or risk of bias or inaccuracy in the results of the system; and (c) whether the workings of the system are transparent to the public.
(3) Automated decision systems currently in use by the state that produce legal effects on identified or identifiable natural persons should be assessed according to the prioritization framework. The assessment should include the existence or risk of bias or inaccuracy in the results and how transparent the system use and impacts are to the public.
(4) Agencies should provide transparency regarding the use, procurement, and development of automated decision systems that produce legal effects on identified or identifiable natural persons, including monitoring or testing of those systems for accuracy and bias.
(5) Ongoing monitoring or auditing should be performed on automated decision systems that have legal effects on identified or identifiable natural persons to ensure that they do not have differential effects on subpopulations that result over time, or discriminate against an individual, or treat an individual less favorably than another, in whole or in part, on the basis of one or more factors enumerated in RCW 49.60.010.
(6) Agencies should provide training of state employees who develop, procure, operate, or use automated decision systems as to the risk of automation bias.
(7) A public agency that develops, procures, or uses an automated decision system must follow any conditions set forth in the relevant algorithmic accountability report.
(8) Subject to the staged implementation as outlined in this chapter, a public agency must, at a minimum:
(a) Give clear notice in plain language to the people impacted by the automated decision system of the following:
(i) The fact that the system is in use;
(ii) The system's name, vendor, and version;
(iii) What decision or decisions it will be used to make or support;
(iv) Whether it is an automated final decision system or automated support decision system and whether and through what process a human verifies or confirms decisions made by the automated decision system;
(v) What policies and guidelines apply to its deployment; and
(vi) How people may contest any decision made involving the automated decision system as required pursuant to this section;
(b) Ensure that with respect to newly acquired automated decision systems and, to the maximum extent practicable with respect to existing automated decision systems, the system and the data used to develop the system are made freely available by the vendor before, during, and after deployment for agency or independent third-party testing, auditing, or research to understand its impacts, including potential bias, inaccuracy, or disparate impacts, provided that the vendor may specify that an independent third party examining proprietary trade secrets shall reveal only the outcome of the examination, and not the content of the trade secrets;
(c) Ensure that any decision made or informed by the automated decision system is subject to appeal, to immediate suspension if a legal right, duty, or privilege is impacted by the decision, and to potential reversal by a human decision maker through a timely process, not to exceed 20 days, that is clearly described and accessible to people impacted by the decision; and
(d) Ensure the agency can explain the basis for its decision to any impacted people in terms understandable to a layperson including, without limitation, by requiring the vendor to create such an explanation.
(9) A procurement contract for an automated decision system entered into by a public agency after the effective date of this section must ensure that the minimum standards set forth in this section are able to be effectuated without impairment, including requiring the vendor to waive any legal claims that may impair these minimum standards. Such a contract may not contain nondisclosure or other provisions that prohibit or impair these minimum standards.
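The six notice elements in subsection (8)(a) map naturally onto a structured public record. As a minimal sketch only, assuming hypothetical field names and example values that do not come from the bill:

```python
# Hypothetical record mirroring the six subsection (8)(a) disclosures.
# The act specifies content, not format; names and values are placeholders.
from dataclasses import dataclass
from typing import Literal

@dataclass
class SystemNotice:
    in_use: bool                        # (i) the system is in use
    name: str                           # (ii) name, vendor, and version
    vendor: str
    version: str
    decisions_supported: list[str]      # (iii) decisions made or supported
    kind: Literal["final", "support"]   # (iv) final vs. support system
    human_review_process: str           # (iv) how a human verifies decisions
    policies_and_guidelines: list[str]  # (v) applicable policies and guidelines
    contest_process: str                # (vi) how to contest a decision

notice = SystemNotice(
    in_use=True,
    name="EligibilityScreen", vendor="ExampleCo", version="2.1",
    decisions_supported=["initial public benefits eligibility"],
    kind="support",
    human_review_process="A caseworker reviews every denial before it issues.",
    policies_and_guidelines=["Agency ADS Policy 100-01 (placeholder)"],
    contest_process="Appeal to a human decision maker within 20 days.",
)
```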
NEW SECTION.  Sec. 5. The intent of this section is to structure the way in which public agencies examine their existing and proposed automated decision systems and to identify for the legislature, the governor, and the public instances in which such examination is either incomplete or reveals that the applicable automated decision system fails to meet the minimum requirements of section 4 of this act. Subject to such intent:
(1) Agencies already using an automated decision system as of the effective date of this section must provide a list of automated decision systems in use to the algorithmic accountability review board by January 1, 2024, and use the prioritization framework established under section 3 of this act or adopt and implement an interim prioritization framework to identify the order in which to complete an algorithmic accountability report on each existing automated decision system by January 1, 2026. For the purpose of this subsection:
(a) The algorithmic accountability report must, at minimum, include clear and understandable statements based on information already available to the agency.
(b) The algorithmic accountability report must accurately report only the actual direct knowledge contained in the files. For example, if the files contain a statement from the vendor that the system has been examined for bias but there is no report available for examination, the agency may not report that the system has been examined for bias and must instead report that the vendor states that the system has been examined for bias.
(c) Agencies may include information not already contained in their files. For example, a bias report conducted by a third party may be included.
(d) The list of systems and prioritization frameworks must be available to the public and may include criteria such as whether the system: Creates significant effects on identified or identifiable natural persons; affects many people; involves a high risk of error or bias; has been developed without transparency of the information used to develop the algorithm; or has not been independently tested for bias or inaccuracy.
(e) For systems that involve high risk pursuant to the prioritization framework, the algorithmic accountability report must include an evaluation of accuracy and bias by a qualified independent third party, and if such a report does not currently exist it must nevertheless be prepared and included to meet the timelines for submission of an algorithmic accountability report on such system.
(f) If an agency does not complete an algorithmic accountability report for each automated decision system already in use by January 1, 2026, then, unless the agency has been evaluating their systems in good faith based on the established prioritization framework and is granted an extension by the algorithmic accountability review board, the agency must cease use of the unevaluated automated decision system until such time as an extension is granted or the algorithmic accountability report is provided.
(g) Any request for extension of the deadline must include a timeline for when each algorithmic accountability report will be provided by the agency.
(h) The algorithmic accountability review board shall grant an extension for the continued use of a system if the agency has established a reasonable timeline for completion of the algorithmic accountability report and there is no apparent likelihood of bias regarding the system.
(i) The algorithmic accountability review board must report annually on agency compliance with this subsection and any extensions granted under this subsection. The report must be made available to the public.
(2) A public agency intending to newly develop or procure an automated decision system for use between the effective date of this section and January 1, 2026, must, as a condition of use of such system and at least one month prior to procurement or, if internally developed, implementation of the system, produce and file with the office an algorithmic accountability report for that system as described in subsection (4) of this section. In addition to using information already available to the agency, the agency shall conduct reasonable investigatory due diligence including, but not limited to, inquiring with a system provider as to whether studies have been conducted and requesting copies of any studies. For systems that involve high risk pursuant to the prioritization framework, the algorithmic accountability report must include an evaluation of accuracy and bias by a qualified independent third party.
(3) An agency intending to develop or procure an automated decision system for implementation after January 1, 2026, must, as a condition of use of such automated decision system, submit an algorithmic accountability report as described in subsection (4) of this section and obtain a finding by the algorithmic accountability review board pursuant to (d) of this subsection. In addition to using information already available to an agency, the agency shall conduct reasonable investigatory due diligence including, but not limited to, inquiring of a system provider if studies have been conducted and requesting copies of any studies. For systems that involve high risk pursuant to the prioritization framework, the algorithmic accountability report must include an evaluation of accuracy and bias by a qualified independent third party.
(a) The office must post the algorithmic accountability reports on its public website and invite public comment on the algorithmic accountability report for a period of no less than 30 days.
(b) The algorithmic accountability review board may adopt scoring criteria for determining whether the agency's algorithmic accountability report reasonably shows that the automated decision system meets the minimum standards set forth in section 4 of this act.
(c) After receiving public comment, the algorithmic accountability review board must review the algorithmic accountability report and comments received to determine whether the agency's algorithmic accountability report fails to reasonably show that the automated decision system meets the minimum standards set forth in section 4 of this act.
(d) On the basis of its review of an algorithmic accountability report, the algorithmic accountability review board shall find that the algorithmic accountability report: (i) Reasonably demonstrates that the system meets the minimum standards set forth in section 4 of this act; or (ii) fails, by stated fact or by omission, to show that the system meets the minimum standards set forth in section 4 of this act.
(e) The report of a failure to meet the minimum standards of section 4 of this act must provide a reasonably detailed description from the algorithmic accountability review board of the reasons for the finding and may, but is not required to, be accompanied by a statement from the algorithmic accountability review board of what further information, changes, or both to the content of the algorithmic accountability report or the operation of the automated decision system could result in a finding that the agency's algorithmic accountability report reasonably shows that the automated decision system meets the minimum standards of section 4 of this act.
(f) Following a finding that the agency's algorithmic accountability report fails to show that an automated decision system meets the minimum standards of section 4 of this act, the applicable agency shall be entitled to revise the information provided, the system, or the procedures for use of the system and to submit a revised algorithmic accountability report to the algorithmic accountability review board for review.
(g) All findings and reports of the algorithmic accountability review board regarding whether a system meets the minimum requirements of section 4 of this act shall be posted on the office's website, and a copy of any reports finding a failure to meet the minimum requirements of section 4 of this act shall be independently transmitted to the legislature and the governor.
(4) Each algorithmic accountability report must include clear and understandable statements of the following:
(a) The automated decision system's name, vendor, and version;
(b) A description of the automated decision system's general capabilities, including reasonably foreseeable capabilities outside the scope of the agency's proposed use and whether the automated decision system is used or may be used to deploy or trigger any weapon;
(c) A description of the purpose and proposed use of the automated decision system, including:
(i) What decision or decisions the system will be used to make or support;
(ii) Whether it is an automated final decision system or automated support decision system; and
(iii) Its intended benefits, including any data or research demonstrating those benefits and whether and where such data or research may be viewed by the public;
(d)(i) The type or types of data inputs that the technology uses; (ii) how that data is generated, collected, and processed; and (iii) the type or types of data the system is reasonably likely to generate;
(e) Whether there was an examination of potential inaccuracies or bias, or both, created during the automated decision system's development, design, or implementation as a result of the nature of the data used to inform the system or the system design. If such an examination was performed, the report must also include a description of the individual or entity who performed the examination, the nature of the examination with sufficient specificity to allow evaluation of its validity, and the results, including any steps taken to address the potential inaccuracies or bias, or both;
(f) Whether implementation of the system has produced known erroneous results. If erroneous results were produced, a description of those errors, including the results of any audits conducted to check for erroneous results, together with any steps taken to address the reasons for the erroneous results must also be included in the report;
(g) Whether and how people affected by a system decision can review and challenge the basis for that system decision, and a description of the results of any such challenges;
(h) A description of any public or community engagement held, whether people and communities affected by the system were consulted, what actions were taken in response to public and community input, and any future public or community engagement plans in connection with the design or use of the automated decision system;
(i) Whether the decision algorithm is available for examination by the agency or the public, or both, and to what extent;
(j) A description of how the agency plans to comply with each requirement set forth in section 4 of this act;
(k) Whether the automated decision system makes decisions affecting the constitutional or legal rights, duties, or privileges of any Washington resident;
(l) Whether the system's decisions intentionally differentially affect members of protected classes, such as by selecting persons with disabilities for certain benefits;
(m) Whether any of the decision criteria are mandated by statute and, if so, which criteria and by what statutes;
(n) Whether there exists a clear use and data management policy, including specific protocols for the following:
(i) How and when the automated decision system will be deployed or used and by whom including, but not limited to: The factors that will be used to determine where, when, and how the technology is deployed; and other relevant information, such as whether the technology will be operated continuously or used only under specific circumstances. If the automated decision system will be operated or used by another entity on the agency's behalf, the algorithmic accountability report must explicitly include a description of the other entity's access and any applicable protocols;
(ii) Any additional rules that will govern use of the automated decision system and what processes will be required prior to each use of the automated decision system;
(iii) How automated decision system data will be securely stored and accessed, and whether an agency intends to share access to the automated decision system or the data from that automated decision system with any other entity, and why; and
(iv) How the agency will ensure that all personnel who operate the automated decision system or access its data are properly trained and able to ensure compliance with the use and data management policy prior to the use of the automated decision system; and
(o) A description of the fiscal impact of the automated decision system, including:
(i) Initial acquisition costs;
(ii) A reasonable estimate of ongoing operating costs such as maintenance, licensing, personnel, legal compliance, use auditing, data retention, and security costs;
(iii) A reasonable estimate of cost savings that would be achieved through the use of the technology; and
(iv) Any current or currently identified potential sources of funding, including any subsidies, incentives, or free products being offered by vendors or governmental entities.
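Read together, items (a) through (o) of subsection (4) amount to a de facto schema for the algorithmic accountability report. A sketch of that schema follows, assuming hypothetical field names throughout; the act mandates the content, not any particular data format.

```python
# Hypothetical schema for the subsection (4) report contents (a)-(o).
# Field names are illustrative; the act does not prescribe a format.
from dataclasses import dataclass
from typing import Optional

@dataclass
class FiscalImpact:                                # (o)
    acquisition_cost: float                        # (o)(i)
    estimated_annual_operating_cost: float         # (o)(ii)
    estimated_annual_savings: float                # (o)(iii)
    funding_sources: list[str]                     # (o)(iv)

@dataclass
class AccountabilityReport:
    name: str                                      # (a)
    vendor: str                                    # (a)
    version: str                                   # (a)
    capabilities: str                              # (b) incl. foreseeable other uses, weapons
    purpose_and_use: str                           # (c)(i)-(iii)
    data_inputs: list[str]                         # (d)(i)
    data_provenance: str                           # (d)(ii) how data is generated/collected
    data_generated: list[str]                      # (d)(iii)
    bias_examination: Optional[str]                # (e) who, method, results, remediation
    known_errors: Optional[str]                    # (f)
    challenge_process: str                         # (g)
    community_engagement: str                      # (h)
    algorithm_examinable: bool                     # (i)
    section4_compliance_plan: str                  # (j)
    affects_legal_rights: bool                     # (k)
    differential_effects_by_design: Optional[str]  # (l)
    statutory_criteria: list[str]                  # (m)
    use_and_data_management_policy: str            # (n) protocols (i)-(iv)
    fiscal_impact: FiscalImpact                    # (o)
```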
NEW SECTION.  Sec. 6. (1) The algorithmic accountability review board is created within the office.
(2) The board shall represent diverse stakeholders and consist of the following voting members:
(a) The director of the office who shall serve as chair of the board;
(b) Six members appointed by the governor: two of whom shall be representatives of state agencies or institutions; two of whom shall be representatives of consumer protection organizations; and two of whom shall be representatives of civil rights organizations or advocacy organizations that represent individuals or protected classes of historically marginalized communities including, but not limited to, African American, Hispanic American, Native American, and Asian American communities, religious minorities, and protest and activist groups. Of the state agency representatives, at least one must have direct experience using automated decision systems overseen by the board;
(c) Two members shall represent the house of representatives and shall be selected by the speaker of the house of representatives with one representative chosen from each major caucus of the house of representatives;
(d) Two members shall represent the senate and shall be appointed by the president of the senate with one representative chosen from each major caucus of the senate.
(3) Of the initial members appointed by the governor, three must be appointed for a one-year term, and three must be appointed for a two-year term. Thereafter members must be appointed for three-year terms.
(4) Initial appointments to the board must be made by January 1, 2024.
(5) Vacancies shall be filled in the same manner that the original appointments were made for the remainder of the member's term.
(6) Members of the board shall be reimbursed for travel expenses as provided in RCW 43.03.050 and 43.03.060.
(7) The office shall provide staff support to the board.
NEW SECTION.  Sec. 7. (1) Beginning December 1, 2023, and updated not less than quarterly, the office shall make publicly available on its website an inventory of all algorithmic accountability reports on automated decision systems that have been proposed for or are being used, developed, or procured by public agencies.
(2) Beginning January 1, 2024, the office shall make publicly available on its website metrics on all approvals, conditional approvals, or denials of agency algorithmic accountability reports to develop or procure automated decision systems for use by agencies, including written explanations of each decision.
(3) For automated decision systems implemented prior to January 1, 2026:
(a) The algorithmic accountability review board shall conduct selective audits of the applicable algorithmic accountability reports and shall make appropriate findings with regard to whether the agency's algorithmic accountability report reasonably shows that the automated decision system audited meets the minimum standards of section 4 of this act. The selective audits conducted must also contain the elements described in subsection (6) of this section. In selecting which systems to audit, the algorithmic accountability review board may take into account:
(i) The number of persons affected by the automated decision system, including systems in use by multiple jurisdictions;
(ii) The apparent likelihood that the system creates unintended, erroneous, or discriminatory results;
(iii) The severity of the effects of an unintended, erroneous, or discriminatory decision on the affected people; and
(iv) Other criteria as the algorithmic accountability review board deems appropriate to a selective audit.
(b) The office shall establish guidelines by January 1, 2024, for the number or percentage of algorithmic accountability reports to be audited by the algorithmic accountability review board pursuant to (a) of this subsection.
(4)(a) Beginning January 1, 2026, the algorithmic accountability review board shall conduct an annual review of agency audits and compile the information into a report that includes the following:
(i) Whether each agency that uses, develops, or procures an automated decision system has complied with the terms of its approved algorithmic accountability report;
(ii) Descriptions of any known or reasonably suspected violations of any algorithmic accountability report policies;
(iii) Any systematic issues, such as bias and disproportionate impacts on marginalized or vulnerable communities, raised by use of automated decision systems; and
(iv) Recommendations, if any, relating to revisions to this chapter or to specific algorithmic accountability reports.
(b) The first annual report on agency audits must be made publicly available on the office's website by March 1, 2027, and annually thereafter on or before March 1st.
(5) Beginning January 1, 2025, each agency using an automated decision system must publish on its website annual metrics regarding the number of requests for human review of a decision rendered by the automated decision system it received and the outcome of the human review.
(6) Beginning January 1, 2026, agencies shall conduct an annual audit of automated decision systems that have legal effects on people to ensure that they do not have differential effects on subpopulations that result over time, and shall report any findings to the algorithmic accountability review board. The report must include, at minimum:
(a) Whether the automated decision system has complied with the terms of its approved algorithmic accountability report;
(b) Descriptions of any known or reasonably suspected violations of any report policies;
(c) Any systematic issues, such as bias and disproportionate impacts on marginalized or vulnerable communities, raised by use of automated decision systems; and
(d) Recommendations, if any, relating to revisions to the automated decision system algorithmic accountability report.
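Subsection (6), like section 4(5), asks agencies to check whether a system develops differential effects on subpopulations over time. One conventional check an auditor might run, though the act nowhere names it or any other statistical test, is the "four-fifths" disparate impact ratio; the sketch below assumes hypothetical function names and toy data.

```python
# Hypothetical subsection (6) audit check: the four-fifths disparate impact
# ratio. The act requires monitoring for differential effects but does not
# prescribe this (or any) test; it is shown only as an illustration.
from collections import defaultdict

def favorable_rates(decisions):
    """decisions: iterable of (subpopulation, favorable: bool) pairs."""
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        favorable[group] += 1 if ok else 0
    return {g: favorable[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag groups whose favorable-outcome rate falls below `threshold`
    times the best-treated group's rate (the conventional four-fifths rule)."""
    rates = favorable_rates(decisions)
    best = max(rates.values())
    return {g: rate for g, rate in rates.items() if rate < threshold * best}

# Toy data: group B's approval rate (1/3) is under 0.8 x group A's (2/3).
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(disparate_impact_flags(sample))  # {'B': 0.333...}
```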
NEW SECTION.  Sec. 8. Any person who is injured by a material violation of this chapter may institute proceedings against the public agency deploying the automated decision system in a court of competent jurisdiction for injunctive or declaratory relief, or both, to compel compliance with this chapter, and for all relief available in law or equity with respect to section 9 of this act, and, in either event, if successful, shall be entitled to recover their reasonable attorneys' fees and costs.
NEW SECTION.  Sec. 9. A new section is added to chapter 49.60 RCW to read as follows:
Except to the extent an automated decision system utilizes a criterion specifically mandated by state or federal law or regulation, it is an unfair practice under this section for any automated decision system to discriminate against an individual, or to treat an individual less favorably than another, in whole or in part, on the basis of one or more factors enumerated in RCW 49.60.010. For the purposes of this section, "automated decision system" has the same meaning as defined in section 2 of this act.
NEW SECTION.  Sec. 10. Sections 1 through 8 of this act constitute a new chapter in Title 43 RCW.
NEW SECTION.  Sec. 11. This act is necessary for the immediate preservation of the public peace, health, or safety, or support of the state government and its existing public institutions, and takes effect immediately.
--- END ---