WSR 24-10-025
OFFICE OF THE
INSURANCE COMMISSIONER
[Filed April 22, 2024, 1:36 p.m.]
Technical Assistance Advisory 2024-02 [1]
[1] This advisory is a policy statement released to advise the public of the Washington state office of the insurance commissioner's (OIC) current opinions, approaches, and likely courses of action. It is advisory only. RCW 34.05.230(1).
TO: All Insurers Licensed to Do Business in Washington (Insurers)
FROM: Washington State Office of the Insurance Commissioner
DATE: April 22, 2024
RE: The Use of Artificial Intelligence Systems in Insurance [2]
[2] This advisory is based upon a model bulletin adopted by the National Association of Insurance Commissioners (NAIC): https://content.naic.org/sites/default/files/inline-files/2023-12-4%20Model%20Bulletin_Adopted_0.pdf.
This bulletin is issued by OIC to remind all insurers that hold certificates of authority to do business in the state that decisions or actions impacting consumers that are made or supported by advanced analytical and computational technologies, including artificial intelligence (AI) systems (as defined below), must comply with all applicable insurance laws and regulations. This includes those laws that address unfair trade practices and unfair discrimination. This bulletin sets forth what OIC considers best practices for how insurers should govern the development, acquisition, and use of certain AI technologies, including the AI systems described herein. This bulletin also advises insurers of the type of information and documentation that OIC may request during an investigation or examination of any insurer regarding its use of such technologies and AI systems.
SECTION 1: INTRODUCTION, BACKGROUND, AND LEGISLATIVE AUTHORITY
Background: AI is transforming the insurance industry. AI techniques are deployed across all stages of the insurance life cycle, including product development, marketing, sales and distribution, underwriting and pricing, policy servicing, claim management, and fraud detection.
AI may facilitate the development of innovative products, improve consumer interface and service, simplify and automate processes, and promote efficiency and accuracy. However, AI, including AI systems, can present unique risks to consumers, including the potential for inaccuracy, unfair discrimination, data vulnerability, and lack of transparency and explainability. Insurers should take actions to minimize these risks.
OIC encourages the development and use of innovation and AI systems that contribute to safe and stable insurance markets. However, OIC expects that decisions made and actions taken by insurers using AI systems will comply with all applicable federal and state laws and regulations. OIC recognizes the principles of artificial intelligence that NAIC adopted in 2020 as an appropriate source of guidance for insurers as they develop and use AI systems. Those principles emphasize the importance of the fairness and ethical use of AI; accountability; compliance with state laws and regulations; transparency; and a safe, secure, fair, and robust system. These fundamental principles should guide insurers in their development and use of AI systems and underlie the advice set forth in this bulletin.
Legislative Authority: [3]
[3] Additional requirements of insurers exist in Title 48 RCW and Title 284 WAC. The cited laws and regulations are a general, nonexhaustive list.
The regulatory advice and oversight considerations set forth in Sections 3 and 4 of this bulletin rely on the following laws and regulations:
Unfair Trade Practices: The Unfair Trade Practices Act, chapter 48.30 RCW (UTPA), regulates trade practices in insurance by: (1) Defining practices that constitute unfair methods of competition or unfair or deceptive acts and practices; and (2) prohibiting the trade practices so defined or determined.
Unfair Claims Settlement Practices: The Unfair Claims Settlement Practices Act, WAC 284-30-300 through 284-30-390 (UCSPA), sets forth standards for the investigation and disposition of claims arising under policies or certificates of insurance issued to residents of Washington.
Unfair Discrimination: RCW 48.18.480 prohibits unfair discrimination between insureds having substantially like insuring, risk, and exposure factors, and expense elements, in the terms or conditions of any insurance contract, or in the rate or amount of premium charged therefor, or in the benefits payable or in any other rights or privileges accruing thereunder.
Actions taken by insurers in the state must not violate UTPA or UCSPA, regardless of the methods the insurer used to determine or support its actions. As discussed below, insurers are encouraged to adopt practices, including governance frameworks and risk management protocols, that are designed to ensure that the use of AI systems does not result in: (1) Unfair trade practices, as defined in chapters 284-30 WAC and 48.30 RCW; or (2) unfair claims settlement practices, as defined in WAC 284-30-300 through 284-30-390.
Corporate Governance: The Corporate Governance Annual Disclosure Act, chapter 48.195 RCW (CGAD), requires insurers to report on governance practices and to provide a summary of the insurer's corporate governance structure, policies, and practices. The content, form, and filing requirements for CGAD information are set forth in the Corporate Governance Annual Disclosure Model Regulation, as adopted in WAC 284-07-700 through 284-07-740 (CGAD-R).
The requirements of CGAD and CGAD-R apply to elements of the insurer's corporate governance framework that address the insurer's use of AI systems to support actions and decisions that impact consumers.
Insurance Rating: The property and casualty model rating law, RCW 48.19.020, requires that property/casualty (P/C) insurance rates not be excessive, inadequate, or unfairly discriminatory.
The requirements of RCW 48.19.020 apply regardless of the methodology that the insurer used to develop rates, rating rules, and rating plans subject to those provisions. That means that an insurer is responsible for assuring that rates, rating rules, and rating plans that are developed using AI techniques and predictive models that rely on data and machine learning do not result in excessive, inadequate, or unfairly discriminatory insurance rates with respect to all forms of casualty insurance, including fidelity, surety, and guaranty bond, and to all forms of property insurance, including fire, marine, and inland marine insurance, and any combination of any of the foregoing.
Market Conduct Surveillance: The market conduct surveillance model law, chapter 48.37 RCW, establishes the framework pursuant to which OIC conducts market conduct actions. These comprise the full range of activities that OIC may initiate to assess and address the market practices of insurers, beginning with market analysis and extending to targeted examinations. Market conduct actions are separate from, but may result from, individual complaints made by consumers asserting illegal practices by insurers.
An insurer's conduct in the state, including its use of AI systems to make or support actions and decisions that impact consumers, is subject to investigation, including market conduct actions. Section 4 of this bulletin provides guidance on the kinds of information and documents that OIC may request in the context of an AI-focused investigation, including a market conduct action.
SECTION 2: DEFINITIONS: For the purposes of this bulletin, the following terms are defined:
"Adverse Consumer Outcome" refers to a decision by an insurer that is subject to insurance regulatory standards enforced by OIC that adversely impacts the consumer in a manner that violates those standards.
"Algorithm" means a clearly specified mathematical process for computation; a set of rules that, if followed, will give a prescribed result.
"AI System" is a machine-based system that can, for a given set of objectives, generate outputs such as predictions, recommendations, content (such as text, images, videos, or sounds), or other output influencing decisions made in real or virtual environments. AI systems are designed to operate with varying levels of autonomy.
"Artificial Intelligence (AI)" refers to a branch of computer science that uses data processing systems that perform functions normally associated with human intelligence, such as reasoning, learning, and self-improvement, or the capability of a device to perform functions that are normally associated with human intelligence such as reasoning, learning, and self-improvement. This definition considers machine learning to be a subset of AI.
"Degree of Potential Harm to Consumers" refers to the severity of adverse economic impact that a consumer might experience as a result of an adverse consumer outcome.
"Generative Artificial Intelligence (Generative AI)" refers to a class of AI systems that generate content in the form of data, text, images, sounds, or video, that is similar to, but not a direct copy of, preexisting data or content.
"Machine Learning (ML)" refers to a field within artificial intelligence that focuses on the ability of computers to learn from provided data without being explicitly programmed.
"Model Drift" refers to the decay of a model's performance over time arising from underlying changes such as the definitions, distributions, and/or statistical properties between the data used to train the model and the data on which it is deployed.
"Predictive Model" refers to the mining of historic data using algorithms and/or machine learning to identify patterns and predict outcomes that can be used to make or support the making of decisions.
"Third Party," for purposes of this bulletin, means an organization other than the insurer that provides services, data, or other resources related to AI.
SECTION 3: REGULATORY GUIDANCE: Decisions subject to regulatory oversight that are made by insurers using AI systems must comply with the legal and regulatory standards that apply to those decisions, including unfair trade practice laws. These standards require, at a minimum, that decisions made by insurers are not inaccurate, arbitrary, capricious, or unfairly discriminatory. Compliance with these standards is required regardless of the tools and methods insurers use to make such decisions. However, because, in the absence of proper controls, AI has the potential to increase the risk of inaccurate, arbitrary, capricious, or unfairly discriminatory outcomes for consumers, it is important that insurers adopt and implement controls specifically related to their use of AI that are designed to mitigate the risk of adverse consumer outcomes.
Consistent therewith, all insurers authorized to do business in this state are encouraged to develop, implement, and maintain a written program (an "AIS Program") for the responsible use of AI systems that make or support decisions related to regulated insurance practices. The AIS program should be designed to mitigate the risk of adverse consumer outcomes, including, at a minimum, violations of the statutory provisions set forth in Section 1 of this bulletin.
OIC recognizes that robust governance, risk management controls, and internal audit functions play a core role in mitigating the risk that decisions driven by AI systems will violate unfair trade practice laws and other applicable existing legal standards. OIC also encourages the development and use of verification and testing methods to identify errors and bias in predictive models and AI systems, as well as the potential for unfair discrimination in the decisions and outcomes resulting from the use of predictive models and AI systems.
The controls and processes that an insurer adopts and implements as part of its AIS program should be reflective of, and commensurate with, the insurer's own assessment of the degree and nature of risk posed to consumers by the AI systems that it uses, considering: (i) The nature of the decisions being made, informed, or supported using the AI system; (ii) the type and degree of potential harm to consumers resulting from the use of AI systems; (iii) the extent to which humans are involved in the final decision-making process; (iv) the transparency and explainability of outcomes to the impacted consumer; and (v) the extent and scope of the insurer's use or reliance on data, predictive models, and AI systems from third parties. Similarly, controls and processes should be commensurate with both the risk of adverse consumer outcomes and the degree of potential harm to consumers.
As discussed in Section 4, the decisions made as a result of an insurer's use of AI systems are subject to OIC's examination to determine that the reliance on AI systems is compliant with all applicable existing legal standards governing the conduct of the insurer.
AIS Program Guidelines:
1.0 General Guidelines
1.1 The AIS program should be designed to mitigate the risk that the insurer's use of an AI system will result in adverse consumer outcomes.
1.2 The AIS program should address governance, risk management controls, and internal audit functions.
1.3 The AIS program should vest responsibility for the development, implementation, monitoring, and oversight of the AIS program and for setting the insurer's strategy for AI systems with senior management accountable to the board or an appropriate committee of the board.
1.4 The AIS program should be tailored to and proportionate with the insurer's use and reliance on AI and AI systems. Controls and procedures should be focused on the mitigation of adverse consumer outcomes and the scope of the controls and procedures applicable to a given AI system use case should reflect and align with the degree of potential harm to consumers with respect to that use case.
1.5 The AIS program may be independent of or part of the insurer's existing enterprise risk management (ERM) program. The AIS program may adopt, incorporate, or rely upon, in whole or in part, a framework or standards developed by an official third-party standards organization, such as the National Institute of Standards and Technology (NIST) Artificial Intelligence Risk Management Framework, Version 1.0.
1.6 The AIS program should address the use of AI systems across the insurance life cycle, including areas such as product development and design, marketing, use, underwriting, rating and pricing, case management, claim administration and payment, and fraud detection.
1.7 The AIS program should address all phases of an AI system's life cycle, including design, development, validation, implementation (both systems and business), use, ongoing monitoring, updating, and retirement.
1.8 The AIS program should address the AI systems used with respect to regulated insurance practices whether developed by the insurer or a third-party vendor.
1.9 The AIS program should include processes and procedures providing notice to impacted consumers that AI systems are in use and providing access to appropriate levels of information based on the phase of the insurance life cycle in which the AI systems are being used.
2.0 Governance
The AIS program should include a governance framework for the oversight of AI systems used by the insurer. Governance should prioritize transparency, fairness, and accountability in the design and implementation of the AI systems, recognizing that proprietary and trade secret information must be protected. An insurer may consider adopting new internal governance structures or rely on the insurer's existing governance structures; however, in developing its governance framework, the insurer should consider addressing the following items:
2.1 The policies, processes, and procedures, including risk management and internal controls, to be followed at each stage of an AI system life cycle, from proposed development to retirement.
2.2 The requirements adopted by the insurer to document compliance with the AIS program policies, processes, procedures, and standards. Documentation requirements should be developed with Section 4 in mind.
2.3 The insurer's internal AI system governance accountability structure, such as:
(a) The formation of centralized, federated, or otherwise constituted committees composed of representatives from appropriate disciplines and units within the insurer, such as business units, product specialists, actuarial, data science and analytics, underwriting, claims, compliance, and legal.
(b) Scope of responsibility and authority, chains of command, and decisional hierarchies.
(c) The independence of decision-makers and lines of defense at successive stages of the AI system life cycle.
(d) Monitoring, auditing, escalation, and reporting protocols and requirements.
(e) Development and implementation of ongoing training and supervision of personnel.
2.4 Specifically with respect to predictive models: The insurer's processes and procedures for designing, developing, verifying, deploying, using, updating, and monitoring predictive models, including a description of methods used to detect and address errors, performance issues, outliers, or unfair discrimination in the insurance practices resulting from the use of the predictive model (one simple screening approach is sketched below).
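As a concrete illustration of the kind of monitoring contemplated by 2.4, the following minimal Python sketch compares favorable-outcome rates across two hypothetical groups of otherwise similarly situated consumers. The data, group labels, and threshold are assumptions made for this example only; a flagged disparity is a signal for governance review, not a legal determination of unfair discrimination under Title 48 RCW.

# Illustrative sketch: a simple group-level outcome comparison of the kind
# an insurer's governance process might use to flag a predictive model for
# closer review. The data, group labels, and threshold are hypothetical;
# this is a screening heuristic, not a legal test for unfair discrimination.
import numpy as np

rng = np.random.default_rng(seed=1)

# Hypothetical model decisions (True = favorable outcome) for two groups
# of applicants who are otherwise similarly situated.
group = rng.choice(["A", "B"], size=10_000)
favorable = np.where(group == "A",
                     rng.random(10_000) < 0.80,   # 80% favorable for A
                     rng.random(10_000) < 0.64)   # 64% favorable for B

rates = {g: favorable[group == g].mean() for g in ("A", "B")}
ratio = min(rates.values()) / max(rates.values())
print(f"Favorable-outcome rates: {rates}")
print(f"Outcome-rate ratio: {ratio:.2f}")

# A low ratio does not itself establish unfair discrimination, but it is
# the kind of monitoring signal that should trigger the escalation and
# review protocols described in 2.3 and 2.4.
if ratio < 0.80:
    print("Disparity flagged for review.")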
3.0 Risk Management and Internal Controls
The AIS program should document the insurer's risk identification, mitigation, and management framework and internal controls for AI systems generally and at each stage of the AI system life cycle. Risk management and internal controls should address the following items:
3.1 The oversight and approval process for the development, adoption, or acquisition of AI systems, as well as the identification of constraints and controls on automation and design to align and balance function with risk.
3.2 Data practices and accountability procedures, including data currency, lineage, quality, integrity, bias analysis and minimization, and suitability.
3.3 Management and oversight of predictive models (including algorithms used therein), including:
(a) Inventories and descriptions of the predictive models.
(b) Detailed documentation of the development and use of the predictive models.
(c) Assessments such as interpretability, repeatability, robustness, regular tuning, reproducibility, traceability, model drift, and the auditability of these measurements where appropriate.
3.4 Validating, testing, and retesting as necessary to assess the generalization of AI system outputs upon implementation, including the suitability of the data used to develop, train, validate, and audit the model. Validation can take the form of comparing model performance on unseen data available at the time of model development to the performance observed on data post-implementation, measuring performance against expert review, or other methods (see the illustrative sketch following this list).
3.5 The protection of nonpublic information, particularly consumer information, including unauthorized access to the predictive models themselves.
3.6 Data and record retention.
3.7 Specifically with respect to predictive models: A narrative description of the model's intended goals and objectives and how the model is developed and validated to ensure that the AI systems that rely on such models correctly and efficiently predict or implement those goals and objectives.
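The validation approach described in 3.4 can be illustrated with the following minimal Python sketch, which compares a model's performance on a held-out development sample with its performance on post-implementation data. The model, metric, and data are hypothetical assumptions made for this example; insurers may use any validation method suited to their AI systems.

# Illustrative sketch of the validation approach described in 3.4:
# compare performance on unseen holdout data at development time with
# performance observed on post-implementation data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=2)

def make_data(n, coef2=0.5):
    # Hypothetical two-feature classification data; changing `coef2`
    # simulates a post-deployment change in how the second feature
    # relates to the outcome.
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] + coef2 * X[:, 1] + rng.normal(0, 1, n) > 0).astype(int)
    return X, y

# Development-time data, split into training and unseen holdout sets.
X, y = make_data(10_000)
X_train, X_holdout, y_train, y_holdout = train_test_split(
    X, y, test_size=0.25, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
holdout_auc = roc_auc_score(y_holdout, model.predict_proba(X_holdout)[:, 1])

# Post-implementation data in which the underlying relationship has changed.
X_prod, y_prod = make_data(5_000, coef2=-0.5)
prod_auc = roc_auc_score(y_prod, model.predict_proba(X_prod)[:, 1])

print(f"Holdout AUC at development time: {holdout_auc:.3f}")
print(f"Post-implementation AUC:         {prod_auc:.3f}")
# A material gap between the two suggests the model is not generalizing
# as expected and should be retested, retuned, or retrained.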
4.0 Third-Party AI Systems and Data
Each AIS program should address the insurer's process for acquiring, using, or relying on (i) third-party data to develop AI systems; and (ii) AI systems developed by a third party, which may include, as appropriate, the establishment of standards, policies, procedures, and protocols relating to the following considerations:
4.1 Due diligence and the methods employed by the insurer to assess the third party and its data or AI systems acquired from the third party to ensure that decisions made or supported by such AI systems that could lead to adverse consumer outcomes will meet the legal standards imposed on the insurer itself.
4.2 Where appropriate and available, the inclusion of terms in contracts with third parties that:
(a) Provide audit rights and/or entitle the insurer to receive audit reports by qualified auditing entities.
(b) Require the third party to cooperate with the insurer with regard to regulatory inquiries and investigations related to the insurer's use of the third party's products or services.
4.3 The performance of contractual rights regarding audits and/or other activities to confirm the third party's compliance with contractual and, where applicable, regulatory requirements.
SECTION 4: REGULATORY OVERSIGHT AND EXAMINATION CONSIDERATIONS: OIC's regulatory oversight of insurers includes oversight of an insurer's conduct in the state, including its use of AI systems to make or support decisions that impact consumers. Regardless of the existence or scope of a written AIS program, in the context of an investigation or market conduct action, an insurer can expect to be asked about its development, deployment, and use of AI systems, or any specific predictive model, AI system, or application and its outcomes (including adverse consumer outcomes) from the use of those AI systems, as well as any other information or documentation deemed relevant by OIC.
Insurers should expect those inquiries to include (but not be limited to) the insurer's governance framework, risk management, and internal controls (including the considerations identified in Section 3). In addition to conducting a review of any of the items listed in this bulletin, a regulator may also ask questions regarding any specific model, AI system, or its application, including requests for the following types of information and/or documentation:
1. Information and Documentation Relating to AI System Governance, Risk Management, and Use Protocols:
1.1. Information and documentation related to or evidencing the insurer's AIS program, including:
(a) The written AIS program.
(b) Information and documentation relating to or evidencing the adoption of the AIS program.
(c) The scope of the insurer's AIS program, including any AI systems and technologies not included in or addressed by the AIS program.
(d) How the AIS program is tailored to and proportionate with the insurer's use and reliance on AI systems, the risk of adverse consumer outcomes, and the degree of potential harm to consumers.
(e) The policies, procedures, guidance, training materials, and other information relating to the adoption, implementation, maintenance, monitoring, and oversight of the insurer's AIS program, including:
i. Processes and procedures for the development, adoption, or acquisition of AI systems, such as:
(1) Identification of constraints and controls on automation and design.
(2) Data governance and controls, any practices related to data lineage, quality, integrity, bias analysis and minimization, suitability, and data currency.
ii. Processes and procedures related to the management and oversight of predictive models, including measurements, standards, or thresholds adopted or used by the insurer in the development, validation, and oversight of models and AI systems.
iii. Protection of nonpublic information, particularly consumer information, including unauthorized access to predictive models themselves.
1.2. Information and documentation relating to the insurer's preacquisition/preuse diligence, monitoring, oversight, and auditing of data or AI systems developed by a third party.
1.3. Information and documentation relating to or evidencing the insurer's implementation and compliance with its AIS program, including documents relating to the insurer's monitoring and audit activities respecting compliance, such as:
(a) Documentation relating to or evidencing the formation and ongoing operation of the insurer's coordinating bodies for the development, use, and oversight of AI systems.
(b) Documentation related to data practices and accountability procedures, including data lineage, quality, integrity, bias analysis and minimization, suitability, and data currency.
(c) Management and oversight of predictive models and AI systems, including:
i. The insurer's inventories and descriptions of predictive models and AI systems used by the insurer to make or support decisions that can result in adverse consumer outcomes.
ii. As to any specific predictive model or AI system that is the subject of investigation or examination:
(1) Documentation of compliance with all applicable AI program policies, protocols, and procedures in the development, use, and oversight of predictive models and AI systems deployed by the insurer.
(2) Information about data used in the development and oversight of the specific model or AI system, including the data source, provenance, data lineage, quality, integrity, bias analysis and minimization, suitability, and data currency.
(3) Information related to the techniques, measurements, thresholds, and similar controls used by the insurer.
(d) Documentation related to validation, testing, and auditing, including evaluation of model drift to assess the reliability of outputs that influence the decisions made based on predictive models. Note that the nature of validation, testing, and auditing should be reflective of the underlying components of the AI system, whether based on predictive models or generative AI.
2. Third-Party AI Systems and Data: In addition, if the investigation or examination concerns data, predictive models, or AI systems collected or developed in whole or in part by third parties, the insurer should also expect OIC to request the following additional types of information and documentation:
2.1 Due diligence conducted on third parties and their data, models, or AI systems.
2.2 Contracts with third-party AI system, model, or data vendors, including terms relating to representations, warranties, data security and privacy, data sourcing, intellectual property rights, confidentiality and disclosures, and/or cooperation with regulators.
2.3 Audits and/or confirmation processes performed regarding third-party compliance with contractual and, where applicable, regulatory obligations.
2.4 Documentation pertaining to validation, testing, and auditing, including evaluation of model drift.
OIC recognizes that insurers may demonstrate their compliance with the laws that regulate their conduct in the state in their use of AI systems through alternative means, including through practices that differ from those described in this bulletin. The goal of this bulletin is not to prescribe specific practices or specific documentation requirements. Rather, the goal is to ensure that insurers in the state are aware of OIC's advice as to how AI systems should be governed and managed and of the kinds of information and documents about an insurer's AI systems that OIC expects an insurer to produce when requested.
As in all cases, investigations and market conduct actions may be performed using procedures that vary in nature, extent, and timing in accordance with regulatory judgment. Work performed may include inquiry, examination of company documentation, or any of the continuum of market actions described in the NAIC's Market Regulation Handbook. These activities may involve the use of contracted specialists with relevant subject matter expertise. Nothing in this bulletin limits the authority of OIC to conduct any regulatory investigation, examination, or enforcement action relative to any act or omission of any insurer that OIC is authorized to perform.
Questions about this bulletin may be directed to Bryon Welch, policy@oic.wa.gov or 360-725-7171.