In recent years, artificial intelligence (AI) has become an integral tool across various professional sectors, including law, medicine, accounting, and insurance. Its ability to process vast amounts of data and generate human-like text has streamlined numerous tasks. However, a significant concern has emerged: AI’s tendency to produce “hallucinations,” or fabricated information presented as fact. It can also pull information from sources that are outdated, not credible, and sometimes false. This phenomenon underscores a critical thesis: AI alone is not credible for professionals without affirmation that the source data it draws from is credible. I’d like to thank my friend Sara Merken over at Insurance Journal for her wonderful recent article that brought this issue to the forefront of our minds.
AI hallucinations occur when generative AI models produce information that appears plausible but is entirely fabricated. These models, including advanced chatbots and content generators, rely on patterns in the data they were trained on to generate responses. While they can produce coherent and contextually relevant content, they do not possess true understanding. Consequently, when prompted, they may inadvertently generate content that includes fictitious details, non-existent case law, or inaccurate data.
In the insurance industry, particularly within workers’ compensation, the accuracy of information is paramount. AI tools are increasingly being suggested as ways to assess claims, evaluate premiums, and even predict risks. However, reliance on AI-generated data without proper verification can lead to significant errors. For instance, if an AI system “hallucinates” data about workplace injury statistics or misinterprets policy details, it could result in incorrect premium calculations or unjust claim denials. Such errors not only affect the financial health of insurance companies but also jeopardize the trust and well-being of policyholders.
The crux of the issue lies in the credibility of the data sources that AI systems utilize. AI models are only as reliable as the data they are trained on. Anyone who has Googled something lately has seen “AI responses” that are sometimes very accurate, but not always. Without access to credible, accurate, up-to-date, and comprehensive data, these systems are prone to generating misleading or false information. Therefore, professionals must ensure that any AI tool they employ is backed by credible data sources and human review for accuracy. This involves rigorous vetting of AI systems, continuous monitoring of their outputs, and cross-referencing with trusted information repositories. This must be an ongoing effort: the underlying data changes constantly, and which sources keep pace with it is just as fluid.
To mitigate the risks associated with AI hallucinations, professionals should adopt robust fact-checking protocols. According to some great insights from Dave Andre at All About AI, the following steps are essential.
Define Clear Requirements: Before utilizing AI-generated content, clearly outline the specific information needed. This helps in tailoring the AI’s output to relevant and precise data.
Verify Information from Multiple Sources: Cross-reference AI-generated content with multiple reputable sources to confirm its accuracy. Relying on a single source increases the risk of perpetuating errors.
Consult Subject Matter Experts: Engage with experts in the relevant field to review and validate the AI’s output. Their expertise can identify subtle inaccuracies that automated systems might overlook.
Utilize AI Tools for Cross-Verification: Employ AI tools designed to detect inconsistencies or contradictions within the content. For example, using AI models to cross-verify facts can help in identifying potential errors (see the sketch after this list).
Regularly Update AI Systems: Ensure that AI tools are updated with the latest information and data sets. Outdated data can lead to incorrect conclusions and recommendations.
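To make the cross-verification step concrete, here is a minimal sketch in Python of how a team might hold back AI-generated statements that fewer than two vetted sources confirm. Everything in it is illustrative and assumed, not a real tool: fetch_from_source is a hypothetical placeholder for however your organization queries its vetted repositories, and the source names are invented for the example.

```python
# Minimal cross-verification sketch, assuming a hypothetical setup:
# fetch_from_source() stands in for whatever lookup your team actually
# uses (a vetted statistics database, a policy repository, etc.).

from dataclasses import dataclass

@dataclass
class Verification:
    statement: str
    confirmations: int
    total_sources: int

    @property
    def corroborated(self) -> bool:
        # Require agreement from more than one source before trusting
        # an AI-generated figure (step 2 above).
        return self.confirmations >= 2

def fetch_from_source(source: str, statement: str) -> bool:
    """Hypothetical lookup: returns True if `source` supports `statement`.
    Replace with a real query against your vetted repositories."""
    raise NotImplementedError

def cross_verify(statement: str, sources: list[str]) -> Verification:
    confirmations = 0
    for source in sources:
        try:
            if fetch_from_source(source, statement):
                confirmations += 1
        except NotImplementedError:
            # No real backend is wired up in this sketch, so the
            # statement simply remains unconfirmed.
            pass
    return Verification(statement, confirmations, len(sources))

# Example: an AI-generated injury statistic is flagged for expert
# review (step 3) unless at least two independent sources confirm it.
result = cross_verify(
    "Lost-time claims fell 4% year over year in this class code.",
    ["bls.gov", "ncci.com", "state_rating_bureau"],
)
if not result.corroborated:
    print(f"Flag for human review: {result.statement!r} "
          f"({result.confirmations}/{result.total_sources} sources confirm)")
```

The design point is simply that corroboration is counted, never assumed: a statement that cannot gather at least two independent confirmations is routed to a human reviewer rather than passed through.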
Now that you’ve taken steps to raise your AI IQ, you can use this new understanding to navigate insurance market changes, policies, and procedures to boost your PEO’s AI protection.
Am I Insured for Loss Caused by AI?
When cyber risk first emerged, it was usually wrapped into Electronic Data Processing (EDP) policies that were focused more on property and data loss. Over time, as claims data grew and the severity of those claims grew with it, it became clear that cyber required dedicated underwriting models, policy language, and risk mitigation strategies. To address this expanded need for protection against third-party liability suits, AIG created the first cyber policy in 1997. Originally referred to as a “hacker policy,” it became the foundational form from which various carriers created their own policy forms.
A similar inflection point, we feel, is approaching for AI.
At present, AI-related exposures are assumed to fall under existing policies, usually General Liability, Cyber, or Errors & Omissions/Professional Liability coverage. Do they? In the past, usually; now it is by no means a given and needs to be evaluated. Just as cyber coverage began to be limited or outright excluded from EDP policies, we have begun to see policy language like the following from a WR Berkley endorsement, referred to as Berkley’s Absolute AI Exclusion:
The Insurer shall not be liable to make payment under this Coverage Part for Loss on account of any Claim made against any Insured based upon, arising out of, or attributable to:
(1) any actual or alleged use, deployment, or development of Artificial Intelligence by any person or entity, including but not limited to:
(a) the generation, creation, or dissemination of any content or communications using Artificial Intelligence;
(b) any Insured’s actual or alleged failure to identify or detect content or communications created through a third party’s use of Artificial Intelligence;
(c) any Insured’s inadequate or deficient policies, practices, procedures, or training relating to Artificial Intelligence or failure to develop or implement any such policies, practices, procedures, or training;
(d) any Insured’s actual or alleged breach of any duty or legal obligation with respect to the creation, use, development, deployment, detection, identification, or containment of Artificial Intelligence;
(e) any product or service sold, distributed, performed, or utilized by an Insured incorporating Artificial Intelligence; or
(f) any alleged representations, warranties, promises, or agreements actually or allegedly made by a chatbot or virtual customer service agent;
(2) any Insured’s actual or alleged statements, disclosures, or representations concerning or relating to Artificial Intelligence, including but not limited to:
(a) the use, deployment, development, or integration of Artificial Intelligence in the Company’s business or operations;
(b) any assessment or evaluation of threats, risks, or vulnerabilities to the Company’s business or operations arising from Artificial Intelligence, whether from customers, suppliers, competitors, regulators, or any other source; or
(c) the Company’s current or anticipated business plans, capabilities, or opportunities involving Artificial Intelligence;
(3) any actual or alleged violation of any federal, state, provincial, local, foreign, or international law, statute, regulations, or rule regulating the use or development of Artificial Intelligence or disclosures relating to Artificial Intelligence; or
(4) any demand, request, or order by any person or entity or any statutory or regulatory requirement that the Company investigate, study, assess, monitor, address, contain, or respond to the risks, effects, or impacts of Artificial Intelligence.
The potential breadth of this exclusion cannot be overstated. And the exclusion’s title suggests that Berkley intends to apply it to virtually any claim with a connection to AI.
In insurance, we provide general coverages and then take them away by way of policy language or endorsement. Employment practices liability insurance (EPLI), for example, was also a “throw-in” coverage within the general liability form until that line became so complex, with such volatile losses, that it needed specific underwriting models and policy forms to remain insurable to the insurance community. AI will follow suit: AI insurance is growing into its own “new” line, with many insurers and syndicates now offering forms solely to address this unique exposure.
While AI offers transformative potential across various professional sectors, its outputs must be approached with caution. The phenomenon of AI hallucinations serves as a stark reminder that, without credible source data and rigorous verification processes, AI alone cannot be deemed reliable for critical professional applications. By implementing stringent fact-checking measures and ensuring the integrity of data sources, professionals can harness the benefits of AI while safeguarding against its pitfalls.