AI AND TORT LIABILITY

BY Paul Hughes

Principal
Libertate Insurance, LLC

November 2024

 

Artificial Intelligence (AI) has rapidly permeated our personal and professional lives, with broad application. Some AI functionality is easy to understand, in no way controversial, and helpful in our daily lives. For example, if you bought this item on Amazon, you will probably want to buy that one as well. If your team is up by a certain score at a certain time, its chances of winning can be expressed as a percentage based on the outcomes of similar situations in that sport's history. Both useful and innocuous applications of AI have advanced, provided you understand that to use some applications, you may compromise your own behavioral data. Algorithms need to understand 'you' against historical patterns in order to predict from the past, and some find that intrusive.

A tort (from the French word for "wrong") is an act that breaches a civil duty owed to someone else, other than a duty arising from a contract. Torts include all negligence cases as well as intentional wrongs that result in some form of harm. Tort law defines what constitutes a legal injury and, therefore, whether a person, an entity, or its agents may be held legally liable for an injury they have caused, whether accidental or deliberate.

There are applications, such as driving a car, that are more intensely scrutinized because of the potential physical harm caused by AI and the legal liability that follows. The National Transportation Safety Board (NTSB) recently received a letter from six senators urging deeper investigation into carmakers that have adopted AI technology. In rapid succession, the National Highway Traffic Safety Administration (NHTSA) has opened investigations into nearly all the major companies testing autonomous vehicles, as well as those offering advanced driver-assist systems in their production cars. Tesla, Ford, Waymo, Cruise, and Zoox are all being probed for alleged safety lapses, with the agency examining hundreds of crashes, some of them fatal.

In general, AI comes with inherent risks from an insurance perspective.  According to the National Association of Insurance Commissioners (NAIC) in their model bulletin regarding the use of AI for the insurance industry: “AI may facilitate the development of innovative products, improve consumer interface and service, simplify and automate processes, and promote efficiency and accuracy. However, AI, including AI Systems, can present unique risks to consumers, including the potential for inaccuracy, unfair discrimination, data vulnerability, and lack of transparency and explainability. Insurers should take actions to minimize these risks.”

As of this month, seventeen state insurance departments have adopted the NAIC model bulletin and four have passed laws restricting the use of AI. As an example, New York City has restricted the use of any Automated Employment Decision Tool (AEDT) unless it has been audited for bias in the hiring process.

What other risks could AI create for your PEO? Here are a few areas you should pay attention to:

  1. Operational shutdowns resulting from AI-system malfunctions could trigger business interruption claims (property insurance).
  2. Professionals could face claims for (1) erroneous advice or misinterpretations; and (2) unexplainable decisions (hallucinations) delivered by AI-driven research or other tools that have negatively impacted end users (professional and/or product liability).
  3. Manufacturers of AI-enhanced products could be subject to property damage and/or bodily injury claims resulting from AI failures or malfunction, or where product liability regulations have been violated (professional and/or product liability).
  4. Corporate leaders could face accusations of failing to oversee/mitigate the risks associated with implementation of AI-driven processes that have led to financial losses or reputational damage (D&O).
  5. AI-driven hiring practices that inadvertently introduce bias could trigger discrimination lawsuits and claims against employers for unfair employment practices (EPLI).
  6. Copyright violations and/or patent infringement as a result of leveraged training data or from the AI model itself could result in claims in liability coverages (general liability/patent infringement).
  7. Increased use of AI in healthcare diagnostics could change insurance demand, but also give rise to potential gaps in coverage (medical malpractice).
  8. Insurers could face claims increases due to erroneous advice or misinterpretations delivered by AI-driven underwriting tools. AI-driven underwriting models that inadvertently introduce bias could trigger discrimination lawsuits and claims against insurance companies (EPLI).
  9. AI-created "deepfake" instructions requesting that funds be wired (crime).

Are you covered for all this? Probably for now, but I'd check with your broker and review your policies if you are deploying AI in a manner that could create legal liability for the business. Most PEOs carry non-admitted liability policies that are manuscript in nature and not standardized. In layman's terms, each policy is written differently and can either include the exposure to legal liability as a "named peril," exclude the exposure, or be silent on it. We see most insurers taking the silent approach at present, but it is my opinion that this will change quickly as AI cases are brought and covered losses follow. Increased use of AI products will only increase the legal liability they can create, which leads to another opportunity.

As noted in a recent article as a part of Deloitte’s FSI Predictions 2024, there is a growing need for insurance products that cover the risks associated with AI itself. At present, the capacity for the AI insurance market is mostly in areas such as product liability for autonomous cars, but that is expected to change rapidly and exponentially. Insurers who move quickly to develop such products could establish themselves as leaders in this emerging area, but they must balance innovation with careful risk management.

At present, some insurers have developed specialty programs or coverage extensions to existing policies that provide specified coverage for AI tools. Most policies are still silent on this exposure, but it is expected that this will change and that exclusionary language will be introduced by some insurers, creating the need for a specific coverage form just for AI risk. As we continue to immerse ourselves in ever-evolving AI applications, it is important to recognize that usage of these models can create an array of legal liability implications for your business. I see the most immediate impacts to the PEO industry in the areas of AI deployment for hiring, screening claimants, and professional advice (legal, HR, insurance).
