Artificial Intelligence (AI) has permeated our personal and professional lives, with applications that are broad and growing. Some of AI's functionality is easy to understand, entirely uncontroversial, and helpful in our daily lives. For example, if you bought this on Amazon, you probably want to buy that as well. If your team is up by a certain score at a certain time, its chances of winning can be expressed as a percentage based on the outcomes of past games in that sport in similar situations. These useful and innocuous applications of AI come with a trade-off: for them to work, you may have to give up some of your own behavioral data. Algorithms need to understand ‘you’ against historical patterns in order to predict from the past, and some find that intrusive.
A tort (from the French word for wrong) is an act or omission that breaches a civil duty owed to someone else, other than a duty arising from a contract. Torts include all negligence cases as well as intentional wrongs that result in some form of harm. Tort law defines what constitutes a legal injury and, therefore, whether a person, an entity, or its agents may be held legally liable for an injury they have caused, whether accidental or deliberate.
Some applications, such as driving a car, are scrutinized more intensely because of the potential for physical harm caused by AI and the legal liability that follows. The National Transportation Safety Board (NTSB) recently received a letter from six senators urging deeper investigation into the carmakers that have adopted AI technology. In rapid succession, the National Highway Traffic Safety Administration (NHTSA) has opened investigations into nearly all the major companies testing autonomous vehicles, as well as those offering advanced driver-assist systems in their production cars. Tesla, Ford, Waymo, Cruise, and Zoox are all being probed for alleged safety lapses, with the agency examining hundreds of crashes, some of them fatal.
In general, AI comes with inherent risks from an insurance perspective. According to the National Association of Insurance Commissioners (NAIC) in its model bulletin on the use of AI in the insurance industry: “AI may facilitate the development of innovative products, improve consumer interface and service, simplify and automate processes, and promote efficiency and accuracy. However, AI, including AI Systems, can present unique risks to consumers, including the potential for inaccuracy, unfair discrimination, data vulnerability, and lack of transparency and explainability. Insurers should take actions to minimize these risks.”
As of this writing, seventeen state insurance departments have adopted the NAIC model bulletin and four states have passed laws restricting the use of AI. New York City, for example, has restricted the use of any Automated Employment Decision Tool (AEDT) unless it has been audited for bias in the hiring process.
What other risks could AI create for your PEO? Here are a few areas you should pay attention to:
Are you covered for all of this? Probably for now, but I would check with your broker and review your policies if you are deploying AI in a manner that could create liability for the business. Most PEOs carry non-admitted liability policies that are manuscript in nature and not standardized. In layman’s terms, each policy is written differently and can include the exposure as a “named peril,” exclude it, or remain silent on it. We see most insurers taking the silent approach at present, but it is my opinion that this will change quickly as AI cases are brought and covered losses follow. Growing use of AI products will only increase the legal liability they can create, which leads to another opportunity.
As noted in a recent article in Deloitte’s FSI Predictions 2024, there is a growing need for insurance products that cover the risks of AI itself. At present, capacity in the AI insurance market is concentrated in areas such as product liability for autonomous cars, but that is expected to expand rapidly. Insurers who move quickly to develop such products could establish themselves as leaders in this emerging area, but they must balance innovation with careful risk management.
At present, some insurers have developed specialty programs or coverage extensions to existing policies that provide specified coverage for AI tools. Most policies are still silent on this exposure, but that is expected to change as some insurers introduce exclusionary language, creating the need for a coverage form specific to AI risk. As we continue to immerse ourselves in ever-evolving AI applications, it is important to recognize that using these models can create an array of legal liability implications for your business. I see the most immediate impacts on the PEO industry in AI deployment for hiring, claimant screening, and professional advice (legal, HR, and insurance).