AI GOVERNANCE: FEDERAL POLICY, STATE ACTION AND EMERGING RISK

BY PAUL HUGHES

Principal
Accretive Global Risk Advisors, LLC dba Libertate Insurance Services

March 2026

Artificial intelligence is here, deployed across a variety of business applications that impact PEOs. No longer a future consideration for regulated industries, it is already influencing hiring models, underwriting/pricing/accrual models, claims profiling for fraud detection, compliance monitoring, and other infrastructure needs in workforce analytics. What remains unsettled is not whether AI will be used, but how far it will go and who will ultimately govern it. Its application and governance are not only of domestic importance; they are issues being debated at both the state and federal levels. Each has been weighing the need to build “guardrails” around a technology with many positive applications yet to be fully understood, but potentially some nefarious ones as well.

FEDERAL ACTIONS POINT TOWARD LEGISLATIVE RESTRAINT AND AI INNOVATION

Under President Trump in both his first and second terms, the federal government has taken an “innovation first” strategy whereby regulatory obstacles in the path of AI growth are limited or neutralized altogether. Most recently, President Trump signed the “Removing Barriers to American Leadership in Artificial Intelligence” executive order in January of 2025 and then another order, “Ensuring a National Policy Framework for Artificial Intelligence,” in December of 2025. The first order in essence revoked President Biden’s October 2023 executive order on “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” which was viewed as onerous and bureaucratically driven. The first line of the Biden order reads: “Artificial Intelligence holds extraordinary potential for both promise and peril.”

Federal messaging has focused on concerns that a patchwork of rules, especially one spanning fifty different states, could undermine the United States’ goal of staying at the forefront of AI innovation and deployment, free of ideological bias or engineered social agendas. Any government-created disruption could stall investment and create uncertainty for companies operating across multiple jurisdictions.

A NATIONAL POLICY FRAMEWORK FOR ARTIFICIAL INTELLIGENCE

The most recent executive order is particularly focused on building one set of rules on a federal basis and eliminating each state’s ability to supersede them with its own. The order contains several sections with provisions designed to ensure compliance by the states. I encourage you to read the full executive order.

STATE AI LEGISLATION

While no comprehensive federal AI framework has yet been finalized, one is in process, and the message from Washington has been consistent: centralized or minimal governance at the federal level is favored over independent state action. That position, however, has not slowed activity at the state level to create laws deemed important by their various constituents. According to the National Conference of State Legislatures, all 50 states, Puerto Rico, the Virgin Islands, and Washington, D.C., introduced legislation on the topic in 2025. Thirty-eight states adopted or enacted around 100 measures. Here are a few examples of those actions.

Arkansas

Arkansas enacted legislation clarifying who owns AI-generated content: the person who provides data or input to train a generative AI model, or an employer, if the content is generated as part of employment duties. The new law specifies that the generated content must not infringe on existing copyright or intellectual property rights.

Montana

Montana’s new “Right to Compute” law sets requirements for critical infrastructure controlled by an AI system, such as instructing the deployer to develop a risk management policy that considers guidance from a list of specified standards, including the latest version of the AI Risk Management Framework from the National Institute of Standards and Technology. The new law also specifies that the government cannot take actions that restrict the ability to privately own or make use of computational resources for lawful purposes, unless deemed necessary to fulfill a compelling government interest.

New Jersey

New Jersey adopted a resolution urging generative AI companies to make voluntary commitments regarding employee whistleblower protections.

New York

New York enacted a new law that requires state agencies to publish detailed information about their automated decision-making tools on their public websites through an inventory created and maintained by the Office of Information Technology. The law also amends the civil service law to strengthen worker protections: when an AI system is used by the state government, it cannot affect employees’ existing rights under a collective bargaining agreement, and it may not result in the displacement or loss of a position.

North Dakota

North Dakota’s new law prohibits individuals from using an AI-powered robot to stalk or harass other individuals, expanding current harassment and stalking laws.

Oregon

Oregon enacted a new law specifying that a non-human entity, including an agent powered by AI, cannot use the titles of specific licensed and certified medical professionals, such as registered nurse or certified medication aide.

AI REGULATION

Additionally, groups such as the National Association of Insurance Commissioners (NAIC) have weighed in heavily on the impact, positive and negative, of AI. Shortly after President Trump’s order in December of 2025, the NAIC issued its own statement, which read in part: “While AI offers transformative opportunities for insurers and policyholders, such as improving efficiency and enhancing customer experiences, the sweeping Executive Order creates significant unintended consequences. This could implicate routine analytical tools insurers use every day and prevent regulators from addressing risks in areas like rate setting, underwriting, and claims processing, even when no true AI is involved.”

WHERE AI INTERSECTS WITH PEOs

Most PEOs are not building AI platforms themselves. But AI is already embedded in the ecosystem PEOs operate within. It influences carrier underwriting, claims evaluation, fraud detection, hiring decisions, payroll analytics, and compliance monitoring.

As these tools become more integrated into decision-making, the focus shifts from ownership of technology to governance of outcomes. Questions emerge around how AI-influenced insights are reviewed, validated and explained, particularly when they affect employment decisions or insurance-related results.

In a fragmented regulatory environment, those questions are likely to surface earlier and with greater scrutiny. Additionally, how does a PEO protect itself from claims that arise because of AI, who is exposed to those claims, and what insurance, if any, is available to cover them?

THE QUIET RISK OF REGULATORY DIVERGENCE

When standards differ across jurisdictions, documentation and oversight become more important. Decisions influenced by AI must be defensible, repeatable, and transparent, regardless of where a client or workforce is located. This is where regulatory divergence creates quiet risk: not because rules are unclear, but because expectations are still forming. Whether states will be allowed to create their own rules and laws to support what their legislative bodies deem important, versus a broad federal set of rules and laws that apply to all, will be a significant battle. Unsurprisingly, California has dug in, with Governor Gavin Newsom signing SB 53 in September of 2025, its stated purpose to establish “California as a world leader in safe, secure, and trustworthy artificial intelligence, creating a new law that helps the state both boost innovation and protect public safety.” It should be noted that 32 of the top 50 AI companies in the world are currently based in California.

PREPARING WITHOUT WAITING FOR CERTAINTY

AI governance is still taking shape at both the federal and state levels, and there are sure to be many legal challenges between federal and state governments over how it is applied and regulated. For PEOs, the issue is less about which framework ultimately prevails and more about operating in an environment where expectations may continue to shift, especially state by state. What is clear is the transformative nature of AI, and that it will be a significant part of our lives going forward, in business and in life in general.
