REGULATING THE FUTURE: CHANNELING AI FOR GOOD

BY STEPHEN CALVERT, ESQ.

Chief Legal Officer, General Counsel

G&A Partners

BY IRIS GONZALEZ

Deputy General Counsel

G&A Partners

May 2026

 

In 2022, Americans saw the promises and risks of artificial intelligence (AI) expand rapidly. Schools began piloting AI-powered tutoring, employers used algorithms in hiring and performance decisions, and families worried about increasingly convincing deepfakes. Policymakers faced a central question: How can society capture AI’s benefits without undermining privacy, fairness, and economic stability?

That challenge has only grown. AI capabilities are advancing faster than legal and regulatory frameworks can adapt, producing a fragmented governance landscape. States are moving aggressively to fill gaps, while federal action remains incremental and sector-specific. A workable path forward must balance competing priorities: protecting civil rights, workers, and consumers without stifling innovation or competitiveness.

KEY CONCERNS

Privacy and Data Protection. AI systems rely on vast quantities of data, sometimes personal or sensitive. Regulators are therefore focusing on informed consent, data minimization, cybersecurity, and limits on secondary data use. High-risk categories such as health, education, and financial data receive particular scrutiny because misuse can create lasting harm.

Employment Law. AI is now embedded in hiring, promotion, and performance management. While these tools can improve efficiency, they may also amplify bias, especially when trained on historical data. Regulators are testing how current employment and civil rights laws apply to automated decision-making, including whether transparency and human-oversight requirements are sufficient.

Job Displacement and Economic Inequality. AI-driven automation will displace workers across many industries. While new jobs emerge, the transition may be disruptive. Public policy has yet to address the need for large-scale reskilling, job-transition support, and modernization of safety nets, which raises the risk of widening economic inequality.

Environmental Impact. Training and deploying advanced AI models require substantial computing power, driving significant energy demand. Data centers, cloud infrastructure, and specialized hardware are raising concerns about emissions and resource consumption. Policymakers are exploring energy-transparency requirements and incentives for more sustainable AI development.

THE STATE AND FEDERAL PATCHWORK

State Action

Absent comprehensive federal legislation, states have taken the lead. The National Conference of State Legislatures reports that in 2025, more than 100 AI-related bills addressing transparency, privacy, discrimination, and workforce impacts were adopted or enacted across 38 states.

California has taken the approach of combining frontier-model safety measures with transparency rules: safety testing and critical-incident reporting for advanced models, along with training-data disclosure obligations under California Assembly Bill 2013. California has also issued Automated Decision-Making Technology regulations, effective January 1, 2027, that require notice, opt-out rights, and risk assessments for significant automated decisions made without meaningful human review.

Other examples include Colorado and New York. The Colorado Artificial Intelligence Act regulates “high-risk” AI systems and requires transparency and risk mitigation in sectors including employment, housing, healthcare, and financial services. New York has proposed the Responsible AI Safety and Education Act, which would impose safety, transparency, and incident-reporting requirements on developers of advanced AI systems.

Federal Action

The United States still lacks both a comprehensive AI statute and a national privacy law that would address AI privacy concerns. Instead, agencies and organizations must rely on and adapt existing frameworks, such as the Health Insurance Portability and Accountability Act (HIPAA) and the Gramm-Leach-Bliley Act (GLBA), to AI use cases. Targeted sector-specific efforts include the TAKE IT DOWN Act, which criminalizes non-consensual AI-generated intimate imagery and requires platforms to remove it. In 2025, President Trump issued an executive order aimed at creating a more unified national approach, including the potential preemption of certain state laws and the formation of a federal AI task force. Congress has also considered bipartisan proposals such as the Algorithmic Accountability Act of 2025, which would require impact assessments and disclosures for high-stakes automated decision-making systems.

AI IN PROFESSIONAL EMPLOYER ORGANIZATIONS (PEOS)

Professional Employer Organizations (PEOs) operate at the intersection of human resources, payroll, benefits, and compliance, making them uniquely exposed to both the opportunities and risks of AI adoption.

AI-Driven Efficiency. AI enables PEOs to automate high-volume administrative processes such as payroll, onboarding, and benefits administration. Intelligent systems can validate data, detect anomalies, and reduce manual errors, improving both efficiency and service quality. However, these gains must be balanced with transparency and privacy requirements, especially when automated systems affect employee outcomes.
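To make the anomaly detection mentioned above concrete, the sketch below flags payroll entries that deviate sharply from an employee's pay history using a simple standard-deviation test. This is an illustrative example only; the field names, data shapes, and 3-sigma threshold are hypothetical assumptions, not drawn from any actual PEO system, which would typically layer far more sophisticated checks on top of a baseline like this.

```python
# Illustrative sketch: flag payroll entries that deviate sharply from an
# employee's historical pay. Field names and the 3-sigma threshold are
# hypothetical examples, not taken from any real PEO system.
from statistics import mean, stdev

def flag_payroll_anomalies(history, current, threshold=3.0):
    """Return employee IDs whose current pay is more than `threshold`
    standard deviations away from their historical mean.

    history: dict mapping employee_id -> list of past pay amounts
    current: dict mapping employee_id -> this period's pay amount
    """
    flagged = []
    for emp_id, amount in current.items():
        past = history.get(emp_id, [])
        if len(past) < 2:  # not enough history to judge deviation
            continue
        mu, sigma = mean(past), stdev(past)
        if sigma == 0:
            if amount != mu:  # pay was perfectly constant, now changed
                flagged.append(emp_id)
        elif abs(amount - mu) / sigma > threshold:
            flagged.append(emp_id)
    return flagged

# Example: employee "E2" is suddenly paid roughly 10x their usual amount
history = {"E1": [3000, 3050, 2990, 3010], "E2": [2000, 2010, 1990, 2005]}
current = {"E1": 3020, "E2": 20000}
print(flag_payroll_anomalies(history, current))  # ["E2"]
```

A statistical flag like this would only route an entry for human review, which keeps a person in the loop for decisions that affect employee pay, consistent with the transparency and oversight requirements discussed throughout this article.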

Compliance and Risk Management. Because PEOs serve multiple clients across jurisdictions, they face heightened compliance complexity. AI can assist by monitoring regulatory changes, flagging potential violations, and standardizing compliance workflows. At the same time, the use of AI itself introduces new regulatory exposure, requiring PEOs to carefully vet vendors, document decision processes, conduct impact assessments, and stay informed on the evolving legal and regulatory framework.

Data Governance and Privacy. PEOs often handle sensitive employee data, including health, financial, and personal information. AI-driven insights must be managed within privacy and security frameworks. Compliance with laws such as HIPAA and GLBA is essential, as is alignment with emerging state-level AI and privacy requirements.

RESPONSIBLY MAXIMIZING AI’S POTENTIAL

If implemented responsibly, AI positions PEOs to evolve from administrative service providers into strategic workforce advisors. By leveraging predictive analytics and real-time insights, PEOs can help clients make more informed decisions about hiring, retention, compensation, and organizational design, creating a competitive advantage in a rapidly changing labor market. Within appropriate boundaries, AI should be leveraged as a force multiplier.

Responsibly maximizing the strategic opportunities AI brings will require government and other stakeholders to:

  • Develop cross-sector standards for transparency, safety testing, and accountability.
  • Create a national privacy baseline to reduce fragmentation and build public trust.
  • Invest in reskilling, upskilling, and economic transition programs.
  • Require monitoring of energy use associated with large-scale AI systems.
  • Establish an independent body to investigate incidents and oversee high-risk AI.
  • Include industry, academia, and communities in shaping AI governance.

AI is advancing faster than the law, leaving behind a patchwork of state initiatives and federal stopgaps. A durable regulatory approach must combine clear national standards with the flexibility to learn from state-level experimentation. For industries like PEOs – where AI intersects directly with employment, privacy, and compliance – the stakes are high. Thoughtful governance can ensure that innovation continues with appropriate safeguards.
