In 2022, Americans saw the promises and risks of artificial intelligence (AI) expand rapidly. Schools began piloting AI-powered tutoring, employers used algorithms in hiring and performance decisions, and families worried about increasingly convincing deepfakes. Policymakers faced a central question: How can society capture AI’s benefits without undermining privacy, fairness, and economic stability?
That challenge has only grown. AI capabilities are advancing faster than legal and regulatory frameworks can adapt, producing a fragmented governance landscape. States are moving aggressively to fill gaps, while federal action remains incremental and sector-specific. A workable path forward must balance competing priorities: protecting civil rights, workers, and consumers without stifling innovation or competitiveness.
Privacy and Data Protection. AI systems rely on vast quantities of data, sometimes personal or sensitive. Regulators are therefore focusing on informed consent, data minimization, cybersecurity, and limits on secondary data use. High-risk categories such as health, education, and financial data receive particular scrutiny because misuse can create lasting harm.
Employment Law. AI is now embedded in hiring, promotion, and performance management. While these tools can improve efficiency, they may also amplify bias, especially when trained on historical data. Regulators are testing how current employment and civil rights laws apply to automated decision-making, including whether transparency and human-oversight requirements are sufficient.
Job Displacement and Economic Inequality. AI-driven automation will displace workers across many industries. While new jobs emerge, the transition may be disruptive. Public policy has yet to address the need for large-scale reskilling, job-transition support, and modernization of safety nets, which raises the risk of widening economic inequality.
Environmental Impact. Training and deploying advanced AI models requires substantial computing power, driving significant energy demand. The growth of data centers, cloud infrastructure, and specialized hardware is raising concerns about emissions and resource consumption. Policymakers are exploring energy-transparency requirements and incentives for more sustainable AI development.
State Action
Absent comprehensive federal legislation, states have taken the lead. The National Conference of State Legislatures reports that in 2025, more than 100 AI-related bills addressing transparency, privacy, discrimination, and workforce impacts were adopted or enacted across 38 states.
California combines frontier-model safety measures with transparency rules. Requirements include safety testing, critical-incident reporting, and disclosure obligations under California Assembly Bill 2013. California has also issued Automated Decision-Making Technology regulations, effective January 1, 2027, that require notice, opt-out rights, and risk assessments for significant automated decisions that lack meaningful human review.
Other examples include Colorado and New York. The Colorado AI Consumer Protection Act regulates “high-risk” AI systems and requires transparency and risk mitigation in sectors including employment, housing, healthcare, and financial services. New York has proposed the Responsible AI Safety and Education Act, which would impose safety, transparency, and incident-reporting requirements on developers of advanced AI systems.
Federal Action
The United States still lacks a comprehensive AI statute and a national privacy law that would address AI privacy concerns. Instead, agencies and organizations must rely on and adapt existing frameworks, such as the Health Insurance Portability and Accountability Act (HIPAA) and the Gramm-Leach-Bliley Act (GLBA), to AI use cases. Targeted sector-specific efforts include the TAKE IT DOWN Act, which criminalizes non-consensual AI-generated intimate imagery and requires its removal from platforms. In 2025, President Trump issued an executive order aimed at creating a more unified national approach, including the potential preemption of certain state laws and the formation of a federal AI task force. Congress has also considered bipartisan proposals such as the Algorithmic Accountability Act of 2025, which would require impact assessments and disclosures for high-stakes automated decision-making systems.
Implications for PEOs

Professional Employer Organizations (PEOs) operate at the intersection of human resources, payroll, benefits, and compliance, making them uniquely exposed to both the opportunities and risks of AI adoption.
AI-Driven Efficiency. AI enables PEOs to automate high-volume administrative processes such as payroll, onboarding, and benefits administration. Intelligent systems can validate data, detect anomalies, and reduce manual errors, improving both efficiency and service quality. However, these gains must be balanced with transparency and privacy requirements, especially when automated systems affect employee outcomes.
Compliance and Risk Management. Because PEOs serve multiple clients across jurisdictions, they face heightened compliance complexity. AI can assist by monitoring regulatory changes, flagging potential violations, and standardizing compliance workflows. At the same time, the use of AI itself introduces new regulatory exposure, requiring PEOs to carefully vet vendors, document decision processes, conduct impact assessments, and stay informed on the evolving legal and regulatory landscape.
Data Governance and Privacy. PEOs often handle sensitive employee data, including health, financial, and personal information. AI-driven insights must be managed within privacy and security frameworks. Compliance with laws such as HIPAA and GLBA is essential, as is alignment with emerging state-level AI and privacy requirements.
Implemented responsibly, AI can position PEOs to evolve from administrative service providers into strategic workforce advisors. By leveraging predictive analytics and real-time insights, PEOs can help clients make more informed decisions about hiring, retention, compensation, and organizational design, creating a competitive advantage in a rapidly changing labor market. Within appropriate boundaries, AI should be leveraged as a force multiplier.
Responsibly maximizing the strategic opportunities AI brings will require coordinated effort from government, industry, and other stakeholders.
AI is advancing faster than the law, leaving behind a patchwork of state initiatives and federal stopgaps. A durable regulatory approach must combine clear national standards with the flexibility to learn from state-level experimentation. For industries like PEOs – where AI intersects directly with employment, privacy, and compliance – the stakes are high. Thoughtful governance can ensure that innovation continues with appropriate safeguards.