August 2025
Artificial intelligence (AI) tools are changing how PEOs work. One growing trend is the use of AI notetakers—tools that can record, transcribe, and summarize meetings. They’re helpful because they let people stay focused on the conversation instead of scrambling to take notes. And if someone misses a meeting, they can still catch up on what was said.
But before using AI to capture meetings, it’s critical to think through the risks. Privacy laws and data protection must be top of mind. Because PEOs support both businesses and employees, there’s added responsibility around how data is handled.
In this article, we’ll walk through the biggest issues to consider.
Meeting recording laws vary from state to state. In some states, only one party on the call needs to consent to a recording. In others, everyone must give permission. If a meeting is recorded without the legally required consent, the organization could face legal action, and so could its clients.
Here’s what you should do: confirm the consent rules in every state where participants are located, apply the strictest standard on the call, and notify attendees before recording begins. It’s also wise to communicate with clients about whether and how recordings are made, especially in joint meetings or collaborative spaces.
AI notetakers can transcribe everything—names, payroll figures, strategies, performance data, and more. If any of that information counts as personally identifiable information (PII) or confidential business data, you must treat it carefully.
There are both federal and state laws that govern how employee and consumer information can be used, stored, and disposed of. These may include: consent and notification requirements, limits on how long data can be retained, and laws specifically protecting employee data (such as Social Security numbers or disciplinary records).
Some state laws are strict about handling employment data, even if they’re framed as “consumer data privacy” laws.
The regulatory environment is also expanding quickly. In 2024 alone, U.S. federal agencies introduced 59 AI-related regulations—more than double the number from 2023—and those regulations came from twice as many agencies.
This rapid pace of regulation makes it even more important for PEOs and their clients to stay on top of AI-related compliance—and to consult legal counsel to understand which laws apply based on their footprint and the types of data they manage.
It’s not enough to think about AI tools used by the company. Employees may also use AI to help with their work. That could include feeding information into public AI systems without realizing the risks.
Employers should decide their stance on employee use of AI. It could be an outright prohibition, a short list of vetted and approved tools, or open use within clear guidelines.
Whatever the stance, be sure to: train employees on what’s considered confidential or proprietary, prohibit entering sensitive data into public tools, monitor use for compliance, and provide clear examples of what is acceptable and what is not.
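To make that guidance concrete, here’s a minimal sketch of an automated screen that could run before meeting text is sent to a public AI tool. Everything in it is an illustrative assumption: the two patterns catch only the most obvious cases, and a real screen would need a much broader PII detector.

```python
import re

# Illustrative patterns only; a real screen would cover many more PII types.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # e.g. 123-45-6789
    "dollar_amount": re.compile(r"\$\d{1,3}(?:,\d{3})*"),  # e.g. $85,000
}

def screen_for_public_ai(text: str) -> list[str]:
    """Return the names of any sensitive patterns found in the text.

    An empty list means the text passed this rough screen; any hits
    mean it should not be pasted into a public AI tool.
    """
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

hits = screen_for_public_ai("SSN 123-45-6789 came up next to an $85,000 offer.")
if hits:
    print("Blocked:", ", ".join(hits))  # -> Blocked: ssn, dollar_amount
```

Even a crude screen like this gives the “prohibit entering sensitive data” rule some teeth, because it turns the policy into something that can run automatically rather than relying on memory alone.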
AI transcription tools aren’t perfect. They may misinterpret accents, jargon, or industry-specific terms. They may generate summaries that are incomplete or misleading. In some cases, bias can sneak into outputs, even when demographic information isn’t included in the input.
PEOs should review AI-generated transcripts and summaries for accuracy before circulating them, keep a human in the loop for any output that feeds into a decision, and spot-check how the tool handles names, jargon, and industry-specific terms.
Errors in meeting notes or summaries can lead to serious misunderstandings—especially if decisions are being made based on inaccurate AI-generated content.
Once a meeting is recorded, the data doesn’t just disappear. You need to decide where recordings and transcripts are stored, who has access to them, and how long they’re kept.
Depending on the content, there may be legal requirements around retention, destruction, or even access to logs. Loop in IT and legal teams to make sure data is being handled correctly.
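As a starting point for those conversations, here’s a rough sketch of how retention rules might be encoded so expired recordings can be flagged for deletion automatically. The categories and periods below are hypothetical placeholders; real retention periods have to come from counsel and the laws that apply to you.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical retention periods; actual values must come from legal counsel.
RETENTION = {
    "routine_meeting": timedelta(days=90),
    "disciplinary_discussion": timedelta(days=3 * 365),
}

@dataclass
class Recording:
    meeting_id: str
    category: str        # must be a key in RETENTION
    recorded_at: datetime

def is_expired(rec: Recording) -> bool:
    """True once a recording has outlived its category's retention period."""
    age = datetime.now(timezone.utc) - rec.recorded_at
    return age > RETENTION[rec.category]

rec = Recording("2025-08-01-payroll", "routine_meeting",
                datetime(2025, 3, 1, tzinfo=timezone.utc))
print(is_expired(rec))  # True once the recording is more than 90 days old
```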
Also, pay attention to who owns AI-generated content. Some AI vendors claim ownership rights, or at least shared rights. This could create legal problems down the line—especially if the output includes client or employee information.
If your organization or your clients are thinking about—or already using—AI notetakers, consider these steps.
Get clear on recording laws by state. Some states require all parties to consent to being recorded; others require only one. Make sure your team knows the difference, and help your clients understand it too. It’s the kind of thing that seems minor until a complaint is filed or a lawsuit lands.
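One practical way to operationalize this, sketched below under the assumption that you track each participant’s state, is a lookup that always defaults to the strictest standard on the call. The state list is a partial set of commonly cited all-party-consent states and must be verified with counsel before anyone relies on it.

```python
# Partial, illustrative list of all-party-consent states; verify with counsel,
# since these rules change. States not listed are treated as one-party here.
ALL_PARTY_STATES = {"CA", "FL", "IL", "PA", "WA"}

def consent_standard(participant_states: set[str]) -> str:
    """Return the strictest recording-consent standard across participants.

    If anyone on the call is in an all-party-consent state, treat the
    whole meeting as all-party: every participant must agree on record.
    """
    return "all-party" if participant_states & ALL_PARTY_STATES else "one-party"

# One attendee in California makes the entire call all-party.
print(consent_standard({"NY", "TX", "CA"}))  # -> all-party
```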
Educate clients on the risks of “open” AI tools. Platforms like Claude or ChatGPT might seem safe, but if someone enters private employee or client information, that data may be retained by the vendor and, depending on its terms, used to train future models. That opens the door to serious privacy issues. If their team is using AI, they need clear guidance on what should be shared and what should not.
Help them build a policy. Most small businesses don’t have the time or know-how to write an AI use policy—but you can help with that. Even something basic is better than nothing: what tools they can use, what data they’re allowed to share, and who has access to AI-generated content.
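As one illustration, and purely as an assumption about how a client might structure it, even a basic policy can be written down as data rather than prose, so it’s easy to version, share, and check tools against.

```python
# A hypothetical starter policy expressed as data, so it can be versioned
# and checked by tooling as well as read by people. All values are examples.
AI_USE_POLICY = {
    "approved_tools": ["<vetted notetaker>", "<vetted chat assistant>"],
    "prohibited_inputs": ["SSNs", "payroll figures", "disciplinary records",
                          "business plans"],
    "summary_access": ["meeting participants", "HR"],
    "review_cadence": "quarterly",
}

def tool_is_approved(tool_name: str) -> bool:
    """Check a tool against the approved list before anyone adopts it."""
    return tool_name in AI_USE_POLICY["approved_tools"]
```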
Push for training. Ensure that both your employees and your clients’ teams know what “confidential” really looks like in day-to-day work. That includes not putting performance issues, payroll info, or business plans into AI tools that might store or share that data.
Talk about data storage and access. If meetings are being recorded, you need to know exactly where those recordings are stored, who has access to them, and how long they’re kept. This becomes especially crucial when the conversation includes sensitive or legal topics—because how and where that data is stored carries legal risk.
Loop in IT and legal support. Involve IT and legal early in the process. Many clients don’t have those resources in-house, so they may not know what to look out for.
AI tools can absolutely make work easier, but they need boundaries. As your clients’ PEO, you’re in the perfect spot to guide them: set expectations, share best practices, and even help them draft the policies they don’t know they need yet.
It’s one more way you show up as a partner, not just a provider. That kind of support builds trust and keeps clients with you for the long haul.