
AI in the Workplace: A Strategic Guide for California Employers in 2026
A Guide for Employers
Prepared by: Zaller Law Group, PC
Date: December 2025
Table of Contents
Executive Summary
The Cultural Shift: From “Do Not Use” to “Learn How to Use”
Transforming Employee Training and Communication
The Data Security Imperative: California-Specific Considerations
Marketing to Your Most Important Audience: Your Employees
The Recording Revolution: Navigating California’s Two-Party Consent Law
The 80/20 Rule: AI as Assistant, Not Replacement
The Future of Legal Services: What Clients Should Expect
Essential Do’s and Don’ts
Conclusion: The Choice Is Yours
About Zaller Law Group
Executive Summary
Artificial Intelligence is no longer a futuristic concept—it’s reshaping California workplaces today. The question isn’t whether to adopt AI, but how to implement it strategically while navigating California’s complex employment law landscape.
This white paper distills key insights from leading brand strategist Sasha Strauss and employment attorney Anthony Zaller to provide California employers with a practical roadmap for leveraging AI in 2026. Rather than viewing AI adoption through the lens of risk avoidance alone, forward-thinking employers are discovering how AI can simultaneously reduce litigation exposure, enhance employee engagement, and create competitive advantages.
Key Takeaway: The employers who thrive in 2026 won’t be those who ban AI or those who adopt it recklessly. They’ll be the ones who embrace it strategically, with clear boundaries and human oversight.
The Cultural Shift: From “Do Not Use” to “Learn How to Use”
The Reality Check
Here’s what many California employers don’t realize: your employees are already using AI—and not just for work. According to Harvard Business Review analysis, the number one use of ChatGPT globally is therapy. Your team members are consulting AI about workplace conflicts, drafting communications, and making decisions based on AI-generated advice, often without your knowledge.
The cautionary tale that emerges from this reality is sobering. Employees ask AI questions like “Was she allowed to speak to me that way?” and receive responses filled with legalese that they then weaponize in workplace communications. This creates “false anger” and “unjustified dialogue” that might never have materialized without AI amplification.
The New Approach: Rather than prohibiting AI use (which drives it underground), establish clear guidelines and training. Help employees understand when and how to use AI appropriately, and crucially, how to avoid simply copying and pasting aggressive AI-generated responses.
Action Item #1: Implement an AI Use Policy with Training
What to Do:
- Acknowledge that employees will use AI tools in their personal and professional lives
- Create clear policies about when AI can and cannot be used for work purposes
- Train employees on a simple principle: “Before you copy and paste any AI-generated communication, ask yourself: Is this what you really want to say? Can you ask the bot to ‘say this in a kinder way’?”
- Establish “Do Not Learn” protocols for sensitive information (more on this below)
Why It Matters: This proactive approach prevents the underground use that leads to compliance issues while positioning your organization as innovative rather than reactionary.
Transforming Employee Training and Communication
Making Policies More User Friendly
California employers face a unique challenge: workforce diversity. Your employees may speak different languages, come from different educational backgrounds, and have varying levels of familiarity with legal terminology. Traditional employee handbooks and training materials often fail to connect with this diverse audience.
As Sasha Strauss noted during the panel: “We’ve been training our employees in our way of speaking with all this legalese. That’s unkind. Not nice.”
AI offers an unprecedented solution. The same employee handbook can be translated not just into different languages (Spanish, Mandarin, Tagalog), but also into different tones, reading levels, and even formats that engage different learning styles.
As Anthony Zaller noted, “I like developing a project telling AI, ‘Here’s our employee handbook. Based on this dataset, you can only refer to this employee handbook.’ Now your managers can ask questions about the employee handbook and get helpful insights when a question is raised.”
The Power of Accessible Training
Real-World Example: Strauss described a student with limited mobility who was unable to take traditional notes in class. With AI transcription and analysis tools, she had “the best notes of any student in class and was the most confident.” This isn’t just about accommodation—it’s about removing barriers that shouldn’t exist in the first place.
Action Item #2: Revolutionize Your Employee Training Materials
What to Do:
- Create a Master Dataset: Upload your employee handbook, policies, and training materials to a secure AI platform (like Microsoft Copilot for enterprise, which keeps data within your organization).
- Build an Interactive Training System: Allow managers and employees to ask questions about policies and receive clear, accurate answers based solely on your approved materials—not random internet sources.
- Translate and Customize: Use AI to create multiple versions of the same content in different languages, reading levels, tones, and formats.
- Add Engagement Elements: Consider Strauss’s innovative approach of asking AI to incorporate album titles, sports references, or other cultural touchpoints that make dry policy material more engaging.
- Create Regular Micro-Training: Develop 5-minute training videos showing best practices. Film these on an iPhone if needed—authenticity matters more than production value.
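To make the "only refer to this employee handbook" idea concrete, here is a minimal, illustrative Python sketch. It uses simple keyword matching against an approved dataset rather than a commercial AI platform, and the section titles and policy text are invented placeholders; the point is the design: the system answers only from approved material and declines otherwise.

```python
# Minimal sketch: restrict answers to an approved handbook dataset.
# Section titles and policy text below are invented placeholders.

HANDBOOK = {
    "Meal Breaks": "Employees working more than five hours receive a 30-minute meal break.",
    "Overtime": "Non-exempt employees earn overtime after eight hours in a workday.",
}

def answer(question: str) -> str:
    """Return handbook text whose section matches the question, or decline."""
    q = question.lower()
    hits = [f"{title}: {text}" for title, text in HANDBOOK.items()
            if any(word in q for word in title.lower().split())]
    if not hits:
        # Refuse rather than guess -- mirrors "only refer to this handbook".
        return "Not covered in the approved handbook; ask HR."
    return "\n".join(hits)

print(answer("How do meal breaks work?"))
```

A production system would use an enterprise AI platform with retrieval restricted to the uploaded handbook, but the same principle applies: if the approved materials don't answer the question, the system should say so instead of improvising.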
The Legal Benefit: When employees truly understand policies because they’re presented in accessible formats, compliance improves. This directly reduces litigation risk.
The Data Security Imperative: California-Specific Considerations
Understanding the “Do Not Learn” Principle
One of the most critical questions raised during the panel concerned data privacy: “A lot of these AI platforms store or use the data that you have included, and they use it as training models. If you have business secrets, how do you manage that?”
The answer lies in understanding AI boundaries and California’s strict data privacy laws.
Action Item #3: Establish Secure AI Protocols
What to Do:
- Enterprise-Grade Solutions: For any work involving confidential data, use enterprise AI platforms that contractually guarantee they won’t use your data for training and that the information is kept confidential. Options include Microsoft Copilot and the enterprise offerings of other major AI providers.
- Enable “Do Not Learn” Settings: On all AI platforms, toggle the settings to prevent the AI from learning from your queries. This is typically found in privacy or data usage settings.
- Create Separate AI Workspaces: Consider maintaining different AI accounts for different purposes: one for general research, one for internal data, one for confidential information.
- Data Anonymization Workflows: Before using public AI tools, remove employee names, Social Security numbers, specific company details, customer information, and trade secrets.
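The anonymization workflow above can be partially automated. The following is a simplified Python sketch of a pre-submission scrubbing step; the regex patterns and name list are illustrative only and do not constitute a complete PII-removal solution.

```python
import re

# Sketch of a pre-submission scrubbing step for public AI tools.
# Patterns and the name list are illustrative, not a complete PII solution.

SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def anonymize(text: str, known_names: list[str]) -> str:
    """Replace SSNs, emails, and known employee names with placeholders."""
    text = SSN_RE.sub("[SSN]", text)
    text = EMAIL_RE.sub("[EMAIL]", text)
    for name in known_names:
        text = re.sub(re.escape(name), "[EMPLOYEE]", text, flags=re.IGNORECASE)
    return text

prompt = "Maria Lopez (SSN 123-45-6789, maria@example.com) missed a break."
print(anonymize(prompt, known_names=["Maria Lopez"]))
```

Even with automation, a human should review the scrubbed text before it is pasted into any public AI tool, since regexes will miss context-dependent identifiers.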
California Compliance Note: Remember that the California Consumer Privacy Act (CCPA), as amended by the California Privacy Rights Act (CPRA), imposes strict requirements on how you handle personal information. Using AI platforms that learn from employee data could create compliance issues.
Marketing to Your Most Important Audience: Your Employees
The Paradigm Shift
Most employers think about marketing externally—to customers, clients, prospects. But as Strauss emphasized during the panel: “Marketing to your employees may be more important than marketing to your customers.”
This isn’t just brand strategy; it’s risk management. In brand strategy circles, there’s a rule: if a company wants to spend $1 million on a rebrand, they must commit to spending the same amount training employees on how to use and live that brand. Otherwise, the project will fail.
The same principle applies to employment law compliance. You can have the world’s best policies, but if employees don’t understand them, feel engaged with them, or know how to implement them, you’re still exposed to litigation risk.
Action Item #4: Develop an Internal Content Strategy
What to Do:
- Shift Your Mindset: Stop thinking about compliance training as a necessary evil. Start thinking about it as internal marketing that builds culture, reduces turnover, and prevents litigation.
- Create Regular Content: Develop a calendar of weekly or monthly micro-content including training videos, success stories, policy updates, recognition programs, and scenario tips.
- Use AI to Generate Variations: Record one manager demonstrating excellent customer service. Use AI to transcribe it, create a training guide, generate quiz questions, translate it, create an audio version, and draft follow-up emails.
- Make It Engaging: Use gamification strategies like hiding “Easter eggs” in employee handbooks that employees can find for small rewards. This encourages actual engagement with policies rather than cursory sign-offs.
- Measure Engagement: Track which employees watch training videos, complete quizzes, or participate in recognition programs. Tie small incentives to participation.
The Litigation Prevention Angle: When employees feel valued, heard, and well-trained, they’re less likely to file complaints or lawsuits. They’re also better equipped to avoid creating liability through their actions.
The Recording Revolution: Navigating California’s Two-Party Consent Law
The Uncomfortable Truth
California Penal Code Section 632 requires all-party consent for recording confidential communications. This creates tension with emerging workplace technologies: Meta glasses, smartphone recording apps, AI transcription bots in Zoom meetings, and wearable recording devices.
The panel discussion revealed a surprising evolution in thinking. As Zaller admitted:
“Up until a few months ago, my instinct was ‘No. Tell them no. Have a policy against recording in the workplace.’ But I’m starting to change my mind…I’m 50/50 on whether we just record everything in the workplace as employers and allow everybody to record everything, or just say no at this point.” —Anthony Zaller
Why the Shift?
Several factors are driving this reconsideration:
- Inevitability: People are recording anyway. Prohibiting it just drives the behavior underground.
- Protection: The LAPD’s experience with body cameras is instructive. Initially, officers resisted. Now they refuse to work without them because cameras protect officers from false allegations as often as they document misconduct.
- Evidence Availability: In litigation, secret recordings frequently surface. A judge who hears a recording can’t “un-ring that bell,” even if it was illegally obtained.
- Transparency Benefits: When everyone knows recording might occur, behavior improves across the board—from managers to front-line staff to customers.
Action Item #5: Develop a Thoughtful Recording Policy
What to Do:
Option A: Limited Recording (Traditional Approach)
- Prohibit unauthorized recording in the workplace
- Establish exceptions for training purposes, security cameras, manager-authorized documentation, and accessibility needs
- Train managers to recognize potential violations
- Include clear consequences in policy
Option B: Transparent Recording Environment (Emerging Approach)
- Notify all employees that “work activities may be recorded for quality, training, and documentation purposes”
- Establish that by working for the company, employees consent to potential recording
- Create clear guidelines about what can/cannot be recorded and how recordings may be used
- Provide opt-out mechanisms for employees with legitimate concerns
Option C: Hybrid Approach
- Allow recording in customer-facing areas and during formal meetings
- Restrict recording in break rooms, private offices, and informal conversations
- Require notice when recording is occurring
Critical: Post clear signage and include policy acknowledgments in employee handbooks. However, consult with employment counsel before implementing any recording policy.
The 80/20 Rule: AI as Assistant, Not Replacement
Managing Expectations
AI can typically get you 80-90% of the way to a finished product, but human expertise remains essential for catching errors, adding nuance, making strategic decisions, ensuring legal compliance, and adding authenticity.
“I don’t trust AI fully at this point, as it can hallucinate a lot. But it can get you 80% of the way there. And what I’d like to see when a client has a question for us, they have used AI to do a lot of the work already and it’s just verifying something or taking it the next step.” —Anthony Zaller
Action Item #6: Establish Quality Control Processes
What to Do:
- Never Auto-Send AI Content: Always review AI-generated materials before using them. Budget for this review time.
- Create Feedback Loops: When you revise AI-generated content, feed your final version back to the AI. This teaches it your preferences and improves future outputs.
- Subject Matter Expert Review: For technical or legal content, always have a qualified expert review AI outputs.
- Test and Iterate: Before rolling out AI-generated training materials company-wide, test them with focus groups.
- Document Your Process: Maintain records showing that AI was used as a tool with human oversight, not as the decision-maker.
The “Good Parent” Analogy: As Strauss noted: “The bot can only be as good as what you put into it.” Think of AI as a highly capable but inexperienced assistant that needs clear instructions, the right source materials, appropriate boundaries, and careful review.
The Future of Legal Services: What Clients Should Expect
A More Efficient Model
AI is fundamentally changing the economics and delivery of legal services. As Zaller explained: “Your lawyer should be using AI to help bill clients more efficiently. I think the days of mega law firms are over. If you have 10 lawyers working on an employment law case, there is likely to be a lot of inefficiencies. Most cases now only really require two to three experienced lawyers. A small team of experienced lawyers can handle almost any employment law matter now with AI.”
Example: How AI Can Transform Wage-and-Hour Risk Analysis
Consider a typical class action or PAGA matter involving time and payroll data for 1,000 to 3,000 employees. Traditionally, analyzing this volume of information—meal and rest break records, time punches, pay codes, variance reports—could take a legal team of four paralegals several weeks to process. Much of that time is spent manually reviewing timecards, reconciling payroll entries, scanning PDFs, or attempting to interpret screenshots from payroll systems that don’t export clean data.
This delay increases litigation costs, slows strategic decision-making, and makes it difficult for employers to understand their true exposure early in the case.
In 2025, Zaller Law Group began developing AI-powered software that upends this process. Instead of weeks of manual work, the system can now ingest the same data—even if provided only as screenshots—organize it, and analyze it for potential violations in a matter of hours. The tool identifies missed breaks, short shifts, split shifts, off-the-clock indicators, pay inconsistencies, and other wage-and-hour issues with far greater speed and clarity.
This kind of AI-assisted analysis doesn’t replace legal judgment—it magnifies it. By giving attorneys accurate, structured data early in a case, employers can:
- Understand exposure before plaintiffs define the narrative
- Strategically assess settlement ranges
- Identify systemic compliance issues
- Make real-time operational corrections that reduce future claims
This example illustrates how AI can fundamentally shift compliance from reactive to proactive—one of the most impactful opportunities available to California employers in 2026.
Limiting Liability Through Demonstrated “Reasonable Efforts” Under the PAGA Reforms
Beyond accelerating data analysis, AI-powered wage-and-hour review tools offer a critical legal advantage under California’s 2024 PAGA reforms. Under the reforms, employers who can show they made “reasonable efforts” to comply with the Labor Code may qualify for a significantly reduced 15% PAGA penalty cap. Demonstrating these efforts has traditionally required extensive manual audits, paper records, and inconsistent documentation, making it difficult for employers to prove proactive compliance.
By contrast, the new AI software Zaller Law Group is helping develop allows employers to run regular, automated audits of timekeeping and payroll data, flagging potential violations such as late meals, short breaks, missed premiums, or pay inconsistencies before they escalate into litigation. Each audit generates a timestamped record showing the employer examined the data, identified issues, and took corrective action. This creates a clear, defensible trail of compliance efforts.
In a PAGA case, the ability to produce these automated audit reports is powerful evidence that the employer took meaningful steps to follow the law. It strengthens arguments for penalty reductions, supports early resolution, and can dramatically reduce overall exposure. For many employers, adopting AI-driven compliance tools in 2026 will not only improve operations—it will materially reshape litigation outcomes.
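For readers who want to picture what such an automated, timestamped audit looks like, here is a minimal Python sketch. It is illustrative only, not Zaller Law Group’s actual software, and it uses a deliberately simplified meal-break rule (shifts over five hours need a meal break of at least 30 minutes starting before the end of the fifth hour); real compliance analysis involves many more rules and exceptions.

```python
from datetime import datetime, timezone

# Illustrative audit sketch, not Zaller Law Group's actual software.
# Simplified rule: shifts over 5 hours need a meal break of at least
# 30 minutes starting before the end of the fifth hour of work.

def audit_shift(shift: dict) -> list[str]:
    """Flag simplified meal-break issues in one shift record (hours as floats)."""
    issues = []
    worked = shift["end"] - shift["start"]
    if worked > 5.0:
        meal = shift.get("meal")  # (start_offset_hours, length_minutes) or None
        if meal is None:
            issues.append("missed meal break")
        else:
            start_offset, length = meal
            if start_offset > 5.0:
                issues.append("late meal break")
            if length < 30:
                issues.append("short meal break")
    return issues

def run_audit(shifts: list[dict]) -> dict:
    """Produce a timestamped audit record -- the 'defensible trail' idea."""
    return {
        "run_at": datetime.now(timezone.utc).isoformat(),
        "findings": {s["employee"]: audit_shift(s) for s in shifts},
    }

report = run_audit([
    {"employee": "E001", "start": 9.0, "end": 17.5, "meal": (4.5, 30)},
    {"employee": "E002", "start": 8.0, "end": 16.0, "meal": (5.5, 20)},
    {"employee": "E003", "start": 9.0, "end": 18.0, "meal": None},
])
print(report["findings"])
```

The `run_at` timestamp in each report is the key compliance artifact: a dated record that the employer examined its data and flagged issues, which is exactly the kind of trail the paragraph above describes.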
But Human Expertise Remains Essential
Despite these efficiencies, Zaller emphasizes that AI cannot replace attorney judgment: “It’s 90% there, but here’s a nuance to it. And so AI is not going to replace attorneys yet, but it can make attorneys super-efficient, where you don’t have to pay five hours for research now that can be done in 30 minutes.”
The key is understanding where AI excels (data processing, pattern recognition, initial research) and where human expertise remains irreplaceable (nuanced legal analysis, strategic decision-making, understanding client-specific context, and navigating gray areas in California employment law).
Essential Do’s and Don’ts
DO:
- Train employees on appropriate AI use rather than banning it
- Use enterprise-grade AI platforms for confidential information
- Enable “Do Not Learn” settings on all AI platforms
- Anonymize data before using public AI tools
- Review all AI output before using it
- Feed your final versions back to AI to improve its learning
- Set clear boundaries about what can go into AI
- Maintain human oversight of all employment decisions
- Consult with employment counsel before implementing AI for hiring, evaluation, or termination decisions
- Think of AI as a tool to augment humans, not replace them
DON’T:
- Don’t use AI output without review – it can hallucinate or miss nuances
- Don’t upload confidential data to public AI platforms – use secure, enterprise versions
- Don’t trust AI for nuanced legal analysis – use it for drafts, not final legal advice
- Don’t ignore California’s two-party consent law – develop thoughtful recording policies
- Don’t make AI-driven employment decisions without human review
- Don’t copy-paste AI communications without adding your authentic voice
- Don’t forget to update policies – AI capabilities and risks evolve rapidly
- Don’t assume one-size-fits-all – customize AI approach to your industry and workforce
- Don’t wait for perfect – start with pilots and iterate
Conclusion: The Choice Is Yours
AI adoption in the workplace isn’t a future scenario—it’s happening now. Your employees are using it, your competitors are leveraging it, and your customers expect the service levels it enables.
The question isn’t whether AI will transform your workplace. It’s whether you’ll be proactive or reactive in managing that transformation.
Forward-thinking California employers are discovering that AI, implemented thoughtfully with appropriate safeguards, can simultaneously reduce litigation exposure, increase employee engagement, improve efficiency, create competitive advantages, and build stronger workplace culture.
As Sasha Strauss emphasized during the panel: “If we want to stay in business, we have to be at the forefront of this transformation, not on the back end.”
AI won’t replace the human element of employment—but it can help you be more human, not less. It can free your team from rote tasks to focus on relationships, strategy, and the work that truly requires human judgment.
The employers who thrive in 2026 and beyond won’t be those with the most advanced AI. They’ll be those who use AI most thoughtfully, maintaining their humanity while leveraging technology’s power.
The time to start is now.
About Zaller Law Group
Zaller Law Group specializes in helping California employers navigate complex employment law challenges while building stronger workplaces. Our team stays at the forefront of emerging issues, including AI implementation, to provide practical guidance that protects our clients while positioning them for success.
For assistance with AI policies, implementation strategy, or any employment law matter, contact us at www.zallerlaw.com.
***
This white paper is provided for informational purposes only and does not constitute legal advice. Employers should consult with qualified employment counsel regarding specific situations and compliance questions.
© 2025 Zaller Law Group. All rights reserved.

