AI Is Reshaping the American Workplace
Artificial intelligence is no longer a futuristic concept — it is deeply embedded in the modern workplace. From resume-screening algorithms that decide which candidates get interviews to productivity monitoring software that tracks every keystroke, AI is making decisions that directly affect workers' livelihoods. In 2026, the pace of AI adoption in employment has accelerated dramatically, raising urgent questions about fairness, accountability, and worker rights.
For employees, the implications are profound. An algorithm might determine whether you get hired, how your performance is evaluated, whether you receive a promotion, or even whether you are terminated. These decisions, which were once made by human managers with whom you could reason and negotiate, are increasingly being delegated to opaque software systems that operate without transparency or accountability.
At Zaghi & Chrzan, LLP, we are closely tracking these developments because they represent the next frontier of employment law. Understanding how AI intersects with existing legal protections is essential for any worker navigating today's job market.
AI-Powered Hiring Tools and Algorithmic Bias
One of the most widespread uses of AI in employment is in the hiring process. Companies are using AI-powered tools to screen resumes, evaluate video interviews, assess personality traits, and rank candidates — often before a human being ever reviews an application.
The problem is that these tools can embed and amplify existing biases. AI systems learn from historical data, and if that data reflects past discriminatory practices, the AI will replicate those patterns. This is not a theoretical concern — it has been documented repeatedly:
- Resume screening algorithms have been found to penalize candidates with names associated with certain racial or ethnic backgrounds, gaps in employment history (which disproportionately affect women and caregivers), and degrees from historically Black colleges and universities
- Video interview AI that analyzes facial expressions, tone of voice, and word choice can discriminate against people with disabilities, non-native English speakers, and neurodivergent individuals whose communication styles may differ from the "ideal" patterns the AI was trained on
- Personality assessments powered by AI often screen out candidates with mental health conditions or disabilities, even when those conditions have no bearing on the individual's ability to perform the job
- Automated job advertising has been shown to target job postings based on age and gender, effectively excluding qualified candidates from even seeing opportunities
The fundamental issue is that AI bias is often invisible. A candidate who is rejected by an algorithm may never know that AI was involved in the decision, let alone that the AI's decision was influenced by discriminatory patterns.
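To make the mechanism concrete, here is a toy illustration (all data, zip codes, and feature names here are hypothetical, not drawn from any real case): a screener trained simply to mimic past hiring outcomes will reproduce whatever pattern those outcomes contained, even though the protected characteristic is never given to the model, because a correlated proxy feature carries the same signal.

```python
from collections import defaultdict

# Hypothetical historical hiring records: (zip_code, group, hired).
# "zip_code" acts as a proxy that correlates with a protected group;
# the screener is never shown the group label itself.
history = [
    ("90001", "A", 1), ("90001", "A", 1), ("90001", "A", 1), ("90001", "A", 0),
    ("90210", "B", 0), ("90210", "B", 0), ("90210", "B", 0), ("90210", "B", 1),
]

# "Training": learn the historical hire rate for each zip code.
totals, hires = defaultdict(int), defaultdict(int)
for zip_code, _group, hired in history:
    totals[zip_code] += 1
    hires[zip_code] += hired
hire_rate = {z: hires[z] / totals[z] for z in totals}

def screen(zip_code):
    """Advance a candidate only if their zip code historically hired well."""
    return hire_rate[zip_code] >= 0.5

# New applicants who are equally qualified, differing only in the proxy.
for zip_code, group in [("90001", "A"), ("90210", "B")]:
    print(group, "advances" if screen(zip_code) else "rejected")
# Group A advances and group B is rejected: the model never saw "group",
# yet the historical pattern passes straight through the proxy feature.
```

A real screening model is far more complex, but the failure mode is the same: optimizing to match past decisions preserves past discrimination, and the candidate on the receiving end sees only a rejection.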
Automated Performance Monitoring and Workplace Surveillance
Beyond hiring, AI is increasingly used to monitor and evaluate employees on the job. The rise of remote work during and after the pandemic accelerated the adoption of employee monitoring software, and these tools have only become more sophisticated in 2026.
Common forms of AI-powered workplace surveillance include:
- Keystroke logging and screen monitoring: Software that tracks every keystroke, takes periodic screenshots, and measures "active" versus "idle" time on company computers
- Email and messaging analysis: AI that scans employee communications for sentiment, productivity indicators, or policy violations
- Location tracking: GPS monitoring of company vehicles and mobile devices, often extending beyond work hours
- Biometric monitoring: Systems that use facial recognition, eye tracking, or wearable devices to monitor employee attention, stress levels, and physical activity
- Productivity scoring: Algorithms that aggregate multiple data points to generate a "productivity score" for each employee, which is then used in performance reviews and termination decisions
These monitoring systems raise serious concerns about worker privacy, dignity, and fairness. Employees may feel pressured to work through breaks, avoid using the restroom, or suppress natural behaviors to maintain a high "productivity score." The psychological toll of constant surveillance can be immense, contributing to burnout, anxiety, and a hostile work environment.
AI-Driven Termination Decisions
Perhaps the most consequential use of AI in employment is in termination decisions. Some companies are using algorithms to identify "low performers" or "flight risks" and to automatically flag, or even initiate, termination proceedings without meaningful human oversight.
This is deeply problematic for several reasons:
- Lack of context: AI systems cannot account for the full range of factors that affect an employee's performance — a family emergency, a medical condition, a hostile manager, or systemic issues within the company
- Reinforcement of bias: If an AI system uses biased performance metrics or surveillance data, it will disproportionately flag employees from protected groups for termination
- Due process concerns: Employees terminated based on algorithmic decisions often have no meaningful opportunity to understand or challenge the basis for the decision
- Pretextual terminations: Employers can use AI-generated "performance data" as a pretext for terminations that are actually motivated by discrimination or retaliation
If you have been terminated and suspect that an AI system played a role in the decision, it is critical to consult with an experienced wrongful termination attorney who understands these emerging issues.
California's Proposed AI Regulations
California has been at the forefront of efforts to regulate AI in the workplace. Several key legislative and regulatory developments are shaping the legal landscape in 2026:
- AB 2930 (Automated Decision Tools): This bill, which stalled in the Legislature in 2024 but remains a template for ongoing proposals, would require employers to conduct impact assessments before deploying automated decision tools that make or substantially influence consequential decisions about employees and job applicants. It would also require employers to provide notice and an opportunity for affected individuals to request human review
- California Privacy Rights Act (CPRA): The CPRA, which took full effect in 2023, directs the California Privacy Protection Agency to adopt regulations governing automated decision-making technology, including access and opt-out rights for California residents. Those rights can reach AI-powered hiring and performance evaluation tools, and the Agency has been developing detailed rules on how they apply in employment contexts
- SB 1047 (AI Safety): Although Governor Newsom vetoed this bill in 2024, the transparency and accountability standards it proposed continue to influence how regulators think about AI oversight, including workplace AI deployments
- Local ordinances: Several California cities, following New York City's lead with Local Law 144, have enacted or are considering local ordinances requiring bias audits for automated employment decision tools
These regulations are evolving rapidly, and employers are still figuring out how to comply. Workers who believe they have been harmed by AI-powered employment decisions should not wait for the regulatory landscape to settle — existing anti-discrimination laws already provide powerful protections.
EEOC Guidance on AI and Employment Discrimination
The U.S. Equal Employment Opportunity Commission (EEOC) has issued important guidance on how existing federal anti-discrimination laws apply to AI-powered employment tools. Key points include:
- Employers are liable for AI bias. Even if an employer purchases an AI tool from a third-party vendor, the employer is responsible for ensuring the tool does not discriminate. "The vendor told us it was unbiased" is not a defense
- Disparate impact applies to algorithms. Under Title VII of the Civil Rights Act, employment practices that have a disproportionate negative impact on protected groups are unlawful, regardless of the employer's intent. This applies fully to AI-powered tools
- The ADA applies to AI hiring tools. AI assessments that screen out individuals with disabilities may violate the Americans with Disabilities Act unless the employer can show the assessment is job-related and consistent with business necessity, and that reasonable accommodations were offered
- Employers must provide notice. In many contexts, employers must inform candidates and employees when AI tools are being used to make employment decisions
How AI Intersects with California Employment Law
California's Fair Employment and Housing Act (FEHA) provides even broader protections than federal law, and these protections apply fully to AI-powered employment decisions. Here's how existing California law addresses AI-related issues:
- FEHA covers all protected categories. If an AI tool disproportionately screens out or disadvantages employees or applicants based on race, gender, age, disability, sexual orientation, national origin, or any other FEHA-protected characteristic, it violates California law
- Failure to accommodate. If an AI-powered assessment or monitoring system does not accommodate an employee's disability, the employer may be liable under FEHA's reasonable accommodation requirements
- Retaliation protections. Employees who raise concerns about AI-driven discrimination or surveillance practices are protected from retaliation under California law
- Privacy protections. The California Constitution includes an explicit right to privacy, which can be implicated by intrusive AI monitoring in the workplace. Additionally, California's wiretapping and electronic surveillance laws may limit certain forms of AI-powered employee monitoring
The intersection of AI and employment law is complex, but the core principle is straightforward: employers cannot use technology to do what they are prohibited from doing directly. If it would be illegal for a human manager to discriminate based on race, it is equally illegal for an algorithm to do so.
Worker Privacy Concerns in the AI Era
The expansion of AI-powered workplace monitoring has created unprecedented threats to worker privacy. In California, employees have stronger privacy protections than in most states, but the law is still catching up to the technology. Key concerns include:
- Off-duty monitoring: Some AI-powered tools track employees beyond work hours through company-issued devices or applications. California law protects employees' lawful off-duty conduct, and employers who monitor personal activities may face liability
- Biometric data collection: Facial recognition and other biometric monitoring systems raise serious privacy issues. While California does not yet have a comprehensive biometric privacy law like Illinois' BIPA, the CPRA provides some protections for biometric data
- Emotional surveillance: AI tools that attempt to detect employees' emotional states through facial expressions, voice analysis, or physiological data are among the most invasive forms of workplace monitoring and are facing increasing legal scrutiny
- Data security: The vast amounts of employee data collected by AI monitoring systems create significant data breach risks, potentially exposing sensitive personal information
What Employees Can Do If They Suspect AI-Based Discrimination
If you believe that an AI-powered tool has been used to discriminate against you in hiring, performance evaluation, or termination, here are the steps you should take:
- Document everything. Save any communications about the AI tools used by your employer, any notices you received about automated decision-making, and any evidence of adverse treatment
- Request information. Under the CPRA, California residents have the right to request information about the personal data collected about them and how it is used. This can help uncover whether and how AI tools were involved in employment decisions
- Request human review. If you were subject to an automated decision, you may have the right to request that a human being review the decision. Exercise this right promptly
- File a complaint. You can file a complaint with the California Civil Rights Department (CRD) for discrimination claims, or the California Privacy Protection Agency for privacy violations related to AI
- Consult an employment attorney. AI discrimination cases are complex and require attorneys who understand both employment law and the technology involved. An experienced firm can help you identify the right legal theories and build a compelling case
Looking Ahead: The Future of AI and Employment Law
The use of AI in the workplace is only going to increase. As these tools become more powerful and more pervasive, the legal framework governing their use will continue to evolve. Workers should stay informed about their rights and be vigilant about how AI is affecting their employment.
At Zaghi & Chrzan, LLP, we believe that technology should not be used as a shield for discrimination or a tool for depriving workers of their rights. We are committed to staying at the forefront of these issues and advocating for fair treatment of all employees, regardless of whether decisions are made by humans or algorithms.