California Proposes AB 1018 to Regulate AI in Jobs, Healthcare and Essential Services
Image Credit: Jacky Lee
On February 20, 2025, California Assembly member Rebecca Bauer-Kahan introduced Assembly Bill 1018 (AB 1018), also known as the Automated Decisions Safety Act. This legislation aims to regulate artificial intelligence systems—specifically automated decision systems (ADS)—used in critical sectors such as employment, healthcare, housing, education, finance, criminal justice, and essential services. The bill seeks to ensure transparency, fairness, and accountability, addressing growing concerns about bias and errors in AI that can significantly affect people’s lives.
Key Provisions: Developer and Deployer Responsibilities
AB 1018 targets “covered ADS,” defined as AI systems that use machine learning, statistical modeling, or data analytics to generate outputs—such as scores, classifications, or recommendations—that influence high-impact decisions.
For developers, the bill mandates pre-deployment evaluations to ensure compliance with anti-discrimination laws and to test for biases or inaccuracies.
For deployers (e.g., employers, healthcare providers), there are obligations for ongoing audits, including independent third-party evaluations, to ensure AI systems operate fairly and reliably.
These measures aim to prevent harm and misuse while promoting accountability throughout the AI lifecycle.
Transparency, Opt-Out Rights, and User Protections
Starting January 1, 2027, organizations must:
Notify individuals when an ADS is used in decisions affecting them (e.g., job screening, treatment plans).
Explain how the AI system reached its decision and what personal data was used.
Provide individuals the right to opt out and request human review in critical areas like employment or healthcare.
Allow individuals to correct inaccurate data and appeal AI-driven decisions.
These provisions are designed to empower individuals and give them control over AI’s role in their lives.
Regulatory Oversight and Enforcement
The bill authorizes public agencies such as the California Attorney General and Civil Rights Department to pursue civil actions against non-compliant entities. Developers, deployers, or third-party auditors must submit unredacted performance evaluations and impact assessments within 30 days of an official request. These documents are exempt from the California Public Records Act to protect proprietary information.
Additionally, deployers must establish a governance program with designated personnel to manage compliance and address AI-related risks promptly.
Sector-Specific Impacts
Employment: AB 1018 takes direct aim at AI used in hiring, promotion, and performance evaluations. Starting in 2027, employers must disclose when AI is involved in these decisions and offer employees or applicants the right to request human review.
Healthcare: The bill complements existing laws like AB 3030, which requires disclaimers for AI-generated medical messages. It ensures that patients are notified when AI informs care decisions and can request human oversight, mitigating the risk of biased or inaccurate diagnoses or treatment paths.
Legislative Status and Timeline
AB 1018 is still moving through the legislative process. Following recent amendments, it was re-referred to the Assembly Committee on Privacy and Consumer Protection. A scheduled hearing for March 28, 2025, was canceled at the author’s request—suggesting continued revisions or strategic timing. If passed, the bill could reach Governor Newsom’s desk by late September 2025, with core provisions taking effect on January 1, 2027.
How AB 1018 Compares Nationally and Globally
Compared to other U.S. laws, such as Colorado’s AI Act or Illinois’s Artificial Intelligence Video Interview Act, AB 1018 is far more expansive. While others focus narrowly on developer responsibilities or specific use cases, California’s bill regulates both developers and deployers across multiple sectors and includes rigorous requirements for audits, transparency, and user rights. It aligns with the European Union’s AI Act in its focus on high-risk systems, but remains tailored to California’s legal framework and existing privacy protections.
Opposition and Challenges Ahead
Despite broad public support, the bill faces pushback from business groups like the Bay Area Council, which opposed similar past bills (e.g., AB 331 and AB 2930), citing concerns over cost, complexity, and stifled innovation. The Society for Human Resource Management (SHRM) also argues that mandatory audits and full transparency could disproportionately burden small and mid-sized companies, and advocates instead for risk-based, contextual evaluations.
On the other hand, consumer and labor advocacy groups such as TechEquity Collaborative and SEIU California contend that these protections are essential to prevent discrimination and harm caused by unchecked AI systems.
National and Federal Implications
AB 1018 could become a model for other states and increase pressure for a coherent federal AI regulatory strategy, especially in contrast with the Trump administration’s lighter-touch approach, as outlined in its February 2025 Executive Order. While preemption remains unlikely without federal legislation, California’s action will likely influence future discussions and set important legal precedents.