InfoLawGroup LLP


Let’s Recap: California’s New AI Laws

by: Dhara Shah

The end of California’s legislative session brings a series of new AI-related bills that currently await Governor Newsom’s signature. If signed, these bills will expand on the existing requirements for developing and deploying GenAI systems that we have seen under global AI laws and guidance, including Colorado’s AI law.

While we highlight five generative AI (“GenAI”) bills that will have the broadest impact, note that a handful of additional bills were also passed. At a high level, these other bills, if signed into law, would impose requirements on using AI: to create non-consensual digital replicas of the deceased; in relation to elections and political ads; to make health insurance decisions; to communicate with healthcare providers’ patients; and in the public sector.

Below, we discuss five of the key AI bills that will impose various disclosure, development, and testing requirements on developers and deployers of GenAI systems.

1.     SB 942 – The California AI Transparency Act

Applies to: The CA AI Transparency Act applies to “covered providers,” which means a person that creates, codes, or otherwise produces a GenAI system that has over 1,000,000 monthly visitors or users and is publicly accessible in California.

Key Requirements: The Act places a variety of requirements on covered providers, including to:

a.     Create a free, publicly available AI detection tool. This tool must:

  • Allow a user to upload content or provide a URL to online content in order to assess whether the content was made or modified using the covered provider’s GenAI system.

  • Output any system provenance data (but not personal provenance data) that is detected in the content; that is, at a high level, data that helps verify the content’s authenticity, origin, or modification.

  • Support an API that allows users to invoke the AI detection tool without visiting the provider’s website.

b.     Offer users the option to include a clear, conspicuous, and appropriate disclosure that identifies content as AI generated. The disclosure should be permanent or extraordinarily difficult to remove.

c.      Include a disclosure in content generated using its GenAI system that is detectable by its AI detection tool, is consistent with widely accepted industry standards, and is permanent or extraordinarily difficult to remove. The disclosure should include:

  • Name of the covered provider;

  • Name and version number of the GenAI system that created/altered the content;

  • Time and date of content’s creation/alteration;

  • Unique identifier.

d.     Enter into a contract with any third party to which it licenses its GenAI system. The contract must obligate the third party to maintain the system’s ability to include the disclosure noted above; if the third party fails to do so, the covered provider must revoke the license.
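The latent disclosure described in item (c) can be pictured as a small provenance record embedded in generated content. The sketch below is purely illustrative: SB 942 prescribes the four pieces of information but defers to widely accepted industry standards for the format, and every field name here is an assumption.

```python
import json
import uuid
from datetime import datetime, timezone

def build_provenance_disclosure(provider: str, system: str, version: str) -> dict:
    """Illustrative record covering the four fields SB 942 requires in a
    latent disclosure. Field names are hypothetical; the Act defers to
    industry standards for how the record is actually embedded."""
    return {
        "provider": provider,          # name of the covered provider
        "system": system,              # GenAI system that created/altered the content
        "version": version,            # version number of that system
        "timestamp": datetime.now(timezone.utc).isoformat(),  # time/date of creation
        "id": str(uuid.uuid4()),       # unique identifier
    }

record = build_provenance_disclosure("ExampleAI", "ExampleGen", "2.1")
print(json.dumps(record, indent=2))
```

In practice, providers are likely to satisfy this through an established provenance standard rather than an ad hoc record like this one.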

2.     AB 2013 – AI Training Data Transparency

Applies to: AB 2013 applies to “developers,” a person, partnership, state or local government agency, or corporation that designs, codes, produces, or substantially modifies an artificial intelligence system or service for use by members of the public. “Substantial modification” of a GenAI system means a new version, new release, or other update to a GenAI system or service that materially changes its functionality or performance, including the results of retraining or fine-tuning.

In certain instances, the requirements of this bill can thus extend to businesses that use a third-party GenAI system and supply their own data to further enhance the system.

Key Requirements: AB 2013 requires developers to post documentation on their websites explaining the data used to train their GenAI systems or services. This documentation should include, but is not limited to:

a.     Sources and owners of the datasets,

b.     Description of how datasets further the intended purpose of the AI system,

c.      Number of data points included in datasets,

d.     Description of the types of data points in the datasets (types of labels/general characteristics),

e.     Whether the dataset includes data protected by IP or is entirely in the public domain,

f.      Whether datasets were purchased/licensed,

g.     Whether datasets include personal information,

h.     Whether datasets include aggregate consumer information,

i.       Whether there was cleaning/processing/modification of datasets,

j.       Time period over which the datasets were collected (or whether collection is ongoing),

k.     Dates the datasets were first used during development of the AI system,

l.       Whether the AI system continuously uses synthetic data generation in its development.
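Taken together, items (a) through (l) amount to a structured datasheet for each training dataset. The sketch below is illustrative only: AB 2013 prescribes the content of the documentation, not its format, and all field names and example values are assumptions.

```python
from dataclasses import dataclass, asdict

@dataclass
class TrainingDataDisclosure:
    """Hypothetical checklist mirroring AB 2013's documentation items (a)-(l)."""
    sources_and_owners: list                # (a) sources and owners of the datasets
    purpose_description: str                # (b) how datasets further the intended purpose
    num_data_points: int                    # (c) number of data points
    data_point_types: str                   # (d) types of labels / general characteristics
    includes_ip_protected_data: bool        # (e) IP-protected vs. public domain
    purchased_or_licensed: bool             # (f) purchased/licensed
    includes_personal_information: bool     # (g) personal information
    includes_aggregate_consumer_info: bool  # (h) aggregate consumer information
    was_cleaned_or_processed: bool          # (i) cleaning/processing/modification
    collection_period: str                  # (j) time period, or "ongoing"
    first_used: str                         # (k) date first used in development
    uses_synthetic_data_generation: bool    # (l) continuous synthetic data use

disclosure = TrainingDataDisclosure(
    sources_and_owners=["Example Corpus (Example Org)"],
    purpose_description="General-purpose text generation",
    num_data_points=1_000_000,
    data_point_types="Unlabeled web text",
    includes_ip_protected_data=True,
    purchased_or_licensed=False,
    includes_personal_information=False,
    includes_aggregate_consumer_info=False,
    was_cleaned_or_processed=True,
    collection_period="2020-2023",
    first_used="2024-01-15",
    uses_synthetic_data_generation=False,
)
print(asdict(disclosure))
```

A structured record like this also makes it easier to keep the posted documentation current as datasets change.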

3.     AB 2905 – AI Voice Amendment to Public Utilities Code

Applies to: AB 2905 applies to phone calls placed using an automatic dialing-announcing device and extends requirements when such calls are created using an “artificial voice,” which means a voice that is generated or significantly altered using artificial intelligence.

Key Requirements: Specifically, the bill requires that such phone calls may only occur with an announcement that first:

a.     States the nature of the call and the name, address, and phone number of the business;

b.     Obtains consent from the user to hear the prerecorded message; and

c.      Where applicable, informs the user that the prerecorded message uses an artificial voice.

4.     AB 2602 – AI Digital Replica Amendment to Labor Code

Applies to: AB 2602 applies to agreements for the performance of personal or professional services where the agreement permits performance by a digital replica of the individual.

Key Requirements: Specifically, the bill notes that such an agreement is unenforceable if:

a.     The agreement allows for the creation and use of a digital replica of the individual’s voice or likeness in place of work the individual would have otherwise performed in person;

b.     The agreement does not include a reasonably specific description of the intended uses of the digital replica; and

c.      The individual was not represented by legal counsel or by a labor union when negotiating the agreement.

5.     SB 1047 – Safe and Secure Innovation for Frontier AI Act

Applies to: SB 1047 applies to “developers,” a person that performs the initial training of a covered model either by training a model using a sufficient quantity of computing power and cost, or by fine-tuning an existing covered model or covered model derivative using a quantity of computing power and cost greater than the amount specified under the bill. A covered model includes an AI model trained using a quantity of computing power greater than 10^26 integer or floating-point operations (cost >$100 million), or an AI model created by fine-tuning a covered model using a quantity of computing power equal to or greater than three times 10^25 integer or floating-point operations (cost >$10 million). As noted above for AB 2013, in certain instances this can extend to businesses using a third-party GenAI system where they use their own data to further enhance the system and the system constitutes a “covered model” under this bill.
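The compute and cost thresholds above reduce to a simple two-part check. The following is a simplified sketch for illustration only; the bill’s actual definitions carry additional nuance, and the function name and parameters are our own.

```python
def is_covered_model(ops: float, cost_usd: float, fine_tune: bool = False) -> bool:
    """Sketch of SB 1047's 'covered model' thresholds as described above.
    Initial training: > 1e26 integer/floating-point ops and cost > $100M.
    Fine-tuning a covered model: >= 3e25 ops and cost > $10M.
    Simplified for illustration; not a substitute for the statutory text."""
    if fine_tune:
        return ops >= 3e25 and cost_usd > 10_000_000
    return ops > 1e26 and cost_usd > 100_000_000

print(is_covered_model(2e26, 150_000_000))                 # large initial training run -> True
print(is_covered_model(3e25, 20_000_000, fine_tune=True))  # large fine-tune -> True
```

Note that both prongs (compute and cost) must be met for a model to be covered under either path.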

SB 1047 may ring a bell, as it has been living in the headlines, and it remains unclear whether it will be signed by the governor or vetoed. If signed, SB 1047 will require developers of AI systems to implement various practices before training the AI system and after deploying the system in order to prevent noted security and related risks. While we will do a deeper dive if and when the bill is signed into law, here are a few key requirements:

Key Requirements:

Before training the AI model, developers must:

  • Implement reasonable administrative, technical, and physical security protections;

  • Implement the capability to promptly enact a full shutdown of the AI system;

  • Implement and publish a written safety and security protocol, that meets specific requirements; and

  • Conduct annual reviews of the safety and security protocol, amongst other requirements.

Before using the AI system, developers must:

  • Assess whether the system is reasonably capable of causing critical harm and record and retain these test results;

  • Implement safeguards to protect against enabling critical harms; and

  • Take care to ensure the system’s actions can be accurately and reliably attributed to it, amongst other requirements.

The bill also requires developers to retain a third-party auditor to ensure compliance with the law and to publish a redacted copy of the auditor’s report, as well as to meet certain reporting obligations, including reporting safety incidents affecting the covered model to the Attorney General.

So, What Does This All Mean For My Business?: The underlying concepts here have not changed – from day one we have been advising clients on ensuring that the GenAI systems they are developing or deploying are trained using proper data, have been tested to ensure they work as intended, and are monitored for ongoing issues. As the legal landscape surrounding GenAI continues to evolve, it becomes increasingly important to vet every GenAI system that is used internally by your employees or integrated externally into your products and services. This can be done by creating an “AI by design” process, including implementing internal AI usage policies, employee AI training, and AI checklists.

Originally published by InfoLawGroup LLP. If you would like to receive regular emails from us, in which we share updates and our take on current legal news, please subscribe to InfoLawGroup’s Insights HERE.