How it all started
The Cambridge Analytica scandal in 2018, where the personal data of millions of Facebook users was harvested without consent for political advertising, served as a major catalyst for data governance laws worldwide. This incident not only accelerated discussions around privacy in the U.S. Congress but also influenced the global conversation on data protection, including AI governance.
Federal laws are still playing catch-up
As of 2024, the United States still lacks comprehensive federal legislation specifically governing AI or data privacy across all sectors.
The most significant recent federal action on AI was President Biden's Executive Order on "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence" issued in October 2023. While this provides guidance for federal agencies, it is not binding legislation.
EU's AI Act
The European Union has taken a leading role in AI regulation with the AI Act, which is the world's first comprehensive AI law.
The Act establishes a regulatory framework for artificial intelligence that aims to ensure AI systems in the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly.
Here is how the AI Act is structured
Risk-based approach
The AI Act categorizes AI systems based on their risk level and applies different regulations accordingly:
- Unacceptable risk: Certain AI practices deemed to pose unacceptable risks are banned. Examples include:
- Social scoring systems
- Cognitive behavioral manipulation of vulnerable groups
- Real-time remote biometric identification systems in public spaces (with some exceptions for law enforcement)
- High risk: AI systems in areas like critical infrastructure, education, employment, and law enforcement face strict requirements, including:
- Conformity assessments before market placement
- Registration in an EU database
- Human oversight
- Detailed documentation and risk management
- Limited risk: Minimal transparency obligations apply to systems like chatbots and deepfakes. Users must be informed when they are interacting with AI or when content is AI-generated.
- Minimal risk: Free use allowed.
The Act also sets transparency requirements for generative AI. Systems like ChatGPT must:
- Disclose that the content was generated by AI
- Design the model to prevent generating illegal content
- Publish summaries of copyrighted data used for training
High-impact general-purpose AI models like GPT-4 face additional obligations. They must:
- Undergo thorough evaluations
- Report serious incidents to the European Commission
Content that is generated or modified by AI (e.g., deepfakes) must be clearly labeled as such.
Governance and enforcement
- The European AI Office will oversee the AI Act's implementation and enforcement
- Penalties for non-compliance can reach up to €35 million or 7% of global annual turnover, whichever is higher
US state laws
In the absence of comprehensive federal legislation, several states have enacted or proposed their own AI and data privacy laws, creating a patchwork of regulations across the country.
California, Colorado, Connecticut, and Virginia have passed comprehensive consumer privacy laws that include provisions on automated decision-making and profiling. As of August 2023, California is leading the charge in defining and enacting laws surrounding AI and data privacy.
California was one of the first states to enact comprehensive consumer privacy laws that impact AI systems, including:
- The California Consumer Privacy Act (CCPA)
- The California Privacy Rights Act (CPRA), which expanded the CCPA
Proposed AI-Specific Legislation
As of mid-2024, California is considering several AI-related bills:
AB2930: Introduced by Assembly Member Bauer-Kahan on February 15, 2024, this bill aims to regulate the use of automated decision systems (ADSs) to prevent algorithmic discrimination, particularly in employment-related decisions.
SB942 (California AI Transparency Act): Introduced by Senator Becker on January 17, 2024, this bill aims to increase transparency and accountability in the use of generative AI systems.
AB3048: Introduced by Assembly Member Lowenthal on February 16, 2024, this bill would amend the CCPA of 2018 regarding opt-out preferences.
SB892: This measure would establish standards for AI services used in public contracts.
SB896 (Gen AI Accountability Act): Requires state agencies to disclose AI interactions and conduct risk evaluations.
AB2013 (AI Training Data Transparency): The goal of this bill is to increase transparency around the data used to train AI systems.
SB1047 (Safe and Secure Innovation for Frontier Artificial Intelligence Models Act): Introduced by State Senator Scott Wiener, this bill would regulate the development and use of advanced AI models for safety and security.
AB1008 (2024 session): This new assembly bill, awaiting the Governor's signature, would amend the California Consumer Privacy Act of 2018 (CCPA) definitions of "personal information" and "publicly available" information. The bill would specify that "publicly available" information does not include data gathered from internet websites using automated mass data extraction techniques.
Join us for the sequel of the AI & The Law webinar, where we will discuss some of these regulations and laws and how they shape the way developers build and deploy AI systems, weighing risks, implications, privacy, rights, and innovation.