California Faces Rising Tensions Over AI Regulation Amid Trump’s Threats
This year, many of the world’s most powerful artificial intelligence companies face a pitched battle over government regulation on their home turf—California.
Even President Donald Trump’s threat to punish states that regulate AI may not stop the fight. California lawmakers, dominated by Democrats, are determined to place guardrails on the homegrown industry, arguing that unfettered AI poses a mental health risk to both children and adults. They dismiss Trump’s executive order from December, which aims to withhold federal funding from states adopting AI rules, asserting that it is their responsibility to act. While companies have launched a lobbying blitz to block what they deem onerous regulations, legislators remain undeterred.
Related: Chatbot Grok Makes Sexual Images of Kids as Users Test AI Guardrails
“They’ve spent a lot of money, and their influence is real,” said Assemblymember Rebecca Bauer-Kahan. “When the harms become so salient, my colleagues are there to serve the people, and that noise doesn’t stop us.” She plans to reintroduce a bill in January that would bar minors from using “companion” chatbots that form human-like relationships, legislation that Governor Gavin Newsom vetoed last year.
Legislators in both blue and red states are pushing back against Trump on AI, as reports increase of children and adults developing unhealthy or even delusional attachments to chatbots. California has a history of passing legislation that serves as a blueprint for other states. For instance, in September, Newsom signed a bill requiring AI developers to disclose safety protocols, which New York lawmakers are now using to shape their own regulations.

However, the push for AI regulations in California faces a stark economic reality. The industry has become a significant source of tax revenue for the state’s approximately $320 billion budget. The Silicon Valley giants targeted by lawmakers boast a combined market value exceeding $15 trillion. AI-driven income tax withholdings from companies like Apple Inc., Nvidia Corp., and Alphabet Inc. now contribute an estimated $10 billion to the state treasury—considered the “lone bright spot” in an otherwise grim fiscal forecast, according to California’s nonpartisan Legislative Analyst’s Office.
“Where do you think this money comes from?” Newsom remarked in an interview with Bloomberg Businessweek.
This situation places Newsom, a Democrat, in a challenging position. As he weighs a run for the White House in 2028, anxious parents represent a potent voting bloc. Yet industry representatives caution that AI companies could relocate if faced with stringent regulations, following a recent corporate exodus from California that includes Hewlett Packard Enterprise Co., Oracle Corp., and Tesla Inc.
Related: What to Know About Trump’s Executive Order to Curtail State AI Regulations
“It’s the ace up their sleeves,” said Catherine Bracy, CEO of TechEquity, a nonprofit advocating for AI safeguards. “And they play that card a lot.”
Consequently, legislators, lobbyists, and AI entrepreneurs are engaged in intense negotiations regarding children’s access to technology and the use of copyrighted material in AI training. California voters may also have a say, as child advocacy group Common Sense Media is gathering signatures for a proposed state ballot initiative to limit underage chatbot use. OpenAI, meanwhile, is looking to place a competing measure on the November ballot. Each side must collect more than half a million valid signatures by June to qualify.
Common Sense’s founder, Jim Steyer, has been in discussions with OpenAI executives, seeking a compromise. Steyer, whose billionaire brother Tom is running for governor to replace the termed-out Newsom, describes California as the “de facto center of important regulation of Big Tech.”
“We appeal to their better instincts as parents, as citizens, and as good people,” Jim Steyer stated. “At the end of the day, we believe that California will set the standard for the rest of the nation. We think we have the momentum, and the public agrees with us.”
OpenAI, in a statement, indicated that the company is “exploring additional ways to strengthen teens’ safety protections, including age prediction and more parental controls, building on California’s existing standards.”
Tech companies, recognizing the stakes, are ramping up their political engagement and highlighting AI’s contributions to California’s public schools and social services. “The AI industry is not asking for tax relief or special tax handouts in Sacramento—it just wants greater appreciation for the role it’s playing in powering the state’s social safety net,” said Adam Kovacevich, CEO of Chamber of Progress, a group advocating for pro-tech policies within the Democratic Party. Backed by Andreessen Horowitz, Apple, OpenAI, and others, the group is pushing back against a bill, AB 412, that would require companies to disclose which copyrighted material they used to train generative AI models. The measure is championed by Hollywood unions seeking protections for artists and writers; the group’s analysis suggests it could cost the state at least $381 million in lost revenue.
Meta Platforms Inc. has also seeded a super PAC focused on the state’s AI sector with $20 million. The California Chamber of Commerce, representing many major tech firms, warns that overly restrictive rules banning minors’ access to chatbots could “easily push the AI industry out of California.”
As legislators reconvene in Sacramento on January 5, they insist that safety measures won’t stifle AI innovation. They view their role as establishing sensible ground rules for a rapidly growing, globally significant industry. For instance, one bill set to be amended in January—SB 300—aims to ensure chatbots do not provide sexually explicit content to minors. Another proposal, to be introduced this month, would impose a four-year moratorium on the sale of AI chatbot-powered toys to minors.
“I remember the auto industry claiming that airbags and seatbelts would end the industry and stifle productivity and innovation,” remarked Senator Steve Padilla, who authored legislation signed into law by Newsom last year requiring companies to inform users that they are interacting with artificial intelligence and to report suicidal behavior. “I think most people understand that there is a responsible, reasonable place for appropriate safeguards to protect everyone’s best interests.”
Top photo: California Gov. Gavin Newsom. Photographer: Michael M. Santiago/Getty Images.
Topics
California
InsurTech
Legislation
Data Driven
Artificial Intelligence