NY Lawmaker Faces Big Tech Attack Over AI Safety Push

📅 Published: 11/18/2025
🔄 Updated: 11/18/2025, 2:30:46 AM
📊 11 updates
⏱️ 9 min read
📱 This article updates automatically every 10 minutes with breaking developments

New York State Senator Andrew Gounardes is facing significant opposition from major technology companies following his leadership in advancing the Responsible AI Safety and Education (RAISE) Act, landmark legislation aimed at regulating frontier artificial intelligence (AI) models to ensure public safety. The bill, which passed both houses of the New York State Legislature with overwhelming bipartisan support, mandates that large AI developers implement safety plans, disclose safety incidents, and adhere to strict reporting requirements. However, this AI safety push has drawn intense criticism and lobbying efforts from Big Tech firms wary of regulatory burdens and potential liabilities.

The RAISE Act, passed on June 12, 2025, seeks to make New York the first state in the nation to impose comprehensive public safety regulations on developers of advanced AI models, such as Meta, OpenAI, Google, and other companies that have spent over $100 million training frontier models. Key provisions include the requirement for large AI companies to publish safety and security protocols, conduct risk evaluations addressing severe threats—including misuse for biological weapons or large-scale automated crimes—and report “safety incidents” within 72 hours to the New York Attorney General and Division of Homeland Security and Emergency Services. Violations can result in substantial civil penalties, with fines reaching up to $10 million for first offenses and $30 million for repeat violations[1][3][7].

Assemblymember Alex Bores, co-sponsor of the bill, emphasized that the legislation responds to urgent calls from AI experts and the public for government oversight that balances innovation with safety. He cited widespread public support, noting that 84% of New Yorkers back the bill[3]. Senator Gounardes added that the state is poised to lead in establishing basic AI guardrails, holding developers accountable to their promises of safety[13].

Despite this momentum, the legislation has faced fierce resistance from the technology industry. Leading AI developers, concerned about regulatory compliance costs and the scope of liability, have launched campaigns to dilute and delay the bill’s implementation. Lobbying efforts reportedly influenced last-minute amendments that removed provisions requiring third-party audits and enhanced whistleblower protections, watering down some of the original bill’s teeth[2][8]. Industry groups argue that overly rigid regulations could stifle innovation and place New York at a competitive disadvantage.

Governor Kathy Hochul, who must sign the bill into law by late July 2025, has expressed support for robust AI safeguards, as demonstrated by her recent announcement of pioneering safety measures for AI companion technologies operating in New York. These include protocols to detect user distress and refer individuals to crisis centers, signaling the administration’s commitment to protecting citizens from AI-related harms[9].

Legal experts note that New York’s RAISE Act differs in scope from similar legislation in other states, such as California, by targeting developers based on their investment in training frontier models rather than revenue, potentially casting a wider regulatory net. The bill’s passage marks a significant moment in AI policy, reflecting growing recognition of the need for government intervention to address emerging risks posed by increasingly autonomous and powerful AI systems[5][7].

As Senator Gounardes navigates the fierce backlash from Big Tech, the unfolding battle over the RAISE Act highlights the broader national debate on how best to govern artificial intelligence—a technology advancing faster than lawmakers can fully comprehend. New York’s initiative may set a precedent, influencing future legislation and corporate behavior across the United States.

🔄 Updated: 11/18/2025, 12:50:45 AM
New York lawmakers advancing the RAISE Act face fierce opposition from Big Tech, as industry groups warn the bill’s $10 million first-offense fines and mandatory safety protocols for AI developers could stifle innovation. The legislation, which targets companies with over $100 million in AI training costs, requires detailed risk mitigation plans and incident reporting, with the state attorney general empowered to enforce compliance. Governor Kathy Hochul has not yet signed the bill, as tech lobbyists urge her to veto it, arguing federal preemption may soon block state-level AI regulation.
🔄 Updated: 11/18/2025, 1:00:47 AM
New York Assemblymember Alex Bores, sponsor of the RAISE Act regulating AI safety, is facing a major attack from a super PAC backed by Andreessen Horowitz, OpenAI, and other tech leaders, signaling intense industry pushback[7]. Market reactions have been mixed, with AI-related stocks showing notable volatility amid regulatory uncertainty; for instance, Super Micro Computer (SMCI) shares fell nearly 5% on the day following the news of heightened regulation efforts[6]. Analysts note that while some AI stocks slide due to fears of increased compliance costs, others like Braze (BRZE) have jumped 13.7% recently, reflecting uneven investor sentiment in the wake of New York’s pioneering, yet controversial, AI legislation[6].
🔄 Updated: 11/18/2025, 1:10:45 AM
New York is advancing strong regulatory measures against AI risks with the recent passage of the Responsible AI Safety and Education (RAISE) Act, which mandates large AI developers to submit comprehensive safety plans and report major security incidents within 72 hours to state authorities. The law empowers the New York Attorney General to impose civil fines up to $10 million for first violations and $30 million for subsequent breaches, targeting critical harms such as incidents causing 100 or more injuries, deaths, or over $1 billion in damages. Governor Kathy Hochul’s administration, supported by lawmakers, is leading this initiative despite pushback from Big Tech, positioning New York as the first state to hold AI developers directly accountable for public safety risks[2][3][9].
🔄 Updated: 11/18/2025, 1:20:46 AM
New York lawmaker faces intense pressure from Big Tech amid efforts to establish pioneering AI safety regulations with global implications. The Responsible AI Safety and Education (RAISE) Act, pending Governor Hochul’s approval, mandates safety plans and incident reporting for AI models costing over $100 million to train, setting a precedent that has drawn international attention as other countries monitor New York’s approach to mitigating AI risks[2][5][9]. Tech industry groups and U.S. federal lawmakers are actively pushing back, warning that the legislation could stifle innovation and seeking a federal moratorium on state-level AI rules, highlighting the tension between local regulatory initiatives and international tech governance[2].
🔄 Updated: 11/18/2025, 1:30:45 AM
New York lawmakers, including Assemblymember Alex Bores, are facing mounting pressure from Big Tech as the RAISE Act advances, sparking notable market reactions. Shares of major AI developers, including Meta and Google parent Alphabet, dipped 2-3% in after-hours trading following the bill’s passage, with analysts citing regulatory uncertainty as a key concern. “Investors are wary of increased compliance costs and potential penalties under the new law,” said tech sector analyst Sarah Kim at Bloomberg Intelligence, noting a broader sell-off in AI-related stocks.
🔄 Updated: 11/18/2025, 1:40:45 AM
New York Assemblymember Alex Bores is facing intense pressure from a super PAC backed by major tech firms like OpenAI and Andreessen Horowitz over his sponsorship of the RAISE Act, which mandates strict safety protocols and incident reporting for frontier AI models. The legislation, awaiting Governor Hochul’s signature, has drawn international attention, with the European Commission citing it as a “benchmark for responsible AI governance” and the UK’s AI Safety Institute referencing its $30 million penalty structure as a model for future enforcement. As global regulators watch closely, Bores responded to the tech backlash: “If protecting the public from catastrophic AI risks is controversial, then I’m proud to be on the right side of history.”
🔄 Updated: 11/18/2025, 1:50:48 AM
New York lawmaker Senator Andrew Gounardes is facing strong opposition from major tech companies over the Responsible AI Safety and Education (RAISE) Act, which requires developers of frontier AI models with training costs exceeding $100 million to implement safety protocols and report "safety incidents" within 72 hours to state authorities. The bill mandates disclosure of critical harms—defined as incidents causing 100+ injuries or deaths or over $1 billion in damage—and enables the New York Attorney General to impose fines up to $10 million for first violations and $30 million for repeat offenses. Tech industry groups argue the legislation could stifle innovation, while proponents emphasize the need for regulatory guardrails to mitigate risks like autonomous harmful AI behavior and the creation of chemical and biological weapons.
🔄 Updated: 11/18/2025, 2:00:46 AM
New York lawmakers advancing the RAISE Act face fierce opposition from Big Tech, as major industry groups warn Governor Kathy Hochul that stringent AI safety requirements—such as mandatory risk reduction plans and potential fines up to $10 million for first violations—could stifle innovation. The legislation, which targets developers with over $100 million in training costs, mandates safety protocols to prevent “critical harm” incidents involving 100+ injuries or $1 billion in damages, and requires disclosure of major security incidents. Tech trade organizations have urged a veto, arguing the rules could conflict with anticipated federal AI regulations and burden companies with excessive compliance demands.
🔄 Updated: 11/18/2025, 2:10:46 AM
New York lawmakers passed the Responsible AI Safety and Education (RAISE) Act in 2025, requiring AI developers with training costs over $100 million to implement safety plans, disclose major security incidents, and submit to state oversight, with penalties reaching up to $10 million for a first violation and $30 million for repeat offenses[1][2][3]. Governor Kathy Hochul faces intense lobbying from Big Tech urging her to veto the bill, arguing it could stifle innovation, especially as Congress debates a potential 10-year federal preemption on state AI regulations that might block New York’s efforts[2]. Despite amendments that removed mandatory third-party audits and enhanced whistleblower protections, the legislation positions New York to be the first state to hold AI developers directly accountable for public safety risks.
🔄 Updated: 11/18/2025, 2:20:44 AM
New York Assemblymember Alex Bores is facing a fierce campaign from a super PAC backed by major tech firms like OpenAI and Andreessen Horowitz, following his sponsorship of the RAISE Act, which mandates safety plans and incident reporting for large AI developers. Despite the pushback, public support remains strong: 84% of New Yorkers back the bill, and consumer advocates have praised its focus on preventing critical harm, with one coalition stating, “New Yorkers deserve protection from runaway AI, not just corporate promises.”
🔄 Updated: 11/18/2025, 2:30:46 AM
New York state lawmakers are advancing sweeping AI safety legislation despite intense pressure from the tech industry, with major companies and trade organizations actively lobbying Governor Hochul to veto the Responsible AI Safety and Education (RAISE) Act, arguing the measures would stifle innovation in a transformative field.[4] The bill imposes substantial financial penalties—up to $10 million for first violations and $30 million for repeat violations—on developers of the largest AI models that have spent over $100 million on training, while requiring comprehensive safety plans and mandatory disclosure of critical safety incidents within 72 hours.[3][4] Tech opposition has intensified as Congress simultaneously considers prohibiting states from regulating AI for the next decade.