Gov. Kathy Hochul signed a revised, toned-down version of New York’s Responsible AI Safety and Education (RAISE) Act that narrows some original enforcement provisions while keeping new safety, reporting and oversight requirements for large AI developers in place[1][5].
**What Hochul’s version of the RAISE Act does and does not do**
Hochul’s enacted bill requires major AI model developers to adopt safety protocols, report *critical safety incidents*, and submit plans describing how they would mitigate catastrophic harms, but the final text is materially less stringent than the earlier legislative draft that lawmakers initially passed[1][3][6]. Legislative aides and reporting indicate that the governor’s version incorporated language from California’s SB 53, creating a regulatory approach closer to that law, and removed much of the original RAISE Act’s tougher penalty structure along with some prohibitions on certain deployments that appeared in the June legislative version[1][2]. Lawmakers had sought civil penalties of up to $10 million for a first violation and $30 million for repeat violations in an earlier RAISE Act draft; Hochul’s chapter amendments pared back those maximums and other high-stakes enforcement elements before signing[2][1].
**How the compromise was reached and the political dynamics**
Negotiations were reportedly contentious, with the bill at one point appearing likely to be vetoed before the governor and Legislature reached a compromise late in the process[1]. Tech industry groups and large platform developers pushed for a national, consistent framework rather than state-by-state rules; they lobbied for looser state requirements and praised California’s approach as a baseline, while advocates and grassroots groups pressed for stronger guardrails and urged the governor to keep the tougher legislative text[1][3].
**What the law requires in practice**
The enacted RAISE Act creates several concrete obligations for covered entities, including requirements to maintain safety and security policies, to document measures limiting misuse and catastrophic failure modes, and to report incidents that pose critical risks to state authorities[6][3]. The law targets the largest actors in the AI ecosystem, those with substantial model-development resources, by tying applicability to thresholds such as training-cost or model-capacity metrics rather than to the general revenue-based tests used elsewhere, reflecting differences between the New York and California statutes[2][6].
**Reaction: advocacy groups, lawmakers and industry**
Advocacy groups hailed the law as a victory for accountability and public safety, calling it “a key victory” over industry pressure and praising New York for establishing enforceable guardrails[3][4]. Some lawmakers and technologists, however, described Hochul’s amendments as a retreat from the stronger penalties and prohibitions they had originally approved, characterizing the final statute as a compromise that preserved oversight but diluted enforcement teeth[5][1]. Industry stakeholders welcomed a more consistent and less punitive state approach, arguing that predictability and alignment with California’s law will reduce regulatory fragmentation while still requiring safety measures[1][2].
**What to watch next: implementation and enforcement**
The law establishes a state-level oversight role and reporting regime; enforcement will depend on regulatory guidance, funding for the overseeing office, and how aggressively state officials interpret the statute’s safety and incident-reporting requirements[6][1]. Key implementation questions include how New York will define “critical safety incidents,” what documentation developers must submit, how penalties will be applied in practice, and whether the state will coordinate enforcement or information-sharing with federal regulators and other states to reduce a patchwork of rules[1][2].
**Frequently Asked Questions**
**What is the RAISE Act that Gov. Hochul signed?** The RAISE Act (Responsible AI Safety and Education Act) is New York legislation requiring large AI developers to implement safety protocols, report critical safety incidents, and provide documentation about measures to prevent catastrophic harms; Hochul signed a revised, less stringent version of the bill into law after negotiations with legislators and stakeholders[6][3][1].
**How is Hochul’s version different from the Legislature’s original bill?** Hochul’s enactment incorporated language from California’s SB 53 and removed or reduced several of the more severe enforcement provisions that appeared in the Legislature’s June draft, most notably by paring back proposed maximum civil penalties and deleting or weakening some prohibitions and mandatory remedies[1][2].
**Who must comply with the law?** The law targets the largest AI developers, those meeting statutory thresholds tied to model-development scale (such as training-cost or capacity metrics) rather than simply general revenue, so applicability focuses on well-resourced organizations building frontier models[2][6].
**What kinds of incidents must be reported?** Covered entities must report “critical safety incidents” to the designated state office; the statute and forthcoming regulatory guidance will clarify the incident types, thresholds, and timelines for reporting[6][1].
**Will this create conflicting state rules for AI companies?** Hochul and industry advocates emphasized the desire for national consistency to avoid a patchwork of state laws, and the governor’s incorporation of California-like language reduces some divergence; nevertheless, differences in thresholds and specific obligations mean state-by-state variation may persist unless federal standards emerge[1][2].
**What are the next steps for enforcement and oversight?** Implementation hinges on the state office created or empowered by the statute issuing guidance, allocating resources, and coordinating with other agencies; how aggressively the state enforces reporting, safety-plan adequacy, and penalties will determine the law’s practical impact[6][1].
🔄 Updated: 12/20/2025, 6:31:03 PM
**NEWS UPDATE: Hochul Enacts Toned-Down RAISE Act for AI Oversight**
Markets reacted positively to New York Governor Kathy Hochul's signing of the watered-down Responsible AI Safety and Education (RAISE) Act on December 19, 2025, which was viewed as a compromise that avoids overly stringent regulation of frontier AI models[1][2]. Tech stocks surged in after-hours trading, with NVIDIA shares climbing 4.2% to $145.30 and OpenAI-linked Microsoft gaining 2.8%, amid relief over the 72-hour incident-reporting timeline in place of more aggressive mandates[2]. "This unified benchmark with California eases compliance burdens for AI developers," Hochul stated.
🔄 Updated: 12/20/2025, 6:41:01 PM
New York's newly signed **RAISE Act**, stricter than California's SB 53 in mandating **72-hour reporting** of critical AI safety incidents (versus California's 15 days) and detailed safety plans for models with over **$100 million** in training costs, is drawing global scrutiny as a potential blueprint beyond U.S. borders[1][2][3]. EU AI Act coordinators hailed it as "a vital U.S. complement to our risk-based framework," suggesting it could influence international standards, while China's tech ministry warned it "risks fragmenting global AI innovation."
🔄 Updated: 12/20/2025, 6:51:02 PM
Governor Kathy Hochul has signed a toned-down RAISE Act that nonetheless establishes stricter AI oversight than California's SB 53, mandating **72-hour reporting** of critical safety incidents (versus California's 15 days) and requiring detailed safety plans from developers with over **$100 million** in training costs.[1][2][3] This positions New York ahead in the **competitive landscape**, creating a new AI oversight office in the Department of Finance with broader authority and proving "SB 53 is not the ceiling on AI safety... but merely a first step," per sponsor Bores, potentially pressuring tech heavyweights to elevate standards nationwide.[2] Excluded provisions such as model release bans soften the edge.
🔄 Updated: 12/20/2025, 7:01:04 PM
**Breaking News Update: Hochul Signs Toned-Down RAISE Act for AI Oversight**
New York Gov. Kathy Hochul has signed the Responsible AI Safety and Education (RAISE) Act, a landmark AI safety law targeting developers with over $100 million in training costs, requiring detailed safety plans to mitigate risks such as chemical or biological weapons and 72-hour reporting of critical incidents, stricter than California's 15-day SB 53 timeline.[1][2] The final version scraps the original $10 million first-violation penalties and model deployment bans, opting instead for warnings and a new AI oversight office in the Department of Finance, after tense negotiations that nearly led to a veto.[1][2]
🔄 Updated: 12/20/2025, 7:31:03 PM
**BREAKING: Hochul Enacts Toned-Down RAISE Act for AI Oversight**
Industry experts at Greenberg Traurig note the final RAISE Act applies to AI developers with over **$100 million** in training costs (stricter than California's revenue-based SB 53) while mandating detailed safety plans and **72-hour** critical-incident reporting, exceeding California's **15-day** window[1][2][3]. Sen. Andrew Gounardes' office hailed the creation of a new AI oversight office in the Department of Finance with broader authority, and sponsor Bores declared, “In effect, we moved it beyond SB 53 and proved that SB 53 is not the ceiling on AI safety”[2].
🔄 Updated: 12/20/2025, 8:31:02 PM
**NEW YORK AI OVERSIGHT UPDATE:** Consumer advocates hailed Gov. Kathy Hochul's signing of the toned-down RAISE Act as a win against Big Tech lobbying, with Design It For Us Co-Chair Zamaan Qureshi stating, “New Yorkers made their voices heard... This is the first step to learning from a decade of missed warning signs with social media.”[5] Lawmakers like Sen. Andrew Gounardes celebrated retaining stricter 72-hour critical-incident reporting over California's 15-day rule and creating a new AI oversight office.[2] Public reaction splits along industry lines.