
INSTANT AI UPDATE 51: AI SAFETY BECOMES OPERATIONAL


AI SAFETY STANDARDS, NOT SILOS: WHAT NEW YORK'S RAISE ACT MEANS FOR ORGANIZATIONS IN 2026

Artificial intelligence is reshaping work, research, and public policy faster than most institutions can adapt. In that shifting landscape, the recent passage of the Responsible AI Safety and Education Act (RAISE Act) in New York marks a clear inflection point: governments are no longer debating whether to oversee powerful AI systems; they're defining how to do so, balancing innovation and risk.


In December 2025, Governor Kathy Hochul signed the RAISE Act into law, creating a nation-leading framework requiring the most prominent developers of advanced AI to implement, document, and disclose robust safety and transparency protocols. 


This development is more than a matter of local regulation; it's a bellwether of how public expectations of AI accountability are evolving at a time when federal policy remains in flux.


What the RAISE Act actually requires

The RAISE Act focuses squarely on "frontier AI models": those that represent cutting-edge capability and systemic scale. The law applies to AI systems whose training involves exceptionally high compute costs (more than $100 million) and which are made available to New York residents.

Among its key provisions:

  • Safety & security protocols: Covered developers must draft and maintain written safety and security protocols designed to prevent critical harm, meaning events that could lead to widespread injury, death of 100+ individuals, or at least $1 billion in economic damage. These protocols are reviewed internally and retained for years.

  • Public disclosure: Redacted versions of these safety protocols must be published, and unredacted versions must be made available to the New York Attorney General and the Department of Financial Services upon request.

  • Incident reporting: If a developer learns of a safety incident (for example, model behavior that materially increases risk, or a model being exploited maliciously), it must report the incident to state authorities within 72 hours of discovery.

  • Enforcement & penalties: The state Attorney General can pursue civil actions for violations, with penalties up to $1 million for a first violation and up to $3 million for subsequent violations under the signed version of the law.


These obligations align with emerging global norms on high-risk AI governance, though the RAISE Act is more specific about which systems it covers and how compliance is enforced.
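
For teams thinking through what these obligations look like operationally, the short sketch below models the 72-hour reporting window and the penalty tiers described above as a simple internal tracker. It is an illustrative sketch only: the class, field, and function names are hypothetical, and nothing here is prescribed by the statute beyond the 72-hour deadline and the $1 million / $3 million ceilings.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

# Hypothetical illustration of the obligations summarized above. Only the
# 72-hour reporting window and the $1M / $3M penalty ceilings come from the
# provisions discussed in this post; every name and structure is an assumption.

REPORTING_WINDOW = timedelta(hours=72)       # report within 72 hours of discovery
FIRST_VIOLATION_CEILING = 1_000_000          # up to $1 million for a first violation
SUBSEQUENT_VIOLATION_CEILING = 3_000_000     # up to $3 million for subsequent violations


@dataclass
class SafetyIncident:
    description: str
    discovered_at: datetime
    reported_at: datetime | None = None

    def reporting_deadline(self) -> datetime:
        """Latest moment the incident can still be reported within the window."""
        return self.discovered_at + REPORTING_WINDOW

    def is_overdue(self, now: datetime) -> bool:
        """True if the incident is unreported and the 72-hour window has passed."""
        return self.reported_at is None and now > self.reporting_deadline()


def penalty_ceiling(prior_violations: int) -> int:
    """Maximum civil penalty exposure under the tiers described above."""
    return FIRST_VIOLATION_CEILING if prior_violations == 0 else SUBSEQUENT_VIOLATION_CEILING


if __name__ == "__main__":
    incident = SafetyIncident(
        description="Model behavior materially increasing risk",
        discovered_at=datetime(2026, 1, 5, 9, 0, tzinfo=timezone.utc),
    )
    print("Report to state authorities by:", incident.reporting_deadline().isoformat())
    print("Penalty ceiling, first violation: $", penalty_ceiling(prior_violations=0))
```

Even a lightweight tracker like this makes the core operational point visible: the clock starts at discovery, not at disclosure.
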


A framework, not a backstop: implications for organizations

Situational clarity

For many organizations, from universities adopting large language models for student advising to nonprofits deploying predictive analytics, the RAISE Act underscores the need for governance frameworks that can scale beyond pilot projects.


Unlike sector-specific AI laws (e.g., in healthcare or employment), the RAISE Act:

  • Targets development and deployment risk, not just use cases.

  • Applies to frontier systems, not every AI tool.

  • Emphasizes transparency and reporting over granular use restrictions. 


This means that institutions don't suddenly face new compliance obligations unless they develop or commercially distribute frontier AI systems. But they do operate in a climate where expectations of documentation, risk assessment, and accountability are rising fast.


Pros and cons

Pros

  • Raises the floor for safety practices among the most powerful AI developers.

  • Encourages companies to adopt robust risk management frameworks that can inform enterprise governance.

  • Signals to regulators nationwide that measured transparency, not prohibition, can coexist with innovation.

Cons

  • The law's applicability only to expensive frontier models may leave gaps around lower-cost systems that can still have significant impact when deployed.

  • Organizations that interface with frontier models must prepare for third-party oversight, which introduces operational complexity.

  • Without harmonized federal standards, a patchwork of state rules could create administrative overhead for multistate entities. 


What this means for education, research, and nonprofit sectors

The RAISE Act does not directly regulate higher education and research institutions unless they themselves develop and deploy qualifying frontier AI systems. However, there are critical indirect effects:


  • Vendor transparency: Academic deployments of AI systems from frontier developers (e.g., large language models) will likely benefit from enhanced safety disclosures, enabling better institutional review and risk assessment.

  • Curriculum and policy development: Universities teaching AI ethics and governance can now draw on a real-world statutory regime as a case study, bridging theory and practice.

  • Risk frameworks: Institutional review boards (IRBs), compliance committees, and data governance bodies can use the RAISE Act's structure as a template for internal policies.


Nonprofits, especially those engaging applied AI for service delivery or constituent engagement, should monitor vendor compliance and transparency reporting as part of their procurement and risk frameworks.
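
One practical way to act on that monitoring is a simple due-diligence record. The sketch below shows a hypothetical checklist structure a procurement or data-governance team might keep for each frontier-model vendor; the items loosely mirror the disclosures discussed above, but the schema, field names, and example vendor are assumptions rather than anything the RAISE Act specifies.

```python
from dataclasses import dataclass, asdict
import json

# Hypothetical vendor due-diligence checklist; the items loosely mirror the
# RAISE Act disclosures discussed above, but the schema itself is an assumption.

@dataclass
class VendorAIDisclosureReview:
    vendor_name: str
    model_name: str
    publishes_redacted_safety_protocol: bool      # redacted protocol publicly available?
    documents_incident_reporting_process: bool    # describes how safety incidents are handled?
    provides_safety_contact: bool                 # named contact for safety/compliance questions?
    notes: str = ""

    def gaps(self) -> list[str]:
        """Return the checklist items that are not yet satisfied."""
        checks = {
            "publishes_redacted_safety_protocol": self.publishes_redacted_safety_protocol,
            "documents_incident_reporting_process": self.documents_incident_reporting_process,
            "provides_safety_contact": self.provides_safety_contact,
        }
        return [name for name, ok in checks.items() if not ok]


if __name__ == "__main__":
    review = VendorAIDisclosureReview(
        vendor_name="ExampleFrontierLab",          # hypothetical vendor
        model_name="example-frontier-model-v1",    # hypothetical model
        publishes_redacted_safety_protocol=True,
        documents_incident_reporting_process=False,
        provides_safety_contact=True,
        notes="Requested incident-reporting documentation during procurement review.",
    )
    print(json.dumps(asdict(review), indent=2))
    print("Outstanding items:", review.gaps())
```

Keeping even a minimal record like this turns vendor transparency from a one-time procurement question into something that can be revisited as disclosures change.
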

For all sectors, this marks a shift toward accountability mechanisms that align with operational risk management, not just theoretical best practices.


From my perspective

The RAISE Act represents a maturing regulatory stance on artificial intelligence. It doesn't clamp down on development; it institutionalizes thoughtful risk assessment and disclosure, which in turn fosters trust and broader adoption. Unlike patchwork attempts to freeze innovation, this law is an invitation for organizations to embed governance before adverse incidents occur.


Policy and practice are now converging. Organizations that proactively build AI governance frameworks grounded in transparency, documented safety protocols, and incident readiness are better positioned for the next wave of regulatory evolution.


Conclusion

AI is no longer beyond the reach of law; it is squarely within the purview of public safety and accountability frameworks. New York's RAISE Act makes that clear by requiring frontier model developers to publish safety frameworks, promptly report incidents, and face enforcement if they miss the mark. This act marks a turning point in how states influence global AI norms and offers a model for institutions to shape their own AI governance strategies with confidence.
