State AI laws shift compliance from policy binders to runtime control

The landscape of artificial intelligence regulation in the United States has shifted dramatically as multiple states enacted comprehensive statutes that move enforcement from abstract policy statements to concrete runtime behavior. California, New York, Colorado and Texas each introduced distinct frameworks that together create a mosaic of safety, disclosure and anti‑discrimination obligations for developers and operators of AI systems.

California’s Senate Bill 243 establishes three core duties for conversational AI. First, operators must provide continuous disclosure that users are interacting with a machine, not a human. The law requires repeated reminders throughout a session whenever a reasonable person could be misled, and it adds specific prompts for minors to encourage periodic breaks. Second, the bill mandates self‑harm intervention protocols: AI must detect expressions of suicidal intent, halt harmful dialogue, and present users with crisis‑support resources. Third, starting in 2027, providers must submit periodic reports describing how often safeguards were triggered and how they performed. The statute also creates a private right of action, giving individuals the ability to sue for non‑compliance.
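To make these duties concrete, the sketch below shows one way a chatbot operator might wire a disclosure-reminder cadence and a self-harm intervention check into the response path. The function names, the ten-turn cadence, and the keyword heuristic are illustrative assumptions, not requirements drawn from the statute; a production system would rely on a trained classifier and counsel-reviewed resource language.

```python
# Hypothetical sketch of SB 243-style runtime safeguards: a periodic
# AI-disclosure reminder and a self-harm intervention check. Names,
# cadence, and the keyword heuristic are illustrative, not statutory.

CRISIS_RESOURCE = (
    "It sounds like you may be going through a difficult time. "
    "In the U.S. you can call or text 988 to reach the Suicide & Crisis Lifeline."
)
SELF_HARM_MARKERS = ("want to die", "kill myself", "end my life", "hurt myself")


def needs_ai_disclosure(turns_since_reminder: int, cadence: int = 10) -> bool:
    """Remind the user they are talking to a machine every `cadence` turns."""
    return turns_since_reminder >= cadence


def detect_self_harm(user_message: str) -> bool:
    """Crude keyword heuristic; real deployments would use a classifier."""
    text = user_message.lower()
    return any(marker in text for marker in SELF_HARM_MARKERS)


def guard_turn(user_message: str, model_reply: str, turns_since_reminder: int) -> tuple[str, int]:
    """Apply both safeguards before the reply reaches the user."""
    if detect_self_harm(user_message):
        # Halt the harmful dialogue and surface crisis-support resources.
        return CRISIS_RESOURCE, 0
    if needs_ai_disclosure(turns_since_reminder):
        return "Reminder: you are chatting with an AI, not a person.\n\n" + model_reply, 0
    return model_reply, turns_since_reminder + 1
```

Counting how often each branch fires would also generate the raw material for the periodic safeguard reports the bill requires from 2027.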

In parallel, California’s Assembly Bill 489 targets health‑related AI claims. It bars any system from presenting itself as “doctor‑level” or otherwise implying medical expertise unless the claim is verifiable. Violations can trigger action by professional licensing boards, extending enforcement beyond civil courts.

New York’s Responsible Artificial Intelligence Safety and Education (RAISE) Act takes a broader view of risk by focusing on frontier models. Developers are required to conduct annual safety audits, engage independent third‑party reviewers, and publish redacted safety protocols. If a model poses an “unreasonable risk of critical harm” — defined as causing death or serious injury to 100 or more people, incurring at least $1 billion in damage, or enabling the creation of weapons of mass destruction — the law forbids its deployment. Any safety incident must be reported within 72 hours, and the state may impose fines for non‑reporting.

Colorado’s AI Act, effective June 30, 2026, emphasizes algorithmic discrimination. Both developers and deployers of high‑risk AI must exercise reasonable care to prevent discriminatory outcomes in areas such as housing, employment and credit. The statute requires documentation of risk and impact assessments, with enforcement vested in the state attorney general rather than private suits.

Texas’s Responsible Artificial Intelligence Governance Act (TRAIGA) bans AI systems designed to incite self‑harm, produce unlawful deepfakes, or facilitate unlawful discrimination. The law also obliges government agencies and healthcare providers to disclose AI use to the public, ensuring transparency in public‑sector interactions.

A federal executive order issued in January 2026 adds another layer of complexity. The order tasks the Secretary of Commerce with evaluating state AI statutes for conflicts with national policy, with a deadline of March 11, 2026. It signals that requirements compelling AI to alter truthful outputs or infringe on First‑Amendment rights may be preempted, while preserving state authority over child safety, AI infrastructure permitting and government procurement. Consequently, states must anticipate potential revisions to their laws based on the forthcoming federal guidance.

Collectively, these statutes represent a paradigm shift from compliance binders to runtime control. Regulators now focus on what the AI actually says at the moment of interaction. Companies can meet many obligations by implementing interception layers that flag or modify unsafe, misleading, or non‑compliant outputs before they reach users, rather than rebuilding models from scratch. This approach aligns with industry commentary that compliance “requires control at runtime — essentially, the ability to intercept unsafe, misleading or noncompliant outputs before they reach users.”
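As a rough illustration of what such an interception layer can look like, the sketch below wraps a hypothetical generate() callable with policy checks that either pass the output through or withhold it and record the violation. The check functions and labels are placeholders, not any vendor's actual API; real deployments would substitute classifiers or rule engines tuned to each statute.

```python
# Minimal sketch of a runtime interception layer, assuming a hypothetical
# generate() callable for the underlying model. The checks are placeholders;
# real systems would plug in classifiers or rule engines per statute.

from typing import Callable, Optional

PolicyCheck = Callable[[str], Optional[str]]  # returns a violation label or None


def flag_health_claim(output: str) -> Optional[str]:
    # AB 489-style concern: unverified claims of medical expertise.
    return "health_claim" if "doctor-level" in output.lower() else None


def flag_unsafe(output: str) -> Optional[str]:
    # Placeholder for safety classifiers (self-harm content, deepfake help, etc.).
    return None


def intercept(output: str, checks: list[PolicyCheck]) -> tuple[str, list[str]]:
    """Run every policy check and withhold the output if any check fires."""
    violations = [label for check in checks if (label := check(output)) is not None]
    if violations:
        return "This response was withheld by a compliance filter.", violations
    return output, violations


def respond(prompt: str, generate: Callable[[str], str]) -> str:
    """Wrap the model call so no raw output reaches the user unchecked."""
    raw = generate(prompt)
    safe_output, violations = intercept(raw, [flag_health_claim, flag_unsafe])
    # Violation counts could feed the periodic safeguard reports (SB 243)
    # or incident reporting (RAISE Act), depending on internal policy.
    return safe_output
```

The appeal of this design is that the model itself stays untouched: the compliance logic lives in the wrapper and can be revised as statutes, or the federal preemption review, change.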

Internationally, the European Union’s AI Act entered phased implementation in August 2025, imposing transparency duties on general‑purpose AI (GPAI) providers, including detailed disclosures of training data. For multinational firms, aligning EU GPAI obligations with U.S. state requirements adds another dimension to compliance planning, especially where disclosure formats differ.

In practice, organizations must craft layered governance programs that address continuous disclosure, self‑harm detection, health‑claim accuracy, discrimination risk assessments, and critical‑harm safeguards. They must also monitor the federal preemption evaluation for changes that could nullify or modify state mandates. Effective compliance will hinge on flexible, technology‑driven controls that can be updated as the regulatory environment evolves.
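One way to keep such a layered program legible is to map each obligation family to the control intended to satisfy it, so that individual controls can be swapped or retuned as the rules shift. The mapping below is a hypothetical illustration; the statute names come from this article, while the control names are placeholders rather than anything prescribed by law.

```python
# Illustrative (hypothetical) mapping of obligation families to runtime
# controls. Statute names follow the article; control names are placeholders.

GOVERNANCE_CONTROLS = {
    "continuous_disclosure": {"statutes": ["CA SB 243"], "control": "ai_disclosure_reminder"},
    "self_harm_detection": {"statutes": ["CA SB 243", "TX TRAIGA"], "control": "crisis_intervention_filter"},
    "health_claim_accuracy": {"statutes": ["CA AB 489"], "control": "medical_claim_validator"},
    "discrimination_risk": {"statutes": ["CO AI Act", "TX TRAIGA"], "control": "bias_impact_assessment"},
    "critical_harm_safeguards": {"statutes": ["NY RAISE Act"], "control": "frontier_audit_and_incident_reporting"},
}


def controls_for(statute: str) -> list[str]:
    """List the runtime controls a given statute touches."""
    return [entry["control"] for entry in GOVERNANCE_CONTROLS.values()
            if statute in entry["statutes"]]
```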