
State AI Laws Shift From Paper Compliance to Runtime Enforcement

State governments across the United States have moved from a regulatory vacuum to a densely layered environment of AI laws that emphasize safety, transparency, nondiscrimination, and consumer protection. California leads with a suite of statutes that together create a comprehensive compliance regime, while Colorado, Texas, and Utah have adopted complementary frameworks that reinforce many of the same principles.

California’s approach is multifaceted. The AI Safety Act provides whistleblower protections and establishes a public AI cloud consortium, CalCompute, to foster responsible development. The AI Training Data Transparency law (AB 2013) obligates providers to publish high‑level summaries of the data used to train generative models and to embed watermarks and provenance tags in AI‑generated content, and it prohibits any modification that disables these disclosures. The HR and Automated Decision Systems (ADS) statute prohibits ADS tools from producing discriminatory impacts on protected groups, requires reasonable accommodations, holds employers accountable for vendor‑supplied ADS tools, and mandates a four‑year retention period for decision‑making records.
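To make the provenance obligation concrete, here is a minimal sketch in Python of attaching a tamper‑evident disclosure tag to generated content. The field names and JSON layout are hypothetical; the statute mandates disclosures but does not prescribe a schema.

```python
import hashlib
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class ProvenanceTag:
    """Hypothetical provenance record attached to AI-generated content."""
    provider: str
    model: str
    generated_at: str
    content_sha256: str

def tag_content(content: str, provider: str, model: str) -> dict:
    """Bundle content with a provenance tag; the tag travels with the content."""
    tag = ProvenanceTag(
        provider=provider,
        model=model,
        generated_at=datetime.now(timezone.utc).isoformat(),
        content_sha256=hashlib.sha256(content.encode()).hexdigest(),
    )
    return {"content": content, "provenance": asdict(tag)}

def verify_tag(bundle: dict) -> bool:
    """Detect tampering: recompute the hash and compare it to the embedded tag."""
    tag = bundle.get("provenance")
    if tag is None:
        return False  # disclosure was stripped, so the bundle is non-compliant
    digest = hashlib.sha256(bundle["content"].encode()).hexdigest()
    return digest == tag["content_sha256"]

bundle = tag_content("A generated paragraph.", "ExampleAI", "example-model-1")
print(json.dumps(bundle["provenance"], indent=2))
print("tag intact:", verify_tag(bundle))
```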

Separate legislation targets market practices. The Common Pricing Algorithm prohibition strengthens antitrust oversight by outlawing AI‑driven price‑fixing tools. The RAISE Act (Responsible AI Safety and Education Act) imposes rigorous safety policies on developers with high training costs, and levies penalties of up to $10 million for first offenses and $30 million for repeat violations. Additional statutes require anti‑addiction labeling for platforms that employ infinite‑scroll designs, mandate disclosures for synthetic performers in advertising, and expand right‑of‑publicity protections to AI‑generated likenesses of deceased individuals.

Two groundbreaking bills, SB 243 and AB 489, shift the focus from static compliance documents to real‑time enforcement. SB 243, aimed at companion AI chatbots, demands continuous runtime disclosure that repeatedly reminds users—especially minors—that they are interacting with an artificial system. It also obliges operators to intervene when conversations turn toward self‑harm, halting harmful patterns and directing users to crisis resources. Beginning in 2027, operators must report the frequency and effectiveness of these safeguards, and a private right of action gives individuals a direct avenue for redress.
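What runtime disclosure and intervention might look like in practice is sketched below, assuming a hypothetical `generate_reply` model call and a naive keyword screen. A production system would use a tuned classifier and clinically reviewed crisis resources, and SB 243 itself does not fix a reminder cadence, so the numbers here are illustrative.

```python
import re

AI_DISCLOSURE = "Reminder: you are chatting with an AI, not a person."
CRISIS_MESSAGE = (
    "It sounds like you may be going through something difficult. "
    "If you are in the US, you can call or text 988 to reach the crisis line."
)
# Naive keyword screen; a real deployment would use a tuned classifier.
SELF_HARM_PATTERN = re.compile(r"\b(hurt myself|suicide|self-harm)\b", re.IGNORECASE)

def generate_reply(history: list[str], user_message: str) -> str:
    """Stand-in for the real model call (hypothetical)."""
    return f"(model reply to: {user_message})"

def respond(history: list[str], user_message: str, minor: bool) -> str:
    # Intervene before generation when the conversation turns toward self-harm.
    if SELF_HARM_PATTERN.search(user_message):
        return CRISIS_MESSAGE
    reply = generate_reply(history, user_message)
    # Re-disclose on a fixed cadence, more often for minors; SB 243 requires
    # "frequent" reminders without defining the interval.
    cadence = 3 if minor else 5
    if len(history) % cadence == 0:
        reply = f"{AI_DISCLOSURE}\n\n{reply}"
    return reply

print(respond([], "hello there", minor=True))
```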

AB 489 extends similar principles to health‑adjacent AI, prohibiting any representation that suggests licensed medical expertise unless that claim is factual. Even subtle cues that could mislead a reasonable user about a system’s clinical authority are prohibited, and enforcement is coordinated with professional licensing boards.
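One plausible enforcement pattern is a pre‑release scan for credential‑implying language. The phrase list below is illustrative only; the statute does not enumerate terms, so a real system would maintain a reviewed and regularly updated list.

```python
# Illustrative phrases that imply licensed clinical authority.
CREDENTIAL_CLAIMS = (
    "as your doctor",
    "as a licensed physician",
    "my medical license",
    "i am a nurse",
)

def blocks_medical_claim(output: str) -> bool:
    """Flag candidate outputs that assert licensed medical expertise."""
    lowered = output.lower()
    return any(phrase in lowered for phrase in CREDENTIAL_CLAIMS)

assert blocks_medical_claim("As your doctor, I recommend this dosage.")
assert not blocks_medical_claim("General wellness info; consult a clinician.")
```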

California’s Transparency in Frontier Artificial Intelligence Act (TFAIA) targets large‑scale, high‑impact models. Developers must publish “Frontier AI Frameworks” that outline risk‑identification and mitigation strategies for catastrophic scenarios, defined as incidents causing injury to more than 50 people or exceeding $1 billion in property damage. The frameworks must align with national and international standards, include third‑party audits, and safeguard unreleased model weights. Critical incident reporting is required for any unauthorized access, loss of control, or use of deceptive methods to bypass safety controls.
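Internally, a developer might back those filings with a structured record like the following sketch. The field names are hypothetical; TFAIA defines what must be reported, not how it is stored.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class IncidentType(Enum):
    UNAUTHORIZED_ACCESS = "unauthorized_access"
    LOSS_OF_CONTROL = "loss_of_control"
    SAFETY_BYPASS = "deceptive_safety_bypass"

@dataclass
class CriticalIncidentReport:
    """Internal record backing a statutory critical-incident filing."""
    incident_type: IncidentType
    model_id: str
    detected_at: str
    description: str
    weights_exposed: bool  # unreleased weights are specifically protected
    mitigations: list[str] = field(default_factory=list)

report = CriticalIncidentReport(
    incident_type=IncidentType.UNAUTHORIZED_ACCESS,
    model_id="frontier-model-v2",
    detected_at=datetime.now(timezone.utc).isoformat(),
    description="Credential stuffing against the weights store; access revoked.",
    weights_exposed=False,
    mitigations=["rotated keys", "enabled hardware MFA"],
)
print(report)
```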

Beyond California, Texas has enacted the Responsible Artificial Intelligence Governance Act (TRAIGA), which bans AI systems designed to incite self‑harm, facilitate unlawful discrimination, or generate illegal deepfakes. The law also mandates disclosures when government agencies or healthcare providers use AI that interacts directly with consumers.

Utah’s Artificial Intelligence Policy Act requires conspicuous disclosures whenever licensed professionals employ generative AI in client interactions. For high‑risk engagements involving sensitive data or significant personal decisions, the act demands mandatory verbal or electronic disclosures and explicitly prevents companies from evading liability by blaming the AI itself.

Colorado’s AI Act, effective June 30, 2026, introduces a “reasonable care” standard for both developers and deployers of high‑risk systems. The statute obliges them to protect consumers from foreseeable algorithmic discrimination, though it leaves the exact metrics for compliance to be defined through future enforcement and litigation.
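Although the statute leaves metrics open, one widely used screen for disparate impact is the “four‑fifths rule” from US employment practice: a protected group’s selection rate should be at least 80 percent of the most favored group’s. The sketch below illustrates that conventional screen, not anything Colorado has mandated.

```python
def selection_rates(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """outcomes maps group -> (selected, total applicants)."""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_check(outcomes: dict[str, tuple[int, int]]) -> dict[str, float]:
    """Return each group's impact ratio versus the highest-rate group.
    Ratios below 0.8 are a conventional red flag, not a legal verdict."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

ratios = four_fifths_check({"group_a": (48, 100), "group_b": (30, 100)})
print(ratios)  # group_b: 0.625 -> below the 0.8 screen, warrants review
```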

Collectively, these state statutes represent a philosophical shift from compliance‑by‑documentation to compliance‑by‑runtime. Regulators now require live, observable behavior—intercepting unsafe or misleading outputs before they reach users. This demands robust runtime control mechanisms, continuous monitoring, and dynamic adjustment of AI behavior as regulatory expectations evolve.

At the federal level, a December 2025 executive order has introduced tension by directing the Secretary of Commerce to identify state laws that conflict with national policy, particularly those that compel alterations to truthful AI outputs or impose disclosure regimes that may infringe on First Amendment rights. The order preserves state authority over child safety, AI infrastructure, and government procurement, suggesting that preemption concerns will focus on content‑alteration mandates while allowing states to retain power over safety‑oriented regulations.

The emerging landscape presents several unresolved questions. Courts will need to interpret “catastrophic risk” thresholds in TFAIA, define the scope of “reasonable care” in Colorado, and determine how “frequent reminders” for minors under SB 243 are quantitatively measured. The FTC’s forthcoming guidance on preemption will likely target state statutes that require AI output modifications, potentially reshaping the balance between state and federal oversight.

For organizations operating in this environment, compliance does not require rebuilding models from scratch but rather implementing runtime interception capabilities that can filter unsafe, misleading, or non‑compliant content in real time. Building such controls, documenting their effectiveness, and preparing for private rights of action will be essential steps toward meeting the rigorous standards now set by state lawmakers.
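As a closing illustration, here is a minimal sketch of such an interception layer. The policy predicates are toy stand‑ins; a real deployment would chain classifiers, fail closed, and retain its logs to support the effectiveness reporting that statutes like SB 243 require.

```python
import logging
from typing import Callable

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("runtime-compliance")

# Each policy pairs a name with a predicate that flags non-compliant output.
# These predicates are toy stand-ins for real classifiers.
Policy = tuple[str, Callable[[str], bool]]

POLICIES: list[Policy] = [
    ("medical_credential_claim",
     lambda s: "as your doctor" in s.lower()),
    ("undisclosed_synthetic_persona",
     lambda s: "i am a real person" in s.lower()),
]

SAFE_FALLBACK = "This response was withheld by an automated compliance check."

def intercept(output: str) -> str:
    """Run every policy over a candidate output; block and log on the first hit."""
    for name, violates in POLICIES:
        if violates(output):
            # The log line doubles as evidence for effectiveness reporting.
            log.info("blocked output under policy %s", name)
            return SAFE_FALLBACK
    return output

print(intercept("As your doctor, I advise doubling the dose."))
print(intercept("Here is some general information."))
```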