In the United States, the absence of a comprehensive federal AI framework has led several states to enact their own laws, many of which take effect at the start of 2026. These statutes focus on high‑risk and frontier artificial‑intelligence systems and impose obligations such as transparency notices, impact assessments, audit requirements and safety plans. California, Colorado, Texas and New York each introduce distinct but overlapping regimes that collectively shape a layered regulatory environment for companies operating in multiple jurisdictions.
California implements the AI Transparency Act (SB 942), which requires covered providers of generative‑AI systems to disclose when content is AI‑generated and to make detection tools available to users. In addition, SB 53, the Transparency in Frontier Artificial Intelligence Act, requires developers of frontier models that could cause catastrophic harm, defined as death or serious injury to more than fifty people or more than one billion dollars in damage, to file safety plans, conduct regular audits, and report high‑impact incidents to state authorities. The Generative AI Training Data Transparency Act (AB 2013) further expands disclosure duties by obligating providers to reveal the provenance of their training datasets.
Colorado adopts the AI Act (SB 205), now delayed until June 30, 2026, which emphasizes protection against algorithmic discrimination. High‑risk systems used in employment, credit, housing and other consequential decisions must be evaluated for bias, and developers and deployers must exercise reasonable care to prevent discriminatory outcomes. The law also requires periodic reporting and the establishment of internal governance structures.
Texas introduces the Responsible AI Governance Act (TRAIGA), effective January 1, 2026. TRAIGA focuses on the full lifecycle of high‑impact models, requiring developers to conduct red‑team exercises, implement continuous monitoring, and submit annual risk‑management reviews. The statute also establishes a state‑level oversight board to review compliance and enforce penalties for violations.
New York enacts the RAISE Act (Responsible AI Safety and Education Act), which targets frontier AI models with obligations that mirror California’s safety framework. Companies must submit audited safety assessments, maintain transparency reports, and ensure that any modifications to model outputs preserve factual integrity.
A December 2025 Executive Order issued by the White House directs the Commerce Secretary to evaluate state AI laws that may conflict with a national AI policy. The order sets a deadline of March 11, 2026, for identifying statutes that compel alterations to truthful AI outputs or that infringe on First Amendment rights. The Federal Trade Commission is also tasked with issuing an AI policy statement to guide industry practices. This federal initiative signals a tension between state‑driven innovation and the desire for a unified national approach.
Across the Atlantic, the European Union’s AI Act advances its risk‑based regime. Limited‑risk transparency obligations are already in force, while high‑risk requirements become applicable on August 2, 2026. The Act imposes strict conformity assessments, mandates documentation of training data, and levies fines of up to €35 million or 7% of global turnover for non‑compliance. These penalties create a powerful incentive for companies to align their AI systems with EU standards.
China, meanwhile, has instituted mandatory watermarking and labeling for AI‑generated content, effective September 2025. The regulations require that any generative output be clearly marked to prevent misinformation and to ensure accountability. Enforcement mechanisms include hefty fines and potential restrictions on the deployment of non‑compliant models.
The convergence of these diverse regimes creates a “layered compliance environment.” Companies operating across state lines in the United States must map the specific obligations that apply in each jurisdiction, implement automated compliance tooling, and allocate resources for ongoing audits and reporting. While the fragmented U.S. approach increases administrative overhead, it also encourages higher standards of safety and transparency in the short term.
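As a rough illustration of what such a jurisdiction‑to‑obligation mapping might look like in practice, the Python sketch below keys short obligation summaries by state and risk tier and looks them up for a given system profile. The statute names follow this article, but the `SystemProfile` fields, the risk‑tier taxonomy and the obligation strings are illustrative assumptions, not legal guidance.

```python
# Minimal sketch of a jurisdiction-to-obligation map for a multi-state compliance
# program. Obligation summaries and the SystemProfile fields are placeholders.
from dataclasses import dataclass

# Hypothetical obligation catalogue keyed by (jurisdiction, risk tier).
OBLIGATIONS = {
    ("CA", "frontier"): ["SB 53 safety plan", "incident reporting", "regular audits"],
    ("CA", "high_risk"): ["SB 942 AI disclosure", "AB 2013 training-data disclosure"],
    ("CO", "high_risk"): ["SB 205 bias evaluation", "reasonable-care documentation"],
    ("TX", "high_risk"): ["TRAIGA risk-management review", "continuous monitoring"],
    ("NY", "frontier"): ["RAISE audited safety assessment", "transparency report"],
}

@dataclass
class SystemProfile:
    name: str
    risk_tier: str            # e.g. "high_risk" or "frontier" (placeholder taxonomy)
    jurisdictions: list[str]  # states where the system is deployed

def applicable_obligations(profile: SystemProfile) -> dict[str, list[str]]:
    """Return the obligations that attach to this system in each state."""
    return {
        state: OBLIGATIONS.get((state, profile.risk_tier), [])
        for state in profile.jurisdictions
    }

if __name__ == "__main__":
    hiring_model = SystemProfile("resume-screener", "high_risk", ["CA", "CO", "TX"])
    for state, duties in applicable_obligations(hiring_model).items():
        print(state, "->", duties or ["no tier-specific duties mapped"])
```

In a real compliance program this table would be maintained by counsel and versioned alongside the statutes, with the lookup feeding audit schedules and reporting calendars rather than a simple print statement.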
Federal preemption, as envisaged by the Executive Order, could streamline obligations by establishing a national baseline that supersedes conflicting state provisions. However, such preemption may also slow the pace of state‑level innovation that currently drives rapid adoption of safety measures. Internationally, the EU’s harmonized framework and China’s content‑labeling rules pressure U.S. firms to adopt globally consistent governance practices to avoid fines and market access barriers.
For organizations seeking to navigate this complex landscape, key strategies include:
- Conducting a comprehensive inventory of AI systems to determine which models qualify as high‑risk or frontier under each jurisdiction (a classification sketch follows this list).
- Implementing continuous risk‑assessment processes that align with NIST’s AI Risk Management Framework and the specific disclosure requirements of California, Colorado, Texas and New York.
- Establishing cross‑functional governance teams that can respond to state audits, produce transparency reports, and maintain the documentation needed for EU conformity assessments.
- Deploying watermarking and labeling solutions to satisfy China’s generative‑AI regulations and to future‑proof content distribution pipelines (a labeling sketch also follows this list).
- Monitoring federal policy developments, particularly the outcomes of the March 2026 evaluation, to adjust compliance programs in anticipation of potential preemptive actions.
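The inventory classification mentioned in the first bullet could start from something as simple as the sketch below, which compresses the statutes’ definitions into placeholder rules: a compute threshold standing in for frontier‑model criteria and a set of consequential decision domains standing in for high‑risk criteria. Real classification would need review against the actual statutory text.

```python
# Illustrative classifier for an AI-system inventory. The decision rules are
# placeholders, not a reading of any statute.
from dataclasses import dataclass, field

CONSEQUENTIAL_DOMAINS = {"employment", "credit", "housing", "education", "healthcare"}

@dataclass
class InventoryEntry:
    system_name: str
    domains: set[str] = field(default_factory=set)  # decision contexts the system touches
    training_flops: float = 0.0                     # rough capability proxy (placeholder)

def classify(entry: InventoryEntry) -> str:
    """Assign a coarse risk tier used to route the entry to the right review track."""
    if entry.training_flops >= 1e26:   # placeholder frontier-model threshold
        return "frontier"
    if entry.domains & CONSEQUENTIAL_DOMAINS:
        return "high_risk"
    return "minimal_risk"

inventory = [
    InventoryEntry("resume-screener", domains={"employment"}),
    InventoryEntry("general-assistant", training_flops=2e26),
    InventoryEntry("internal-search", domains={"knowledge_base"}),
]
for item in inventory:
    print(item.system_name, "->", classify(item))
```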
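For the labeling bullet, one minimal approach is to attach both a visible notice and a machine‑readable metadata record to every generated output. The field names and label text below are illustrative assumptions and are not tied to any particular labeling standard.

```python
# Minimal sketch of pairing a visible label with machine-readable metadata for
# generated content, in the spirit of explicit/implicit labeling rules.
import json
from datetime import datetime, timezone

def label_generated_content(text: str, model_id: str) -> dict:
    """Wrap generated text with a user-facing notice and a metadata record."""
    metadata = {
        "ai_generated": True,
        "model_id": model_id,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    return {
        "display_text": f"[AI-generated content]\n{text}",  # explicit, user-facing label
        "metadata": json.dumps(metadata),                   # implicit, machine-readable label
    }

if __name__ == "__main__":
    out = label_generated_content("Quarterly summary draft...", model_id="demo-model-v1")
    print(out["display_text"])
    print(out["metadata"])
```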
In summary, 2026 marks a pivotal year for AI regulation, with U.S. states taking the lead in establishing detailed safety and transparency mandates, the European Union tightening its risk‑based enforcement, and China enforcing content‑labeling standards. Companies must adopt adaptable, multi‑jurisdictional compliance frameworks to mitigate legal risk, foster trust, and sustain innovation in an increasingly regulated AI ecosystem.
