    What the nation’s strongest AI regulations change in 2026, according to legal experts

    By Michael Comaous | January 16, 2026 | 8 min read

    Image: imaginima/iStock/Getty Images Plus via Getty Images



    ZDNET’s key takeaways 

    • Two major state AI laws are in effect as federal AI regulation remains unclear.
    • Trump has renewed attacks on state AI legislation. 
    • Researchers are still dissatisfied with current laws. 

    We’re a few weeks into 2026, and the Trump administration has yet to propose AI legislation at the federal level. 

    At the same time, first-of-their-kind AI safety laws in California and New York — both states well-positioned to influence tech companies — have gone into effect. But a December executive order and its ensuing task force have renewed attacks on state AI laws. What do the new laws mean in practice, and can they survive scrutiny at the federal level? 


    SB 53 and the RAISE Act

    California SB 53, the new AI safety law that took effect on January 1, requires model developers to publicly disclose how they will mitigate the most serious risks posed by AI, and to report safety incidents involving their models, with fines of up to $1 million for noncompliance. Though less sweeping than earlier legislation attempted in the state, the new law is practically the only one of its kind in a largely unregulated AI landscape. It was recently joined by New York's RAISE Act, passed at the end of December, which closely resembles the California law. 

    The RAISE Act also lays out reporting requirements for safety incidents involving models of all sizes, but carries a higher maximum fine of $3 million after a company's first violation. And while SB 53 gives companies 15 days to notify the state of a safety incident, RAISE requires notification within 72 hours. 


    SB 1047, an earlier version of SB 53, would have required AI labs to safety-test models that cost more than $100 million to train, and to develop a shutdown mechanism, or kill switch, to control them should they misbehave. That bill failed in the face of arguments that it would stifle job creation and innovation, a common response to regulation efforts, especially from the current administration. 

    SB 53 uses a lighter hand. Like the RAISE Act, it targets companies with gross annual revenue of more than $500 million, a threshold that exempts many smaller AI startups from the law’s reporting and documentation requirements. 
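    For illustration only, the headline parameters of the two laws as described above can be sketched as a small comparison. This is a hypothetical summary written for this article, not legal guidance; the class and function names are invented, and the figures are simply those reported in the text:

    ```python
    from dataclasses import dataclass

    @dataclass
    class AISafetyLaw:
        """Headline parameters of a state AI safety law, per the article above."""
        name: str
        revenue_threshold_usd: int   # gross annual revenue above which the law applies
        max_fine_usd: int            # upper fine threshold
        notify_within_hours: int     # deadline to report a safety incident

    SB_53 = AISafetyLaw("California SB 53", 500_000_000, 1_000_000, 15 * 24)
    RAISE = AISafetyLaw("New York RAISE Act", 500_000_000, 3_000_000, 72)

    def covered(law: AISafetyLaw, gross_annual_revenue_usd: int) -> bool:
        """A company falls under the law's reporting and documentation
        requirements only above the revenue threshold."""
        return gross_annual_revenue_usd > law.revenue_threshold_usd

    # A smaller AI startup is exempt; a large lab is covered:
    print(covered(SB_53, 50_000_000))    # False
    print(covered(RAISE, 600_000_000))   # True
    ```

    The shared $500 million threshold is what exempts smaller startups from both laws, while the fine ceilings and notification windows are where the two diverge.
    
    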

    “It’s interesting that there is this revenue threshold, especially since there has been the introduction of a lot of leaner AI models that can still engage in a lot of processing, but can be deployed by smaller companies,” data protection lawyer Lily Li, who founded Metaverse Law, told ZDNET. She noted that Gov. Gavin Newsom vetoed SB 1047, in part, because it would impose growth-inhibiting costs on smaller companies, a concern also echoed by lobbying groups. 


    “I do think it’s more politically motivated than necessarily driven by differences in the potential harm or impact of AI based on the size of the company or the size of the model,” she said of the threshold. 

    Compared with SB 1047, SB 53 focuses more on transparency, documentation, and reporting than on actual harm. The law creates requirements for guardrails around catastrophic risks: cyber, chemical, biological, radiological, and nuclear weapon attacks; bodily harm; assault; or situations where developers lose control of an AI system. 

    Renewed limits on state AI legislation 

    The current administration and companies arguing against AI safety regulation, especially at the federal level, say it would slow development, harm jobs in the tech sector, and cede ground in the AI race to countries like China. 

    On Dec. 11, President Trump signed an executive order stating a renewed intention to centralize AI laws at the federal level to ensure US companies are “free to innovate without cumbersome regulation.” The order argues that “excessive State regulation thwarts this imperative” by creating a patchwork of differing laws, some of which it alleges “are increasingly responsible for requiring entities to embed ideological bias within models.” 


    It also announced an AI Litigation Task Force to challenge state laws that are inconsistent with a “minimally burdensome national policy framework for AI.” This renews an attempt by Congress this past summer to ban states from passing AI regulations for 10 years, which would have withheld broadband and AI infrastructure funds from states that did not comply. The moratorium was defeated in a landslide, temporarily preserving states’ rights to legislate AI in their territory. 

    Last week, CBS News reported that the Justice Department was forming the task force. An internal memo reviewed by CBS News said it will be led by US Attorney General Pam Bondi or another appointee. 

    That said, Li does not expect the new task force to substantially impact state regulation, at least in California. 

    “The AI litigation task force will focus on laws that are unconstitutional under the dormant commerce clause and First Amendment, preempted by federal law, or otherwise unlawful,” she told ZDNET. “The 10th Amendment, however, explicitly reserves rights to the states if there’s no federal law, or if there’s no preemption of state laws by a federal law.” 


    Besides a new request for information (RFI) from the Center for AI Standards and Innovation (CAISI) — formerly the AI Safety Institute — about the security risks of AI agents, the administration doesn’t appear to have offered a replacement for state laws. 

    “Federal HIPAA requirements allow for states to pass more stringent state healthcare privacy laws,” Li said. “Here, there is no federal AI law that would preempt many of the state laws, and Congress has rebuffed prior efforts to add federal AI preemption to past legislation.”

    Additional protections – and limits 

    California’s SB 53 also requires AI companies to protect whistleblowers. This stood out to Li, who noted that, unlike other parts of the law, which are mirrored in the EU AI Act and which many companies are therefore already prepared for, whistleblower protections are unique in tech. 

    “There really haven’t been a lot of cases in the AI space, obviously, because it’s new,” Li said. “I think that is a bigger concern for a lot of tech companies, because there is so much turnover in the tech space, and you don’t know what the market’s going to look like. This is something else that companies are worried about as part of the layoff process.” 

    She added that SB 53’s reporting requirements make companies more concerned about creating material that could be used in class-action lawsuits. 


    Gideon Futerman, special projects associate at the Center for AI Safety, doesn’t think SB 53 will meaningfully impact safety research. 

    “This won’t change the day-to-day much, largely because the EU AI Act already requires these disclosures,” he explained. “SB 53 doesn’t impose any new burden.” 

    Neither law requires AI labs to have their models tested by third parties, though as of this writing, New York's RAISE Act does mandate annual third-party audits. Still, Futerman considers SB 53 progress. 

    “It shows that AI safety regulation is possible and has political momentum. The amount of real safety work happening today is still far below what is needed,” he said. “Companies racing to build superintelligent AI while admitting these systems could pose extinction-level risks still do not really understand how their models work.” 

    Where this leaves AI safety

    “SB 53’s level of regulation is nothing compared to the dangers, but it’s a worthy first step on transparency and the first enforcement around catastrophic risk in the US. This is where we should have been years ago,” Futerman said. 

    Regardless of state and federal regulations, Li said governance has already become a higher priority for AI companies, driven by their bottom lines. Enterprise customers are pushing liability onto developers, and investors are noting privacy, cybersecurity, and governance in their funding decisions. 


    Still, she said that many companies are just flying under the radar of regulators while they can. 

    “Transparency alone doesn’t make systems safe, but it’s a crucial first step,” Futerman said. He hopes future legislation will fill remaining gaps in the national security strategy. 

    “That includes strengthening export controls and chip tracking, improving intelligence on frontier AI projects abroad, and coordinating with other nations on the military applications of AI to prevent unintended escalation,” he added.



    Source: www.zdnet.com
