
    Anthropic says some Claude models can now end ‘harmful or abusive’ conversations 

By Michael Comaous · August 16, 2025

    Anthropic has announced new capabilities that will allow some of its newest, largest models to end conversations in what the company describes as “rare, extreme cases of persistently harmful or abusive user interactions.” Strikingly, Anthropic says it’s doing this not to protect the human user, but rather the AI model itself.

    To be clear, the company isn’t claiming that its Claude AI models are sentient or can be harmed by their conversations with users. In its own words, Anthropic remains “highly uncertain about the potential moral status of Claude and other LLMs, now or in the future.”

    However, its announcement points to a recent program created to study what it calls “model welfare” and says Anthropic is essentially taking a just-in-case approach, “working to identify and implement low-cost interventions to mitigate risks to model welfare, in case such welfare is possible.”

    This latest change is currently limited to Claude Opus 4 and 4.1. And again, it’s only supposed to happen in “extreme edge cases,” such as “requests from users for sexual content involving minors and attempts to solicit information that would enable large-scale violence or acts of terror.”

While such requests could create legal or publicity problems for Anthropic itself (witness recent reporting on how ChatGPT can reinforce or contribute to its users’ delusional thinking), the company says that in pre-deployment testing, Claude Opus 4 showed a “strong preference against” responding to these requests and a “pattern of apparent distress” when it did so.

    As for these new conversation-ending capabilities, the company says, “In all cases, Claude is only to use its conversation-ending ability as a last resort when multiple attempts at redirection have failed and hope of a productive interaction has been exhausted, or when a user explicitly asks Claude to end a chat.”

    Anthropic also says Claude has been “directed not to use this ability in cases where users might be at imminent risk of harming themselves or others.”


When Claude does end a conversation, Anthropic says users will still be able to start new conversations from the same account, and to create new branches of the ended conversation by editing their messages.

    “We’re treating this feature as an ongoing experiment and will continue refining our approach,” the company says.
