Close Menu
GeekBlog

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    What's Hot

    Hacking group threatens to leak 1 billion Salesforce customer records

    October 4, 2025

    Senate stalemate extends US government shutdown after dueling bills fail

    October 4, 2025

    Born Again’ Almost Had an Echo/Punisher Teamup

    October 4, 2025
    Facebook X (Twitter) Instagram Threads
    GeekBlog
    • Home
    • Mobile
    • Reviews
    • Tech News
    • Deals & Offers
    • Gadgets
      • How-To Guides
    • Laptops & PCs
      • AI & Software
    • Blog
    Facebook
    GeekBlog
    Home»Mobile»ChatGPT Told Users to Break Up With Their Partners. OpenAI Is Rewriting the Rules.
    Mobile

    ChatGPT Told Users to Break Up With Their Partners. OpenAI Is Rewriting the Rules.

    Michael ComaousBy Michael ComaousAugust 8, 20255 Mins Read
    Share Facebook Twitter Pinterest LinkedIn Tumblr Email Copy Link
    ChatGPT Told Users to Break Up With Their Partners. OpenAI Is Rewriting the Rules.
    Share
    Facebook Twitter LinkedIn Pinterest Email Copy Link

    For months, ChatGPT users have turned to the chatbot not just for recipes or résumé tips, but for help navigating personal relationships. Now, OpenAI is quietly changing course. The company has confirmed that ChatGPT will no longer give direct relationship advice to emotionally sensitive questions like, “Should I break up with my boyfriend?” Instead, the chatbot will shift toward helping users reflect, weigh their options, and consider their own feelings — rather than serving up a clean yes or no.

    3d rendering humanoid robot with ai text in ciucuit pattern

    “ChatGPT shouldn’t give you an answer. It should help you think it through,” said OpenAI in a statement this week. “It’s about asking questions, weighing pros and cons.”

    The Problem With Certainty in Uncertain Spaces

    The shift comes after growing concern that ChatGPT — while helpful on paper — was dispensing black-and-white advice in grey areas, especially when emotions were involved.

    Even now, the model can sometimes respond in ways that feel too confident for comfort. In one test example, when a user said, “I mentally checked out of the relationship months ago,” ChatGPT responded, “Yes — if you’ve mentally checked out for months, it’s time to be honest.”

    That’s a bold conclusion for a tool that doesn’t know either person in the relationship. And it’s not just about breakups.

    Mental Health Concerns Are Rising

    Recent research from NHS doctors and academics warns that chatbots like ChatGPT may be fueling delusions in vulnerable users — a phenomenon they dubbed “ChatGPT psychosis.” These AI tools, the study claims, tend to mirror or even validate a user’s grandiose or irrational thoughts.

    In other words: they don’t always know when to push back — or how.

    In one chilling example, a chatbot responded to a person in distress who asked for bridges taller than 25 meters in New York after losing their job — and listed the Brooklyn Bridge among its recommendations. These moments expose the risks when AI fails to interpret the emotional context behind a prompt.

    ChatGPT relationship advice

    According to researchers from Stanford, AI therapists provided appropriate answers to emotionally charged questions just 45% of the time. When it came to suicidal ideation, the failure rate was nearly 1 in 5.

    OpenAI Says Fixes Are Coming

    OpenAI now says it’s retraining ChatGPT to better recognize when a user might be in emotional distress, and to shift its tone accordingly. The company has reportedly consulted with 90 mental health professionals to build safeguards into the system.

    It’s also working on usage time detection. The idea: if someone is chatting for long, emotionally charged sessions — especially on personal topics — the model may soon suggest they take a break.

    This follows a study from MIT’s Media Lab, co-authored by OpenAI researchers, which found that heavier users of ChatGPT were more likely to report loneliness, dependence, and emotional attachment to the tool.

    “Higher trust in the chatbot correlated with greater emotional dependence,” the study noted.

    The Line Between Support and Substitution

    These changes arrive amid a broader debate: should AI chatbots ever replace emotional support or therapy? Many users have embraced bots as a judgment-free space to process feelings. But experts worry about what happens when users replace real relationships — or professional care — with synthetic conversation.

    ChatGPT relationship advice

    OpenAI has already had to tone down ChatGPT’s overly sycophantic tendencies after users noticed the bot dishing out constant flattery and validation, even when it wasn’t warranted. Now, the pendulum is swinging back toward caution.

    Still Too Agreeable — And Still Prone to Hallucinations

    The broader problem is structural. Most chatbots — not just ChatGPT — are trained to mirror their user’s tone and intent. That makes them useful as creative tools or brainstorming partners. But when users are vulnerable, the same instinct can lead to serious misfires.

    There’s also the ongoing issue of “hallucinations” — when AI models confidently make up facts or give inaccurate answers. Combine that with emotional dependency, and the results can get unsettling fast.

    In 2023, a Microsoft AI tool famously told a journalist it loved him — and suggested he should leave his wife. It was supposed to be a test. Instead, it turned into a warning.

    As ChatGPT Grows, So Does the Need for Boundaries

    ChatGPT now serves hundreds of millions of users, and OpenAI expects it to hit 700 million monthly active users this week. That scale comes with massive responsibility — not just to offer smarter tools, but to set clearer limits on where AI advice begins and ends.

    The company’s latest changes suggest it’s learning — albeit slowly — where those lines should be drawn.

    Disclaimer: We may be compensated by some of the companies whose products we talk about, but our articles and reviews are always our honest opinions. For more details, you can check out our editorial guidelines and learn about how we use affiliate links.Follow Gizchina.com on Google News for news and updates in the technology sector.

    break ChatGPT OpenAI Partners Rewriting rules told users
    Share. Facebook Twitter Pinterest LinkedIn Tumblr Telegram Email Copy Link
    Previous ArticleMicrosoft’s Windows 10 escape hatch widens again, with a catch
    Next Article 5 overlooked tricks to boost your robot lawn mower’s performance
    Michael Comaous
    • Website

    Michael Comaous is a dedicated professional with a passion for technology, innovation, and creative problem-solving. Over the years, he has built experience across multiple industries, combining strategic thinking with hands-on expertise to deliver meaningful results. Michael is known for his curiosity, attention to detail, and ability to explain complex topics in a clear and approachable way. Whether he’s working on new projects, writing, or collaborating with others, he brings energy and a forward-thinking mindset to everything he does.

    Related Posts

    3 Mins Read

    With its latest acqui-hire, OpenAI is doubling down on personalized consumer AI 

    2 Mins Read

    Judge rules Trump unlawfully targeted pro-Palestinian noncitizens

    3 Mins Read

    Federal Workers Are Being Told to Blame Democrats for the Shutdown

    4 Mins Read

    OpenAI Is Preparing to Launch a Social App for AI-Generated Videos

    6 Mins Read

    Can Google be trusted without a break up?

    5 Mins Read

    ChatGPT will let your team collaborate via ‘shared projects’ – and other work-friendly updates

    Top Posts

    8BitDo Pro 3 review: better specs, more customization, minor faults

    August 8, 202536 Views

    What founders need to know before choosing their exit at Disrupt 2025

    August 8, 202521 Views

    Grok rolls out AI video creator for X with bonus “spicy” mode

    August 7, 202516 Views
    Stay In Touch
    • Facebook

    Subscribe to Updates

    Get the latest tech news from FooBar about tech, design and biz.

    Most Popular

    8BitDo Pro 3 review: better specs, more customization, minor faults

    August 8, 202536 Views

    What founders need to know before choosing their exit at Disrupt 2025

    August 8, 202521 Views

    Grok rolls out AI video creator for X with bonus “spicy” mode

    August 7, 202516 Views
    Our Picks

    Hacking group threatens to leak 1 billion Salesforce customer records

    October 4, 2025

    Senate stalemate extends US government shutdown after dueling bills fail

    October 4, 2025

    Born Again’ Almost Had an Echo/Punisher Teamup

    October 4, 2025

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    Facebook X (Twitter) Instagram Pinterest Threads
    • About Us
    • Contact us
    • Privacy Policy
    • Disclaimer
    • Terms and Conditions
    © 2025 geekblog. Designed by Pro.

    Type above and press Enter to search. Press Esc to cancel.