GeekBlog
    Why it’s a mistake to ask chatbots about their mistakes

By Michael Comaous, August 13, 2025
[Image: The Thinker by Auguste Rodin, stock photo]

    The randomness inherent in AI text generation compounds this problem. Even with identical prompts, an AI model might give slightly different responses about its own capabilities each time you ask.
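That randomness comes from how decoding works: instead of always picking the single most likely next token, models typically sample from a probability distribution. Here is a minimal toy sketch of that sampling step (this is not a real LLM; the distribution and function names are invented for illustration) showing why an identical "prompt" can still yield different outputs across runs:

```python
import random

def sample_next_token(probs, rng):
    """Pick a token proportionally to its probability (inverse-CDF sampling)."""
    r = rng.random()
    cumulative = 0.0
    for token, p in probs.items():
        cumulative += p
        if r < cumulative:
            return token
    return token  # fallback for floating-point rounding at the tail

# The same distribution every call -- the "identical prompt".
next_token_probs = {"yes": 0.5, "no": 0.3, "maybe": 0.2}

rng = random.Random()  # unseeded: draws differ across runs
answers = {sample_next_token(next_token_probs, rng) for _ in range(50)}
print(answers)  # usually more than one distinct answer
```

The same mechanism applies when a model answers questions about itself: each run re-rolls the dice over plausible-sounding continuations.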

    Other layers also shape AI responses

    Even if a language model somehow had perfect knowledge of its own workings, other layers of an AI chatbot application might be completely opaque to it. Modern AI assistants like ChatGPT, for example, aren't single models but orchestrated systems of multiple AI models working together, each largely "unaware" of the others' existence or capabilities. OpenAI, for instance, uses separate moderation models whose operations are completely separate from the language models generating the base text.

    When you ask ChatGPT about its capabilities, the language model generating the response has no knowledge of what the moderation layer might block, what tools might be available in the broader system, or what post-processing might occur. It’s like asking one department in a company about the capabilities of a department it has never interacted with.
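The separation described above can be sketched in a few lines. This is a hypothetical pipeline, not OpenAI's actual architecture; the function names (`generate_reply`, `moderation_check`) are invented for illustration. The key point is structural: the generator never learns what the moderation layer did, so it cannot accurately explain a blocked response.

```python
def generate_reply(prompt: str) -> str:
    """Stand-in for a language model: it knows nothing about moderation."""
    return f"Here is an answer to: {prompt}"

def moderation_check(text: str) -> bool:
    """Stand-in for a separate moderation model with its own rules."""
    blocked_terms = {"forbidden"}
    return not any(term in text.lower() for term in blocked_terms)

def chatbot(prompt: str) -> str:
    draft = generate_reply(prompt)
    if not moderation_check(draft):
        # The generator is never told this happened. If you later ask it
        # "why was my message blocked?", it can only confabulate a reason.
        return "[response withheld by moderation layer]"
    return draft

print(chatbot("What is Python?"))
print(chatbot("Tell me something forbidden"))
```

Each component sees only its own inputs and outputs, like the departments in the analogy above.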

    Perhaps most importantly, users are always directing the AI’s output through their prompts, even when they don’t realize it. When Lemkin asked Replit whether rollbacks were possible after a database deletion, his concerned framing likely prompted a response that matched that concern—generating an explanation for why recovery might be impossible rather than accurately assessing actual system capabilities.

    This creates a feedback loop where worried users asking “Did you just destroy everything?” are more likely to receive responses confirming their fears, not because the AI system has assessed the situation, but because it’s generating text that fits the emotional context of the prompt.
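The feedback loop can be caricatured as follows. This toy sketch is invented for illustration (a real LLM shifts probabilities statistically across billions of parameters, not via keyword rules): a worried framing makes worried-sounding completions more likely, regardless of the actual system state.

```python
def toy_completion_odds(prompt: str) -> dict:
    """Return made-up probabilities for two candidate replies."""
    worried_words = {"destroy", "destroyed", "lost", "ruined"}
    if any(word in prompt.lower() for word in worried_words):
        # Text fitting the emotional context of the prompt
        # becomes more probable.
        return {"recovery is impossible": 0.8, "rollback is available": 0.2}
    return {"recovery is impossible": 0.3, "rollback is available": 0.7}

neutral = toy_completion_odds("Is a rollback possible?")
worried = toy_completion_odds("Did you just destroy everything?!")
print(neutral["recovery is impossible"], worried["recovery is impossible"])
```

In both cases the underlying facts are identical; only the framing of the question changed the likely answer.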

    A lifetime of hearing humans explain their actions and thought processes has led us to believe that written explanations must have some level of self-knowledge behind them. That just isn't true of LLMs, which merely mimic those text patterns to guess at their own capabilities and flaws.
