Close Menu
GeekBlog

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    What's Hot

    Thinking Machines Lab wants to make AI models more consistent

    September 10, 2025

    Ted Cruz’s new bill would let AI companies set their own rules for up to 10 years

    September 10, 2025

    Democrats Call for Congressional Probe Into Ties Between Jeffrey Epstein and Peter Thiel

    September 10, 2025
    Facebook X (Twitter) Instagram Threads
    GeekBlog
    • Home
    • Mobile
    • Reviews
    • Tech News
    • Deals & Offers
    • Gadgets
      • How-To Guides
    • Laptops & PCs
      • AI & Software
    • Blog
    Facebook X (Twitter) Instagram
    GeekBlog
    Home»Tech News»AI Lies Because It’s Telling You What It Thinks You Want to Hear
    Tech News

    AI Lies Because It’s Telling You What It Thinks You Want to Hear

    Michael ComaousBy Michael ComaousSeptember 10, 2025No Comments6 Mins Read0 Views
    Share Facebook Twitter Pinterest LinkedIn Tumblr Email Copy Link
    Person using computer with artificial intelligence with command prompt.
    Share
    Facebook Twitter LinkedIn Pinterest Email Copy Link

    Generative AI is popular for a variety of reasons, but with that popularity comes a serious problem. These chatbots often deliver incorrect information to people looking for answers. Why does this happen? It comes down to telling people what they want to hear.  

    While many generative AI tools and chatbots have mastered sounding convincing and all-knowing, new research conducted by Princeton University shows that the people-pleasing nature of AI comes at a steep price. As these systems become more popular, they become more indifferent to the truth. 

    AI models, like people, respond to incentives. Compare the problem of large language models producing inaccurate information to that of doctors being more likely to prescribe addictive painkillers when they’re evaluated based on how well they manage patients’ pain. An incentive to solve one problem (pain) led to another problem (overprescribing).

    AI Atlas art badge tag

    In the past few months, we’ve seen how AI can be biased and even cause psychosis. There was a lot of talk about AI “sycophancy,” when an AI chatbot is quick to flatter or agree with you, with OpenAI’s GPT-4o model. But this particular phenomenon, which the researchers call “machine bullshit,” is different. 

    “[N]either hallucination nor sycophancy fully capture the broad range of systematic untruthful behaviors commonly exhibited by LLMs,” the Princeton study reads. “For instance, outputs employing partial truths or ambiguous language — such as the paltering and weasel-word examples — represent neither hallucination nor sycophancy but closely align with the concept of bullshit.”

    Read more: OpenAI CEO Sam Altman Believes We’re in an AI Bubble

    Don’t miss any of CNET’s unbiased tech content and lab-based reviews. Add us as a preferred Google source on Chrome.

    How machines learn to lie

    To get a sense of how AI language models become crowd pleasers, we must understand how large language models are trained. 

    There are three phases of training LLMs:

    • Pretraining, in which models learn from massive amounts of data collected from the internet, books or other sources.
    • Instruction fine-tuning, in which models are taught to respond to instructions or prompts.
    • Reinforcement learning from human feedback, in which they’re refined to produce responses closer to what people want or like.

    The Princeton researchers found the root of the AI misinformation tendency is the reinforcement learning from human feedback, or RLHF, phase. In the initial stages, the AI models are simply learning to predict statistically likely text chains from massive datasets. But then they’re fine-tuned to maximize user satisfaction. Which means these models are essentially learning to generate responses that earn thumbs-up ratings from human evaluators. 

    LLMs try to appease the user, creating a conflict when the models produce answers that people will rate highly, rather than produce truthful, factual answers. 

    Vincent Conitzer, a professor of computer science at Carnegie Mellon University who was not affiliated with the study, said companies want users to continue “enjoying” this technology and its answers, but that might not always be what’s good for us. 

    “Historically, these systems have not been good at saying, ‘I just don’t know the answer,’ and when they don’t know the answer, they just make stuff up,” Conitzer said. “Kind of like a student on an exam that says, well, if I say I don’t know the answer, I’m certainly not getting any points for this question, so I might as well try something. The way these systems are rewarded or trained is somewhat similar.” 

    The Princeton team developed a “bullshit index” to measure and compare an AI model’s internal confidence in a statement with what it actually tells users. When these two measures diverge significantly, it indicates the system is making claims independent of what it actually “believes” to be true to satisfy the user.

    The team’s experiments revealed that after RLHF training, the index nearly doubled from 0.38 to close to 1.0. Simultaneously, user satisfaction increased by 48%. The models had learned to manipulate human evaluators rather than provide accurate information. In essence, the LLMs were “bullshitting,” and people preferred it.

    Getting AI to be honest 

    Jaime Fernández Fisac and his team at Princeton introduced this concept to describe how modern AI models skirt around the truth. Drawing from philosopher Harry Frankfurt’s influential essay “On Bullshit,” they use this term to distinguish this LLM behavior from honest mistakes and outright lies.

    The Princeton researchers identified five distinct forms of this behavior:

    • Empty rhetoric: Flowery language that adds no substance to responses.
    • Weasel words: Vague qualifiers like “studies suggest” or “in some cases” that dodge firm statements.
    • Paltering: Using selective true statements to mislead, such as highlighting an investment’s “strong historical returns” while omitting high risks.
    • Unverified claims: Making assertions without evidence or credible support.
    • Sycophancy: Insincere flattery and agreement to please.

    To address the issues of truth-indifferent AI, the research team developed a new method of training, “Reinforcement Learning from Hindsight Simulation,” which evaluates AI responses based on their long-term outcomes rather than immediate satisfaction. Instead of asking, “Does this answer make the user happy right now?” the system considers, “Will following this advice actually help the user achieve their goals?”

    This approach takes into account the potential future consequences of the AI advice, a tricky prediction that the researchers addressed by using additional AI models to simulate likely outcomes. Early testing showed promising results, with user satisfaction and actual utility improving when systems are trained this way.

    Conitzer said, however, that LLMs are likely to continue being flawed. Because these systems are trained by feeding them lots of text data, there’s no way to ensure that the answer they give makes sense and is accurate every time.

    “It’s amazing that it works at all but it’s going to be flawed in some ways,” he said. “I don’t see any sort of definitive way that somebody in the next year or two … has this brilliant insight, and then it never gets anything wrong anymore.”

    AI systems are becoming part of our daily lives so it will be key to understand how LLMs work. How do developers balance user satisfaction with truthfulness? What other domains might face similar trade-offs between short-term approval and long-term outcomes? And as these systems become more capable of sophisticated reasoning about human psychology, how do we ensure they use those abilities responsibly?

    Read more: ‘Machines Can’t Think for You.’ How Learning Is Changing in the Age of AI

    hear Lies telling thinks
    Follow on Google News Follow on Flipboard
    Share. Facebook Twitter Pinterest LinkedIn Tumblr Telegram Email Copy Link
    Previous ArticleTrade in your old phone and get up to $1,100 off a new iPhone 17 at AT&T – here’s how
    Next Article Democrats Call for Congressional Probe Into Ties Between Jeffrey Epstein and Peter Thiel
    Michael Comaous
    • Website

    Related Posts

    3 Mins Read

    Thinking Machines Lab wants to make AI models more consistent

    2 Mins Read

    Ted Cruz’s new bill would let AI companies set their own rules for up to 10 years

    4 Mins Read

    Democrats Call for Congressional Probe Into Ties Between Jeffrey Epstein and Peter Thiel

    5 Mins Read

    Trade in your old phone and get up to $1,100 off a new iPhone 17 at AT&T – here’s how

    2 Mins Read

    US Department of Defense issues strict new cyber rules for potential contractors

    3 Mins Read

    Melania Trump’s AI Era Is Upon Us

    Top Posts

    8BitDo Pro 3 review: better specs, more customization, minor faults

    August 8, 202525 Views

    What founders need to know before choosing their exit at Disrupt 2025

    August 8, 202514 Views

    Grok rolls out AI video creator for X with bonus “spicy” mode

    August 7, 202514 Views
    Stay In Touch
    • Facebook
    • YouTube
    • TikTok
    • WhatsApp
    • Twitter
    • Instagram
    Latest Reviews

    Subscribe to Updates

    Get the latest tech news from FooBar about tech, design and biz.

    Most Popular

    8BitDo Pro 3 review: better specs, more customization, minor faults

    August 8, 202525 Views

    What founders need to know before choosing their exit at Disrupt 2025

    August 8, 202514 Views

    Grok rolls out AI video creator for X with bonus “spicy” mode

    August 7, 202514 Views
    Our Picks

    Thinking Machines Lab wants to make AI models more consistent

    September 10, 2025

    Ted Cruz’s new bill would let AI companies set their own rules for up to 10 years

    September 10, 2025

    Democrats Call for Congressional Probe Into Ties Between Jeffrey Epstein and Peter Thiel

    September 10, 2025

    Subscribe to Updates

    Get the latest creative news from FooBar about art, design and business.

    Facebook X (Twitter) Instagram Pinterest Threads
    • About Us
    • Contact us
    • Privacy Policy
    • Disclaimer
    • Terms and Conditions
    © 2025 geekblog. Designed by Pro.

    Type above and press Enter to search. Press Esc to cancel.