
    Hackers discovered a sneaky way to steal data by hiding malicious prompts inside images processed by large language models

By Michael Comaous · August 31, 2025

[Image: hacker with malware code on a computer screen]

    • Malicious prompts remain invisible until image downscaling reveals hidden instructions
    • The attack works by exploiting how AI resamples uploaded images
    • Bicubic interpolation can expose black text from specially crafted images

    As AI tools become more integrated into daily work, the security risks attached to them are also evolving in new directions.

    Researchers at Trail of Bits have demonstrated a method where malicious prompts are hidden inside images and then revealed during processing by large language models.

    The technique takes advantage of how AI platforms downscale images for efficiency, exposing patterns that are invisible in their original form but legible to the algorithm once resized.
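The core trick can be sketched in a few lines. This is an illustrative toy, not the Trail of Bits exploit: it uses a 1-D "image" and nearest-neighbor downscaling (keep every second sample) to show how a pattern that averages out to bright at full resolution survives resampling intact.

```python
def nearest_downscale(pixels, factor):
    """Nearest-neighbor downscale: keep every `factor`-th sample."""
    return pixels[::factor]

# Hidden payload lives in the even positions (dark = 0 means "ink"),
# camouflage in the odd positions (bright = 255).
hidden = [0, 0, 255, 0, 255, 0, 0, 255]  # the concealed pattern

full_res = []
for h in hidden:
    full_res.append(h)    # even index: payload sample
    full_res.append(255)  # odd index: camouflage sample

# Downscaling by 2 discards every camouflage sample and
# recovers exactly the hidden pattern.
revealed = nearest_downscale(full_res, 2)
assert revealed == hidden
```

In a real 2-D image the same idea gives the attacker even more room: with nearest-neighbor resampling only one pixel per block survives, so the remaining pixels are free to disguise the payload.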



    Hidden instructions in downscaled images

    The idea builds on a 2020 paper from TU Braunschweig in Germany, which suggested that image scaling could be used as an attack surface for machine learning.

    Trail of Bits showed how crafted images could manipulate systems, including Gemini CLI, Vertex AI Studio, Google Assistant on Android, and Gemini’s web interface.

    In one case, Google Calendar data was siphoned to an external email address without user approval, highlighting the real-world potential of the threat.

    The attack leverages interpolation methods like nearest neighbor, bilinear, or bicubic resampling.
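The choice of resampler matters because each method collapses the same source pixels to different values. A small sketch (my own code, using a box average as a crude stand-in for bilinear filtering) shows why an attack image must be crafted for the specific resampling method the target platform uses:

```python
row = [0, 255, 0, 255]  # crafted full-resolution samples

# Nearest neighbor at 2x: keep samples 0 and 2 -> dark pattern revealed.
nearest = row[::2]

# Box average at 2x (stand-in for bilinear): average adjacent pairs
# -> the same pixels read as uniform mid-gray, payload lost.
box = [(row[i] + row[i + 1]) // 2 for i in range(0, len(row), 2)]

assert nearest == [0, 0]
assert box == [127, 127]
```

Bicubic resampling weights a 4x4 neighborhood unevenly, which is what Trail of Bits exploited to make dark text emerge from areas that look uniform at full resolution.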


    When an image is intentionally prepared, downscaling introduces aliasing artifacts that reveal concealed text.

    In a demonstration, dark areas shifted during bicubic resampling to display hidden black text, which the LLM then interpreted as user input.

    From the user’s perspective, nothing unusual appears to happen. Yet behind the scenes, the model follows the embedded instructions along with legitimate prompts.

    To illustrate the risk, Trail of Bits created “Anamorpher,” an open-source tool that generates such images for different scaling methods.

    This shows that while the approach is specialized, it could be repeated by others if defenses are lacking.

    The attack raises questions about trust in multimodal AI systems because many platforms now rely on them for routine work, and a simple image upload could potentially trigger unintended data access.

    If private or sensitive information is exfiltrated this way, the danger extends to identity theft.

    Because these models often link with calendars, communications platforms, or workflow tools, the risk extends into broader contexts.

    To mitigate this, Trail of Bits recommends restricting input dimensions, previewing the downscaled result before it reaches the model, and requiring explicit confirmation for sensitive tool calls.
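Two of those mitigations are easy to sketch. The function below is a hedged illustration (the names and the limit are my own, not an API from Trail of Bits or any platform): it rejects oversized uploads and returns the same nearest-neighbor downscaled preview the model pipeline would see, so a human can inspect it before the image reaches the LLM.

```python
MAX_DIM = 512  # hypothetical platform limit on upload dimensions

def check_and_preview(width, height, pixels, target=64):
    """Reject images over MAX_DIM; otherwise return a downscaled
    preview at roughly the model's input resolution for human review.

    `pixels` is a list of rows of grayscale values. The preview uses
    the SAME resampling the model pipeline is assumed to use, so any
    text revealed by downscaling is visible to the reviewer too.
    """
    if width > MAX_DIM or height > MAX_DIM:
        raise ValueError(f"image {width}x{height} exceeds {MAX_DIM}px limit")
    step = max(1, width // target)
    return [row[::step] for row in pixels[::step]]
```

The key design point is consistency: a preview rendered with a *different* resampler than the model's would not show the hidden payload, which defeats the purpose of the check.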

    Traditional defenses like firewalls are not built to identify this form of manipulation, leaving a gap that attackers may eventually exploit.

    The researchers stress that only layered defenses and stronger design patterns can reliably limit such risks.

    “The strongest defense, however, is to implement secure design patterns and systematic defenses that mitigate impactful prompt injection beyond multimodal prompt injection,” the researchers said.
