    Proliferation of on-premise GenAI platforms is widening security risks

    By Michael Comaous | August 4, 2025

    The three months to the end of May this year saw a 50% spike in the use of generative artificial intelligence (GenAI) platforms among enterprise end users, according to a report. While security teams work to facilitate the safe adoption of software-as-a-service (SaaS) AI frameworks such as Azure OpenAI, Amazon Bedrock and Google Vertex AI, unsanctioned on-premise shadow AI now accounts for half of AI application adoption in the enterprise and is compounding security risks.

    The study, compiled by data protection and threat prevention platform supplier Netskope, examined the growing shift among users towards on-premise GenAI platforms, which they mostly use to build out their own AI agents and applications.

    These platforms, which include tools such as Ollama, LM Studio and Ramalama, are now the fastest-growing category of shadow AI, due to their relative ease of use and flexibility, said Netskope. But, in using them to expedite their projects, employees are granting the platforms access to enterprise data stores and leaving the doors wide open to data leakage or outright theft.

    “The rapid growth of shadow AI places the onus on organisations to identify who is creating new AI apps and AI agents using GenAI platforms and where they are building and deploying them,” said Ray Canzanese, director of Netskope Threat Labs.

    “Security teams don’t want to hamper employee end users’ innovation aspirations, but AI usage is only going to increase. To safeguard this innovation, organisations need to overhaul their AI app controls and evolve their DLP [data loss prevention] policies to incorporate real-time user coaching elements.”

    Probably the most popular way to use GenAI locally is to deploy a large language model (LLM) interface, which enables interaction with various models from the same “store front”.
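
    As an illustration of how low the barrier is, the sketch below (Python) sends a prompt to a locally hosted model through Ollama’s HTTP API, which listens on port 11434 by default. The model name and prompt are placeholder assumptions, not details from the report.

```python
# Minimal sketch: querying a locally hosted model via Ollama's HTTP API.
# Assumes Ollama is running on its default port (11434) and that a model
# tagged "llama3" has already been pulled -- both are illustrative assumptions.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default local endpoint

payload = json.dumps({
    "model": "llama3",                        # any locally pulled model tag
    "prompt": "Summarise our internal Q3 sales notes.",
    "stream": False,                          # ask for a single JSON response
}).encode("utf-8")

request = urllib.request.Request(
    OLLAMA_URL,
    data=payload,
    headers={"Content-Type": "application/json"},
)

# No credentials are sent: the local API simply accepts the request,
# which is the convenience (and the exposure) described above.
with urllib.request.urlopen(request, timeout=120) as response:
    print(json.loads(response.read())["response"])
```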

    Ollama is the most popular of these frameworks by some margin. However, unlike the most widely used SaaS options, it does not include inbuilt authentication, which means users must go out of their way to deploy it behind a reverse proxy or a private access solution that is appropriately secured with fit-for-purpose authentication. This is not an easy ask for the average user.
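
    Checking for that exposure is straightforward for a security team. The sketch below, assuming Ollama’s default port and its /api/tags route (which lists installed models), probes a few hosts to see whether the API answers without any credentials; the host list is a placeholder.

```python
# Minimal sketch: flagging Ollama endpoints that answer without credentials.
# The hosts are placeholders; 11434 is Ollama's default port and /api/tags
# is its model-listing route.
import json
import urllib.request

HOSTS = ["localhost", "10.0.0.15"]   # hypothetical hosts to audit
PORT = 11434                         # Ollama's default port

for host in HOSTS:
    url = f"http://{host}:{PORT}/api/tags"
    try:
        with urllib.request.urlopen(url, timeout=3) as resp:
            models = [m["name"] for m in json.load(resp).get("models", [])]
        print(f"{host}: unauthenticated Ollama API exposed, models: {models}")
    except OSError:
        print(f"{host}: no open Ollama endpoint found")
```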

    Furthermore, while OpenAI, Bedrock, Vertex et al provide guardrails against model abuse, Ollama users must take steps themselves to prevent misuse.

    Netskope said on-premise GenAI does have some benefits: it can help organisations leverage pre-existing investment in GPU resources, for example, or build tools that interact better with their other on-premise systems and datasets. However, these may well be outweighed by the fact that, in using them, organisations bear sole responsibility for the security of their GenAI infrastructure in a way they would not with a SaaS-based option.

    Netskope’s analysts are now tracking approximately 1,550 distinct GenAI SaaS applications. Its customers can identify unapproved apps and personal logins by running focused searches within its platform for activity classed as “generative AI”. Another way to track usage is to monitor who is accessing AI marketplaces such as Hugging Face.
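
    Even a plain web proxy log can surface that marketplace traffic. The sketch below counts requests to Hugging Face domains per user; the log path and its simple “user domain” field layout are assumptions that would need adapting to a real proxy’s log format.

```python
# Minimal sketch: counting per-user requests to AI marketplaces in a proxy log.
# The log path and the "user domain ..." field layout are assumptions; adapt
# the parsing to whatever your proxy actually writes.
from collections import Counter

MARKETPLACE_DOMAINS = ("huggingface.co", "hf.co")
LOG_PATH = "/var/log/proxy/access.log"   # hypothetical log location

hits = Counter()
with open(LOG_PATH, encoding="utf-8") as log:
    for line in log:
        fields = line.split()
        if len(fields) < 2:
            continue
        user, domain = fields[0], fields[1]
        if any(domain == m or domain.endswith("." + m) for m in MARKETPLACE_DOMAINS):
            hits[user] += 1

for user, count in hits.most_common():
    print(f"{user}: {count} requests to AI marketplaces")
```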

    Besides identifying the use of such tools, IT and security leaders should consider formulating and enforcing policies that restrict employee access to approved services, blocking unapproved ones, implementing DLP to account for data sharing in GenAI tools, and adopting real-time user coaching to nudge users towards approved tools and sensible practice.
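
    At its simplest, the access-control piece of that policy is a lookup against an approved list, with everything else flagged for blocking and a coaching prompt. The sketch below illustrates the idea; the approved domains and observed traffic are purely illustrative.

```python
# Minimal sketch: sorting observed GenAI destinations into approved and
# unapproved buckets. The approved set and the observed domains are
# illustrative placeholders, not a recommended policy.
APPROVED_GENAI_DOMAINS = {
    "openai.azure.com",            # Azure OpenAI
    "bedrock.amazonaws.com",       # Amazon Bedrock
    "aiplatform.googleapis.com",   # Google Vertex AI
}

observed = ["chat.example-genai.app", "openai.azure.com", "hf.co"]

for domain in observed:
    if domain in APPROVED_GENAI_DOMAINS:
        print(f"{domain}: allow")
    else:
        print(f"{domain}: block and show real-time coaching message")
```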

    Adopting continuous monitoring of GenAI use and conducting an inventory of local GenAI infrastructure against frameworks provided by the likes of NIST, OWASP and Mitre are also advisable.
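
    A starting point for that inventory is checking which hosts expose the default ports of common local GenAI tools. The sketch below assumes 11434 for Ollama and 1234 for LM Studio’s local server (their usual defaults) and a placeholder host list; anything running on a non-standard port would need a fuller scan.

```python
# Minimal sketch: rough inventory of local GenAI services by default port.
# 11434 (Ollama) and 1234 (LM Studio's local server) are the tools' usual
# defaults; the host list is a placeholder. Non-default ports need a fuller scan.
import socket

DEFAULT_PORTS = {11434: "Ollama", 1234: "LM Studio"}
HOSTS = ["127.0.0.1", "10.0.0.15"]   # hypothetical hosts to inventory

for host in HOSTS:
    for port, tool in DEFAULT_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(1)
            if sock.connect_ex((host, port)) == 0:
                print(f"{host}:{port} open -- possible {tool} instance")
```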

    “Agentic shadow AI is like a person coming into your office every day, handling data, taking actions on systems, and all while not being background-checked or having security monitoring in place,” warned the report’s authors.
