
Researchers Warn Open-Source AI Models Vulnerable to Criminal Hijacking

Researchers found 7.5% of 175,000 exposed Ollama AI deployments could enable harmful uses including hacking and data theft, with 30% of hosts in China and 20% in the US.

  • On Thursday, SentinelLABS and Censys found thousands of open-source LLM deployments exposed worldwide, vulnerable to exploitation, researchers said.
  • Guardrail failures left open-source LLM instances unsafe: many public-internet hosts enabled tool-calling, vision, and uncensored prompts, and these unmonitored deployments likely go unnoticed, researchers said.
  • Notably, the scan found 175,108 unique Ollama hosts in 130 countries, with roughly 30% in China and about 20% in the US, and system prompts visible in roughly a quarter of LLMs.
  • Researchers warned the greatest risks include resource hijacking, remote privileged execution, and identity laundering affecting victim infrastructure, with 7.5% of affected deployments enabling harmful activity.
  • Researchers concluded that "LLMs are increasingly deployed to the edge to translate instructions into actions" and "must be treated with the same authentication, monitoring, and network controls as other externally accessible infrastructure"; originating labs retain a duty to document risks and provide mitigation tools.
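The exposure described above stems from Ollama's HTTP API, which by default answers on port 11434 without authentication. As a minimal sketch of the kind of check the researchers' scan performed, the snippet below queries the real `GET /api/tags` endpoint, which lists installed models on an open instance; the `list_exposed_models` helper and its defaults are illustrative, not part of any published scanner.

```python
import json
import urllib.request


def parse_models(payload: dict) -> list[str]:
    """Extract model names from an Ollama /api/tags JSON response."""
    return [m["name"] for m in payload.get("models", [])]


def list_exposed_models(host: str, port: int = 11434, timeout: float = 5.0) -> list[str]:
    """Return model names if the Ollama API at host:port is openly reachable.

    Raises URLError/HTTPError if the host is unreachable or protected.
    """
    url = f"http://{host}:{port}/api/tags"
    with urllib.request.urlopen(url, timeout=timeout) as resp:
        return parse_models(json.load(resp))


if __name__ == "__main__":
    # Only probe hosts you own or are explicitly authorized to test.
    print(list_exposed_models("127.0.0.1"))
```

An instance that answers this request with its model list is, by the researchers' criteria, externally reachable without the authentication controls they recommend.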
Insights by Ground AI

23 Articles

Reuters (Center) · reposted by 5 other sources

Open-source AI models vulnerable to criminal misuse, researchers warn

Hackers and other criminals can easily commandeer computers operating open-source large language models outside the guardrails and constraints of the major artificial-intelligence platforms, creating security risks and vulnerabilities, researchers said on Thursday.

United Kingdom

Cybercriminals adopt AI to scale massive attacks and affect businesses. San José, 01 Feb (elmundo.cr) – Artificial intelligence (AI) has ceased to be just a tool for innovation; today it has also become a resource for cybercrime. A recent analysis by Unit 42, Palo Alto Networks' threat-intelligence team, warns that large language models modified to remove security restrictions (known as "dark LLMs") are driving …


Bias Distribution

  • 62% of the sources are Center


IT PRO broke the news on Thursday, June 5, 2025.

