Researchers Warn Open-Source AI Models Vulnerable to Criminal Hijacking
Researchers found that 7.5% of about 175,000 exposed Ollama AI deployments could enable harmful uses, including hacking and data theft; roughly 30% of the hosts were in China and 20% in the US.
- On Thursday, researchers at SentinelLABS and Censys reported finding thousands of open-source LLM deployments exposed to the public internet worldwide and vulnerable to exploitation.
- Guardrail failures left many open-source LLM instances unsafe: numerous hosts on the public internet enabled tool-calling, vision capabilities, and uncensored prompts, and these unmonitored deployments likely go unnoticed by their operators, researchers said.
- Notably, the scan found 175,108 unique Ollama hosts in 130 countries, with roughly 30% in China and about 20% in the US; system prompts were visible on roughly a quarter of the exposed models (see the probing sketch after this list).
- Researchers warned that the greatest risks to victim infrastructure include resource hijacking, remote privileged execution, and identity laundering, with 7.5% of the exposed deployments enabling harmful activity.
- Researchers concluded that LLMs "are increasingly deployed to the edge to translate instructions into actions" and "must be treated with the same authentication, monitoring, and network controls as other externally accessible infrastructure"; originating labs, they added, retain a duty to document risks and provide mitigation tools.
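
To make the exposure concrete, the following is a minimal sketch, assuming a hypothetical host address and nothing beyond Ollama's documented REST endpoints (`/api/tags` and `/api/generate`), of how an unauthenticated deployment can be fingerprinted and used by anyone who can reach it:

```python
# A minimal probing sketch, assuming a hypothetical exposed host.
# The IP below is from the TEST-NET-3 documentation range, not a real target.
# Ollama's REST API (default port 11434) accepts requests with no
# authentication, so any reachable instance answers these calls.
import requests

EXPOSED_HOST = "http://203.0.113.10:11434"  # placeholder address for illustration

# /api/tags lists every model the server has pulled -- no credentials needed.
tags = requests.get(f"{EXPOSED_HOST}/api/tags", timeout=5).json()
model_names = [m["name"] for m in tags.get("models", [])]
print("hosted models:", model_names)

# /api/generate runs a completion on the victim's hardware. An open host
# effectively donates free inference (the resource-hijacking risk above),
# and an uncensored model will also answer harmful prompts.
if model_names:
    resp = requests.post(
        f"{EXPOSED_HOST}/api/generate",
        json={"model": model_names[0], "prompt": "Hello", "stream": False},
        timeout=60,
    )
    print(resp.json().get("response"))
```

Ollama binds to 127.0.0.1:11434 by default; exposure on this scale typically comes from setting OLLAMA_HOST to 0.0.0.0 (common in container or remote-access setups) without a firewall or an authenticating reverse proxy in front, which is exactly the kind of network control the researchers recommend.
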
23 Articles
AI open models have benefits. So why aren’t they more widely used?
Open-source AI models vulnerable to criminal misuse, researchers warn
Hackers and other criminals can easily commandeer computers operating open-source large language models outside the guardrails and constraints of the major artificial-intelligence platforms, creating security risks and vulnerabilities, researchers said on Thursday.
Open-source AI models without guardrails vulnerable to criminal misuse, researchers warn
Cybersecurity researchers from SentinelOne and Censys have reportedly issued a warning about the susceptibility of open-source large language models to criminal exploitation. The research...
Cybercriminals adopt AI to scale massive attacks and harm businesses. San José, 01 Feb (elmundo.cr) – Artificial intelligence (AI) is no longer just a tool for innovation; today it has also become a resource for cybercrime. According to a recent analysis by Unit 42, Palo Alto Networks' threat intelligence team, large language models modified to remove security restrictions (known as dark LLMs) are driving …
Coverage Details
Bias Distribution
- 62% of the sources are Center