March 25, 2026
AI Cybersecurity Workforce
There you have it! Someone just poisoned the #Python package that manages AI API keys across thousands of companies. 97 million downloads a month. A simple pip install was enough to steal everything on your machine.
LiteLLM — the open-source proxy that routes your OpenAI, Anthropic, Google, and Amazon AI credentials through one place. Versions 1.82.7 and 1.82.8 were published directly to PyPI on March 24 with credential-stealing malware baked in. No code on GitHub. No release tag. No review. The malware fired the SECOND the package existed on your machine — you didn't even need to import it.
This is exactly why Ask Sage, a BigBear.ai company, built its own LLM routing stack from scratch instead of using LiteLLM. Security review in AI-related open-source projects is almost NON-EXISTENT. Everyone wants to move fast with AI. Nobody wants to audit what's underneath it.
The attack chain gets worse with every sentence.
A group called TeamPCP compromised #Trivy FIRST — a security scanning tool (ironic, isn't it?). LiteLLM used Trivy in its own CI pipeline. So they stole the credentials from the SECURITY product and used them to hijack the AI product that holds ALL your other credentials. Then they hit GitHub Actions. Then Docker Hub. Then npm. Then Open VSX. FIVE package ecosystems in two weeks. Each breach giving them the keys to the next one.
The payload was three stages: harvest every SSH key, cloud token, Kubernetes secret, crypto wallet, and .env file. Deploy privileged containers across every cluster node. Install a persistent backdoor.
A developer found it ONLY because the malware was so poorly written it crashed his computer. The attacker apparently vibe-coded the payload — it used so much RAM the machine died. He investigated and found LiteLLM had been pulled in through a Cursor MCP plugin he didn't even know he had.
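That's the real lesson of the Cursor MCP angle: you probably don't know everything installed in your environments. The standard library can at least tell you what's there right now. A small audit sketch -- the watchlist contents are your call, `litellm` is just the obvious entry here:

```python
from importlib import metadata

WATCHLIST = {"litellm"}  # package names you want to be alerted about

def find_watched(watchlist=WATCHLIST):
    """Return {name: version} for installed distributions on the watchlist."""
    found = {}
    for dist in metadata.distributions():
        name = (dist.metadata["Name"] or "").lower()
        if name in watchlist:
            found[name] = dist.version
    return found

if __name__ == "__main__":
    hits = find_watched()
    print(hits or "no watchlist packages installed")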
If the code had been cleaner? Nobody notices for weeks. Maybe months.
Now think about what happens NEXT. Millions of credentials were just stolen. Every one of those stolen keys unlocks MORE systems. MORE repositories. MORE CI pipelines. MORE packages. The floodgates are OPEN. Each compromised credential leads to more supply chain breaches, which leads to more stolen credentials, which leads to more breaches. It's a cascade — and we're at the beginning of it.
AI-powered attacks are coming. They won't vibe-code their malware next time. If you're not deploying AI-powered DEFENSES, you're bringing a knife to a gunfight. I have an entire chapter about this in my book REPLACEMENT — coming Q3 2026. You're going to want to read it.
TeamPCP posted on Telegram: "Many of your favourite security tools and open-source projects will be targeted in the months to come. Stay tuned."
If you're running LiteLLM — pin to 1.82.6 IMMEDIATELY. Rotate EVERY credential. And start asking yourself: do you actually know what's in your supply chain?
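Pinning belongs in your requirements file (`litellm==1.82.6`, ideally with pip's hash-checking mode), but you can also fail fast at process start. A sketch of a runtime guard, with the compromised-version list taken from this post:

```python
from importlib import metadata

COMPROMISED = {"1.82.7", "1.82.8"}  # versions named in this incident
SAFE_PIN = "1.82.6"

def is_compromised(version, compromised=COMPROMISED):
    """True if the given version string is on the known-bad list."""
    return version in compromised

def check_litellm():
    """Raise at startup if a known-compromised litellm version is installed."""
    try:
        installed = metadata.version("litellm")
    except metadata.PackageNotFoundError:
        return None  # not installed, nothing to check
    if is_compromised(installed):
        raise RuntimeError(
            f"litellm {installed} is compromised; pin to {SAFE_PIN} and rotate all credentials"
        )
    return installed
```

A guard like this only covers one package and one incident, of course -- the durable fix is hash-pinned dependencies and rotating every credential the compromised versions could have touched.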