Supply chain attacks like Shai-Hulud exploit trust in package managers to automate credential theft and malware propagation. Understanding these evolving tactics and adopting OIDC-based trusted publishing is critical for protecting organizational secrets and downstream users.
The open source ecosystem continues to face organized, adaptive supply chain threats that spread through compromised credentials and malicious package lifecycle scripts. The most recent example is the multi-wave Shai-Hulud campaign.
While individual incidents differ in their mechanics and speed, the pattern is consistent: Adversaries learn quickly, target maintainer workflows, and exploit trust boundaries in publication pipelines.
This post distills durable lessons and actions to help maintainers and organizations harden their systems and prepare for the next campaign, not just respond to the last one. We also share more about what’s next on the npm security roadmap over the next two quarters.
Shai-Hulud is a coordinated, multi-wave campaign targeting the JavaScript supply chain that has evolved from opportunistic compromises to engineered, targeted attacks.
The first wave focused on abusing compromised maintainer accounts. It injected malicious postinstall scripts to slip code into packages, exfiltrate secrets, and self-replicate, demonstrating how quickly a single foothold can ripple across dependencies.
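As an illustration of the mechanism (not code from the campaign), an npm package declares lifecycle scripts in its package.json; anything named `postinstall` runs automatically during `npm install`, on every machine that installs the package. The script name below is a hypothetical placeholder:

```json
{
  "name": "example-package",
  "version": "1.0.0",
  "scripts": {
    "postinstall": "node ./collect-and-exfiltrate.js"
  }
}
```

Installing with `npm install --ignore-scripts` (or setting `ignore-scripts=true` in `.npmrc`) prevents these hooks from running automatically.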
The second wave, referred to as Shai-Hulud 2.0, escalated the threat. It retained the ability to self-replicate and spread via compromised credentials, but updated it to expose credentials across victims, and it added endpoint command and control via self-hosted runner registration, harvesting of a wider range of secrets to fuel further propagation, and destructive functionality. This wave also focused on CI environments, changing its behavior when it detects it is running in that context and including privilege escalation techniques targeted at certain build agents. It used a multi-stage payload that was harder to detect than the first wave's. The shortened timeline between variants signals an organized adversary studying community defenses and rapidly iterating around them.
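To make the CI-awareness concrete, here is an illustrative sketch (not code extracted from the malware) of how install-time code can detect that it is running in a CI context by probing well-known environment variables. The variable names are common CI conventions, chosen for illustration:

```javascript
// Illustrative only: detecting a CI context via conventional environment
// variables. Real malware uses the same general technique to change its
// behavior between developer laptops and build agents.
function isCiEnvironment(env = process.env) {
  return Boolean(
    env.CI ||             // set by most CI providers
    env.GITHUB_ACTIONS || // set inside GitHub Actions runners
    env.JENKINS_URL       // set inside Jenkins agents
  );
}

console.log(isCiEnvironment({}));                         // false on a developer laptop
console.log(isCiEnvironment({ GITHUB_ACTIONS: "true" })); // true inside Actions
```

Defenders can use the same signal in reverse, for example to enforce stricter install flags (like `--ignore-scripts`) in CI jobs.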
Rather than isolated breaches, the Shai-Hulud campaigns target trust boundaries in maintainer workflows and CI publication pipelines. The defining characteristics we see across waves are credential harvesting and install-time execution.
Recent waves in this pattern reinforce that defenders should harden publication models and credential flows proactively, rather than tailoring mitigations to any single variant.
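One concrete hardening step is OIDC-based trusted publishing, which lets a CI job authenticate to the npm registry with a short-lived, workflow-bound token instead of a long-lived secret that malware can steal. Below is a minimal sketch of a GitHub Actions workflow using it; action versions and names are illustrative, and the package must first be configured for trusted publishing on npmjs.com:

```yaml
name: publish
on:
  release:
    types: [published]

permissions:
  id-token: write   # allows the job to request an OIDC token
  contents: read

jobs:
  publish:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 22
          registry-url: "https://registry.npmjs.org"
      - run: npm install -g npm@latest  # trusted publishing requires a recent npm CLI
      - run: npm ci
      - run: npm publish                # no NODE_AUTH_TOKEN needed with OIDC
```

Because no static publish token exists, there is nothing for install-time malware to scavenge from the environment.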
We’re accelerating our security roadmap to address the evolving threat landscape. Moving forward, our immediate focus is on adding support for:
Together, these investments give maintainers stronger, more flexible tools to secure their packages at every stage of the publication process.
Malware like Shai-Hulud often spreads by adding malicious code to npm packages. The malicious code is executed as part of the installation of the package so that any npm user who installs the package is compromised. The malware scavenges the local system for tokens, which it can then use to continue propagating. Since npm packages often have many dependencies, by adding malware to one package, the attacker can indirectly infect many other packages. And by hoarding some of the scavenged tokens rather than using them immediately, the attacker can launch a new campaign weeks or months after the initial compromise.
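One broadly applicable mitigation (a defense-in-depth setting, not a complete fix) is to tell npm never to run lifecycle scripts automatically, which blocks the install-time execution step this malware relies on. In a project-level or user-level `.npmrc`:

```ini
# .npmrc — do not run preinstall/install/postinstall scripts automatically
ignore-scripts=true
```

Scripts that a project legitimately needs (for example, native module builds) can then be run deliberately, e.g. with `npm rebuild` or an explicit `npm run <script>`, after reviewing what they do.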
In the “References” section below, we have included links to longer articles with analysis of recent campaigns and advice on how to stay secure, so we won’t rehash all of that information here. Instead, here is a short summary of our top recommendations:
Note that the above advice is preventative. If you believe you are a victim of an attack and need help securing your GitHub or npm account, please contact GitHub Support.
The post Strengthening supply chain security: Preparing for the next malware campaign appeared first on The GitHub Blog.