Software supply chain attacks have become an increasingly common source of risk in modern software systems. To explore mitigation strategies, I developed an experimental methodology and prototype focused on identifying and prioritizing supply chain threats while accounting for differing security requirements across organizational units.
A central premise of this work is that supply chain risk should not be evaluated uniformly across an organization. Different teams and products face distinct constraints, risk tolerances, and potential business impacts. To address this, security policies and risk thresholds are defined using CUE, an open-source constraint language, enabling declarative, fine-grained control over how risk signals are evaluated and enforced per organizational unit.
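As a minimal sketch of what such a per-unit policy could look like (the field names and thresholds below are illustrative, not the prototype's actual schema), two organizational units with different risk tolerances might be declared as follows:

```cue
// Hypothetical schema for a per-unit supply chain risk policy.
#RiskPolicy: {
	unit: string
	// Maximum aggregate risk score tolerated before enforcement kicks in.
	maxRiskScore: number & >=0 & <=100
	// Severities that always fail evaluation, regardless of score.
	blockSeverities: [...("critical" | "high" | "medium" | "low")]
	// Whether unreviewed transitive dependencies are permitted.
	allowUnreviewedTransitive: bool | *false
}

// Two units with different tolerances, both validated against the same schema.
policies: payments: #RiskPolicy & {
	unit:            "payments"
	maxRiskScore:    20
	blockSeverities: ["critical", "high"]
}

policies: internalTools: #RiskPolicy & {
	unit:                      "internal-tools"
	maxRiskScore:              60
	blockSeverities:           ["critical"]
	allowUnreviewedTransitive: true
}
```

Because `#RiskPolicy` is a closed definition, a misspelled field or an out-of-range threshold is rejected at validation time rather than silently ignored.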
The system combines multiple analysis techniques to assess both open-source and proprietary dependencies:
- Static analysis detects known vulnerability patterns and suspicious constructs in dependencies, such as unsafe APIs and injection primitives.
- Dynamic analysis monitors runtime behavior to identify anomalous actions, including unexpected network activity or filesystem access.
- Metadata analysis evaluates dependency provenance, maintainer history, release cadence, and repository signals.
- External intelligence draws on vulnerability databases and community-driven indicators.
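To make the combined output of these components concrete, the sketch below shows how the signals for a single dependency might be represented; the package name, field names, and values are invented for illustration rather than drawn from the prototype.

```cue
// Illustrative aggregated signal record for one dependency (invented values).
findings: "example-pkg@2.4.1": {
	static: {
		unsafeAPIUsages:   2 // e.g. dynamic code evaluation primitives
		injectionPatterns: 0
	}
	dynamic: {
		unexpectedNetworkActivity:    true // outbound connection during install
		filesystemAccessOutsideScope: false
	}
	metadata: {
		maintainerChangedRecently: true
		daysSinceLastRelease:      3
		repositorySigned:          false
	}
	external: {
		knownVulnerabilities: 1
		communityReports:     2
	}
}
```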
Outputs from these components are aggregated and evaluated against CUE-defined constraints rather than fixed global rules. This allows risk scoring and enforcement decisions to be adjusted dynamically based on business context, such as deployment environment, data sensitivity, or regulatory requirements.
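One way this context sensitivity could be expressed, again as a sketch with invented field names and placeholder thresholds, is to make the acceptable risk score a function of the deployment context:

```cue
#Context: {
	environment:     "production" | "staging" | "development"
	dataSensitivity: "high" | "medium" | "low"
}

#Evaluation: {
	ctx:       #Context
	riskScore: number & >=0

	// Tighter ceiling for production workloads handling sensitive data;
	// the numbers are placeholders, not calibrated values.
	_threshold: number
	if ctx.environment == "production" && ctx.dataSensitivity == "high" {
		_threshold: 10
	}
	if !(ctx.environment == "production" && ctx.dataSensitivity == "high") {
		_threshold: 40
	}

	// Unification fails (the policy is violated) unless the score
	// stays at or below the context-dependent threshold.
	riskScore: <=_threshold
}

// Example instance: passes because 8 <= 10.
eval: #Evaluation & {
	ctx: {environment: "production", dataSensitivity: "high"}
	riskScore: 8
}
```

The same finding can then pass in a development context and be blocked in production without any change to the analyzers themselves.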
The current implementation focuses on a limited set of languages and package managers to enable controlled experimentation. Expanding ecosystem support and improving detection of novel or rapidly evolving attacks remain open challenges. Ethical considerations, particularly around runtime monitoring and developer trust, must also be addressed to avoid overly intrusive security controls.
This work builds on existing systems such as OWASP Dependency-Check and Grafeas, but differs in its use of constraint-based policy evaluation and an API-driven architecture designed for integration with CI/CD pipelines and governance systems. Because policies are expressed declaratively in CUE, they can be validated, composed, and updated in real time without redeploying analysis components.
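As one illustration of that composition (again with invented names), an organization-wide baseline can be unified with a per-unit override, and the combined policy validated with standard CUE tooling such as `cue vet` before it is served to the analysis pipeline:

```cue
// baseline.cue -- organization-wide floor that every unit inherits.
#Baseline: {
	blockSeverities: [...string]
	// No unit may raise its ceiling above this organization-wide cap.
	maxRiskScore: number & <=50
}

// payments.cue -- per-unit override, composed with the baseline by unification.
payments: #Baseline & {
	blockSeverities: ["critical", "high"]
	maxRiskScore:    20 // stricter than the cap, so unification succeeds
}
```

Because unification in CUE is order-independent, baseline and override files can be maintained and updated separately without the merged result depending on load order.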
As a possible extension, machine learning and large language models could be incorporated as supplementary analysis signals. Recent research suggests that language models can reason about code security properties and inconsistencies. In this context, they could be used to analyze code comments and documentation for mismatches between stated intent and observed behavior, providing additional input to the constraint evaluation layer.