Open Component Model in Production: Building Software Bills of Delivery for Cloud-Native Supply Chains
The Open Component Model (OCM) represents a fundamental shift in how we approach software supply chain security. While most organizations struggle with visibility into their distributed systems’ dependencies, OCM provides an open standard for creating comprehensive Software Bills of Delivery (SBOD) that capture everything from container images to configuration files, signatures, and version constraints across your entire delivery pipeline.
Unlike traditional software bills of materials that focus on source dependencies, OCM tracks the actual artifacts you deliver to production. This distinction matters when you’re managing complex cloud-native applications where the gap between what you build and what you deploy can introduce significant security risks.
What Makes OCM Different
The Open Component Model specification defines OCM as “an open standard to describe software-bill-of-deliveries (SBOD)”: a “technology-agnostic and machine-readable format focused on the software artifacts that must be delivered for software products.”
This focus on delivery artifacts rather than source code dependencies addresses a critical gap in most supply chain security approaches. When you deploy a microservices application, you’re not just shipping your application code – you’re delivering container images, Helm charts, configuration files, certificates, and often third-party components. OCM captures all of these elements with their relationships and provenance.
The model organizes everything around components and component versions. A component represents a logical unit of software (think: a microservice, a library, or an entire application), while component versions represent specific, immutable releases of that component. Each component version contains:
- Resources: The actual artifacts you deliver (container images, binaries, charts)
- Sources: References to source code and build information
- References: Dependencies on other components
- Signatures: Cryptographic proof of authenticity and integrity
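To make the structure above concrete, here is a minimal sketch of a component-constructor file declaring one resource and one source. All names (the `acme.org/demo/app` component, the image reference, the repository URL) are illustrative, not taken from the OCM documentation:

```shell
# Write a minimal, hypothetical component-constructor file.
# Component name, image reference, and repo URL are illustrative.
cat > component-constructor.yaml <<'EOF'
components:
- name: acme.org/demo/app          # logical unit of software
  version: 1.0.0                   # immutable component version
  provider:
    name: acme.org
  resources:                       # the artifacts actually delivered
  - name: image
    type: ociImage
    version: 1.0.0
    access:
      type: ociArtifact
      imageReference: ghcr.io/acme/demo/app:1.0.0
  sources:                         # where the code came from
  - name: repo
    type: git
    version: 1.0.0
    access:
      type: gitHub
      repoUrl: https://github.com/acme/demo-app
      commit: 0123456789abcdef0123456789abcdef01234567
EOF
echo "constructor written"
```

References to other components and signatures would be added alongside `resources` and `sources`; check the component descriptor specification for the exact field names.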
Building Software Bills of Delivery
Creating effective software bills of delivery with OCM starts with the OCM CLI, which provides the primary interface for interacting with OCM elements. The CLI helps you “create component versions and embed them in CI and CD processes.”
To get started with the CLI, you can install it using the official installer:
curl -sfL https://ocm.software/install-cli.sh | bash
The CLI also supports multiple installation methods including Nix. According to the repository documentation, you can use Nix for ad-hoc execution or permanent installation:
# ad-hoc cmd execution
nix run github:open-component-model/ocm -- --help
# install development version
nix profile install github:open-component-model/ocm
Note: The main OCM project is currently marked as “Work In Progress”, with a warning to “expect heavy changes, especially in the Library API.” The team is working on a stable API, so consider this when planning production deployments.
The CLI operates on component descriptors – JSON or YAML files that define your component versions. These descriptors capture not just what you’re delivering, but how it relates to other components and where it came from.
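A descriptor-driven workflow might look like the following sketch. The archive path and component name are illustrative, and the exact command shape should be checked against `ocm --help`:

```shell
# Assemble a component version from a constructor file into a local
# CTF archive, creating the archive if it does not yet exist.
ocm add componentversions --create --file ./transport-archive component-constructor.yaml

# Inspect the resulting descriptor: resources, sources, references.
ocm get componentversion ./transport-archive//acme.org/demo/app:1.0.0 -o yaml
```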
Repository Mappings and Storage
OCM supports multiple storage backends through its repository mapping system. The current implementation supports:
- OCI repositories: Using the repository prefix path of an OCI repository to implement an OCM repository
- CTF (Common Transport Format): File-based binding for representing component versions as filesystem content (directory, tar, tgz)
This flexibility means you can store your software bills of delivery alongside your container images in existing OCI registries, or package them as portable files for air-gapped environments. The OCI mapping is particularly powerful because it leverages existing registry infrastructure while adding OCM’s metadata layer.
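As a sketch of moving between the two storage backends (registry and archive names are illustrative; verify the command syntax against the CLI documentation):

```shell
# CTF archive -> OCI registry: the registry prefix becomes an OCM repository.
ocm transfer componentversions ./transport-archive ghcr.io/acme/ocm-repo

# OCI registry -> fresh CTF archive, e.g. for an air-gapped hand-off.
ocm transfer componentversions \
  ghcr.io/acme/ocm-repo//acme.org/demo/app:1.0.0 ./airgap-archive
```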
Cryptographic Signing and Verification
Supply chain security requires more than just tracking – you need cryptographic proof that artifacts haven’t been tampered with. OCM provides built-in signing and verification capabilities that work across all supported repository implementations.
The signing process captures not just individual artifacts, but the entire component version including its relationships. This means you can verify not only that a container image is authentic, but that its configuration, dependencies, and metadata have not been tampered with either.
OCM’s approach to signing addresses a common problem in cloud-native environments: how do you verify the integrity of complex, multi-artifact deployments? Traditional approaches might sign individual container images, but OCM signs the complete delivery package.
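A minimal signing round-trip might look like this sketch. Key file names, the signature name, and the repository path are illustrative; flag names should be confirmed against the `ocm sign` and `ocm verify` help output:

```shell
# Generate an RSA key pair for signing.
ocm create rsakeypair acme.priv acme.pub

# Sign the whole component version: artifacts plus their relationships.
ocm sign componentversion --signature acme-sig --private-key acme.priv \
  ghcr.io/acme/ocm-repo//acme.org/demo/app:1.0.0

# Verify integrity later, wherever the component version has been transported.
ocm verify componentversion --signature acme-sig --public-key acme.pub \
  ghcr.io/acme/ocm-repo//acme.org/demo/app:1.0.0
```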
Automated Deployment with OCM Controllers
For production deployments, manual CLI operations don’t scale. The OCM Controllers are “designed to enable the automated deployment of software using the Open Component Model and Flux.”
The OCM K8s Toolkit provides a Kubernetes operator that deploys OCM resources into your cluster. You can install it using the provided Helm chart:
helm install ocm-k8s-toolkit oci://ghcr.io/open-component-model/kubernetes/controller/chart \
--namespace ocm-k8s-toolkit-system \
--create-namespace
The controllers integrate with GitOps workflows, particularly when combined with FluxCD. This integration enables deploying Helm charts or Kustomizations from OCM resources while maintaining full traceability from source to deployment.
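A cluster-side setup might look roughly like the following hypothetical manifests. The API group/version, kinds, and spec fields shown here are assumptions for illustration only; check the toolkit's API reference for the actual CRD schema:

```shell
# Hypothetical manifests wiring an OCM repository and component into a cluster.
# apiVersion, kinds, and field names are assumed, not verified.
cat > ocm-demo.yaml <<'EOF'
apiVersion: delivery.ocm.software/v1alpha1   # assumed API group/version
kind: Repository
metadata:
  name: acme-repo
  namespace: ocm-demo
spec:
  repositorySpec:
    baseUrl: ghcr.io/acme/ocm-repo
    type: OCIRegistry
---
apiVersion: delivery.ocm.software/v1alpha1
kind: Component
metadata:
  name: demo-app
  namespace: ocm-demo
spec:
  repositoryRef:
    name: acme-repo
  component: acme.org/demo/app
  semver: ">=1.0.0"                          # track new versions automatically
EOF
# kubectl apply -f ocm-demo.yaml
echo "manifests written"
```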
Cross-Environment Transport
One of OCM’s most powerful features is its ability to transport component versions across different environments while maintaining integrity and traceability. This capability is essential for organizations that need to move software between development, staging, and production environments, or across different cloud providers.
The transport mechanism works at the component version level, meaning you can move entire applications with all their dependencies and metadata intact. This includes scenarios like:
- Promoting applications from staging to production
- Deploying to air-gapped environments
- Moving workloads between cloud providers
- Disaster recovery scenarios
OCM’s transport preserves signatures and verification chains, so you can prove that what you’re deploying in production is exactly what was tested and approved in your staging environment.
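A promotion from staging to production could be sketched as follows. The registry hostnames, signature name, and key path are illustrative, and the `--copy-resources` flag (transport by value, copying artifacts into the target rather than referencing them) should be confirmed against the CLI documentation:

```shell
# Promote by value: resources are copied into the production registry,
# and the signed descriptor travels with them.
ocm transfer componentversion --copy-resources \
  staging.registry.example/ocm//acme.org/demo/app:1.0.0 \
  prod.registry.example/ocm

# Prove that production received exactly what staging approved.
ocm verify componentversion --signature acme-sig --public-key acme.pub \
  prod.registry.example/ocm//acme.org/demo/app:1.0.0
```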
Integration Patterns
OCM’s design philosophy emphasizes integration with existing toolchains rather than replacement. The model provides a common language that different tools can use to exchange information about software artifacts.
For example, your CI pipeline might use OCM to package build artifacts with their metadata, your security scanning tools might add vulnerability information as OCM labels, and your deployment tools might consume OCM component versions to understand what they’re deploying.
This approach allows you to build comprehensive supply chain tracking without replacing your entire toolchain. OCM acts as the integration layer that connects your existing tools with consistent metadata and provenance tracking.
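For instance, a deployment tool could consume the descriptor as structured data rather than parsing registry contents. This sketch assumes `jq` is available and that labels appear under `component.labels` in the descriptor JSON; the repository path is illustrative:

```shell
# Read the component descriptor as JSON and extract tool-added labels,
# e.g. vulnerability-scan results attached earlier in the pipeline.
ocm get componentversion ghcr.io/acme/ocm-repo//acme.org/demo/app:1.0.0 -o json \
  | jq '.component.labels'
```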
Production Considerations
When implementing OCM in production, several factors require careful consideration:
Repository Strategy: Choose between OCI-based storage for integration with existing registries, or CTF for scenarios requiring file-based transport. Many organizations use OCI repositories for active development and CTF for archival or air-gapped deployments.
Signing Infrastructure: Establish clear policies for who can sign component versions and how signing keys are managed. OCM supports both public key and certificate-based verification, allowing integration with existing PKI infrastructure.
Automation Integration: Plan how OCM fits into your existing CI/CD pipelines. The CLI can be embedded in build processes, while the controllers handle deployment automation.
Governance and Compliance: OCM’s comprehensive metadata capture supports compliance requirements, but you need policies defining what information to capture and how to use it for audit purposes.
The Future of Software Supply Chain Security
OCM represents a maturing approach to supply chain security that goes beyond simple dependency scanning. By focusing on delivery artifacts and their relationships, it provides visibility into what actually gets deployed to production environments.
The project’s commitment to open standards and integration with existing tools makes it a practical choice for organizations serious about supply chain security. As cloud-native environments become more complex, having a standardized way to track, sign, and verify entire application deployments becomes essential.
For teams building distributed systems, OCM offers a path to comprehensive supply chain visibility without requiring wholesale changes to existing development and deployment processes. The key is starting with clear goals for what you want to track and verify, then building OCM integration incrementally into your existing workflows.
The combination of standardized metadata, cryptographic verification, and cross-environment transport capabilities positions OCM as a foundational technology for secure software delivery in cloud-native environments.