How To Install APKs in Distroless Chainguard Container Images
Overview of installing APK packages in distroless variants of Chainguard Containers using chroot.
This page documents workflows for installing APK packages in distroless variants of Chainguard container images, such as most builds tagged :latest. We copy a filesystem from a distroless container image to a build image, install APKs to it using chroot, then copy the modified filesystem back to the distroless image in the final step.
The distroless variants of Chainguard Containers do not contain shells or package managers by design. This reduces attack surface and exploitability for these images. In cases where additional packages are required, we typically recommend the following:
Here are three options you may be considering, and our thoughts on each, before we focus solely on chroot:

1. Use a :latest-dev variant in production. These Chainguard Containers are also low-to-zero CVE and are considered production-ready. However, we understand that there are specific use cases that require installation of system-level APK packages in distroless variants, such as maintaining internal build environments that require application- or team-based packaging. In these cases, you may wish to implement the approach described in this document.

2. Directly copy individual APK files to a distroless image. If you’re familiar with the package or packages that need to be installed, this may work. However, files copied in manually are not registered in the APK package database, so scanners cannot see the installed packages.

3. Install APKs using chroot during a multi-stage build. This approach registers installed packages correctly, making them visible to scanners, which is one of the key reasons we recommend it. It is the workflow documented in the rest of this guide.
Note: If you’re working to a deadline, you can skip to the short appendix at the end of this page, which contains the full code example.
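As noted above, one advantage of the chroot approach is that packages end up registered in APK's installed database, which is what scanners read. The following sketch shows the shape of that database using a made-up stanza; the package versions and the /tmp path are placeholders, and in a real image the file typically lives at /lib/apk/db/installed:

```shell
# Toy stanza in the APK installed-database format (names/versions are
# placeholders; the real file is /lib/apk/db/installed inside the image).
cat > /tmp/installed <<'EOF'
P:mariadb
V:11.4.0-r0

P:mariadb-connector-c-dev
V:3.4.0-r0
EOF
# A scanner discovers package names from the P: fields:
sed -n 's/^P://p' /tmp/installed
```

This prints the two package names, which is the information copied APK files alone would not provide.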
This workflow allows you to install APKs to a distroless image during a Dockerfile build by generating required software artifacts in a development environment, preparing a directory structure with runtime dependencies installed using chroot, then assembling these components back in the distroless image. The steps are as follows:
In short, we pull a distroless image, replicate its file structure as a folder on a build image (a :latest-dev variant with the desired environment for building your software artifacts), install APKs to that file structure, build our artifacts on the build image, then put the customized file structure and artifacts into the distroless image.
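Before diving into the full Dockerfile, here is a toy shell sketch of that data flow using plain directories, with no docker or apk required; all paths under /tmp are hypothetical stand-ins for the build stages:

```shell
set -e
# Stand-ins for the three stages: base (distroless), chroot staging, final image
mkdir -p /tmp/demo/base/etc /tmp/demo/chroot /tmp/demo/final

echo "distroless-file" > /tmp/demo/base/etc/motd   # contents of the base image
cp -a /tmp/demo/base/. /tmp/demo/chroot/           # like: COPY --from=base / /base-chroot
echo "runtime-apk" > /tmp/demo/chroot/etc/extra    # like: apk add --root /base-chroot ...
cp -a /tmp/demo/chroot/. /tmp/demo/final/          # like: COPY --from=build /base-chroot /

cat /tmp/demo/final/etc/motd /tmp/demo/final/etc/extra
```

The final directory contains both the original base contents and the additions made in the staging copy, which is exactly what the multi-stage build achieves with image layers.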
In this example, we prepare a virtual environment that will support the Python module for MariaDB, create a customized distroless file structure with needed APKs, and assemble these components in a distroless image. Creating this virtual environment (installing from pip) requires specific APKs be available at both install time and runtime.
Note that, in this example, the same APKs are installed at install and runtime. In some cases, the packages required at install and runtime will not be the same. Many or most packages in interpreted language ecosystems like Python will only require system-level APKs to be present at runtime, and in these cases you can leave out adding APKs before installation. In the example case, system-level packages are required to install MariaDB using pip as well as at runtime.
Let’s get started.
Create a run.py file that will import the _mysql object from the MySQLdb module and print the version:

from MySQLdb import _mysql
print(_mysql.__version__)

Create a requirements.txt file with mysqlclient as the only listed Python dependency:

mysqlclient

Then create the following Dockerfile:

# syntax=docker/dockerfile:1
FROM cgr.dev/chainguard/python:latest AS base
FROM cgr.dev/chainguard/python:latest-dev AS build
WORKDIR /app
USER root
RUN apk add --no-cache mariadb-connector-c-dev mariadb
USER 65532
RUN python -m venv venv
ENV PATH="/app/venv/bin":$PATH
COPY requirements.txt /app/
RUN pip install --no-cache-dir -r /app/requirements.txt
USER root
COPY --from=base / /base-chroot
RUN mkdir -p /base-chroot
RUN apk add --no-cache --root /base-chroot mariadb-connector-c-dev mariadb
FROM cgr.dev/chainguard/python:latest
# Copy over the APKs prepped at the end of the build stage (no apk add in this image)
COPY --link --from=build /base-chroot /
WORKDIR /app
COPY --from=build /app/venv /app/venv
COPY run.py run.py
ENV PATH="/app/venv/bin:$PATH"
ENTRYPOINT ["python", "run.py"]

First, we pull the distroless image we wish to customize. This image will be used both as a reference filesystem we will customize with our chosen APKs and later as the base image for our final assembly.
FROM cgr.dev/chainguard/python:latest AS base
Next, we pull our build image. The following steps will run in this environment.
FROM cgr.dev/chainguard/python:latest-dev AS build
Create a working directory:
WORKDIR /app
Now let’s add dependencies needed to install mysqlclient using pip. For many libraries, dependencies are needed only for runtime, so installing APKs at this stage is not needed. Note that we need root access to install APKs.
USER root
RUN apk add --no-cache mariadb-connector-c-dev mariadb
We now create a virtual environment, give this environment precedence on the path, and install Python dependencies, in this case only mysqlclient. Installation by pip takes place as a nonadministrative user.
USER 65532
RUN python -m venv venv
ENV PATH="/app/venv/bin":$PATH
COPY requirements.txt /app/
RUN pip install --no-cache-dir -r /app/requirements.txt
We have now created the software artifact that will be copied into our final distroless image, in this case a Python virtual environment set up to run MariaDB. Now we will create a customized file structure that uses the filesystem from our distroless image as a base and includes required additional APKs needed during runtime.
First, copy the full contents of the distroless image we pulled at the beginning (labeled “base”) to a folder on our build image:
USER root
COPY --from=base / /base-chroot
RUN mkdir -p /base-chroot
Now install APKs to this copied folder using chroot:
RUN apk add --no-cache --root /base-chroot mariadb-connector-c-dev mariadb
We’ve now prepared two components on our build image: the required software artifacts (a Python virtual environment) and a file structure customized with installed APKs needed for runtime. We will now assemble these components in the distroless image.
Switch to the distroless image:
FROM cgr.dev/chainguard/python:latest
Copy the customized folder structure we created on the build image, replacing the root of our distroless image:
COPY --link --from=build /base-chroot /
Note here that we have chosen to use the --link flag. This adds an independent layer with the copied files and replaces the filesystem without reference to existing files, resulting in a more complete replacement. However, use of this flag can increase image size, so you may wish to experiment with disabling this flag in your build.
Next, we copy our virtual environment and run script:
WORKDIR /app
COPY --from=build /app/venv /app/venv
COPY run.py run.py
Set the path so that the virtual environment has precedence:
ENV PATH="/app/venv/bin:$PATH"
Finally, set the entrypoint:
ENTRYPOINT ["python", "run.py"]
From a directory containing the Dockerfile, run.py, and requirements.txt, build the image:

docker build . --pull --no-cache -t mariadb-distroless
Here, --pull ensures we receive the latest images, even if one is locally stored, and --no-cache ensures that we receive the latest versions of packages.
Then run the image:

docker run mariadb-distroless
The output should be the version of the MySQLdb module (from the output of run.py).
In this second example, we compile a C binary that depends on libcurl and run it in a minimal image assembled from scratch. The work is done in our Dockerfile, which is presented in full here and then annotated below.
FROM cgr.dev/chainguard/glibc-dynamic:latest AS base
FROM cgr.dev/chainguard/gcc-glibc:latest-dev AS build
COPY <<EOF ./test.c
#include <stdio.h>
#include <curl/curl.h>
int main(){
printf("%s\\n", curl_version());
return 0;
}
EOF
RUN apk add pc:libcurl
RUN gcc test.c `pkg-config --cflags --libs libcurl` -o dynamic-binary
COPY --from=base / /base-chroot
RUN apk add --root /base-chroot so:libcurl.so.4
FROM scratch AS custom-production-image
COPY --from=build /base-chroot /
COPY --from=build /work/dynamic-binary /usr/bin/dynamic-binary
ENTRYPOINT ["/usr/bin/dynamic-binary"]

In this Dockerfile, we start by pulling a reference image as similar as possible to our desired runtime environment. In this case, we pull the distroless glibc-dynamic Chainguard Image suitable for running compiled C binaries.
FROM cgr.dev/chainguard/glibc-dynamic:latest AS base
Next, pull a build image with shell, APK, and needed toolchains for binary compilation or other build process:
FROM cgr.dev/chainguard/gcc-glibc:latest-dev AS build
Copy source file(s) for our desired artifact. This example will use a here document with a short example depending on libcurl.
COPY <<EOF ./test.c
#include <stdio.h>
#include <curl/curl.h>
int main(){
printf("%s\\n", curl_version());
return 0;
}
EOF
Install any needed build-time dependencies using APK:
RUN apk add pc:libcurl
Build the binary:
RUN gcc test.c `pkg-config --cflags --libs libcurl` -o dynamic-binary
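Before installing runtime APKs into the chroot, it can help to confirm which shared libraries the binary actually needs; ldd lists them, and names like libcurl.so.4 map directly onto apk's so: dependency syntax. A sketch follows; since dynamic-binary exists only inside the build stage, we inspect /bin/ls as a stand-in, and exact library names will vary by system:

```shell
# List shared-library dependencies of a dynamically linked binary.
# Inside the build stage you would run: ldd dynamic-binary
ldd /bin/ls
```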
We now have the binary we’ll run in the final image. Next, we take steps to add runtime dependency APKs. We’ll first copy the entire filesystem of our reference distroless runtime image (here labeled “base”) pulled above to a directory on our build image. We will then install APKs to this directory. In the final step, this directory will be used as the filesystem in our assembled output image.
Copy the filesystem of our reference image (“base”) to a directory on our build image:
COPY --from=base / /base-chroot
Install APKs to the copied folder using chroot:
RUN apk add --root /base-chroot so:libcurl.so.4
We now have our needed components: the compiled software artifact (in this case, a binary depending on libcurl) and a directory structure customized with runtime dependencies. We will now assemble these components in scratch.
First, pull scratch:
FROM scratch AS custom-production-image
Copy the customized file structure to root on the scratch image (“custom-production-image”):
COPY --from=build /base-chroot /
Copy the binary:
COPY --from=build /work/dynamic-binary /usr/bin/dynamic-binary
In the last line of the Dockerfile, set the entrypoint:
ENTRYPOINT ["/usr/bin/dynamic-binary"]
We will now build the image from this Dockerfile:
docker build . --pull --no-cache -t dynamic-binary
Finally, run our image:
docker run dynamic-binary
You should see output similar to the following:
libcurl/8.12.1 OpenSSL/3.4.1 zlib/1.3.1 brotli/1.1.0 libpsl/0.21.5 nghttp2/1.64.0 OpenLDAP/2.6.9

You have now created a production image based on a minimal distroless Chainguard Image customized with additional runtime dependencies as installed APKs.
Appendix: full code example

Dockerfile:
# syntax=docker/dockerfile:1
FROM cgr.dev/chainguard/python:latest AS base
FROM cgr.dev/chainguard/python:latest-dev AS build
WORKDIR /app
USER root
RUN apk add --no-cache mariadb-connector-c-dev mariadb
USER 65532
RUN python -m venv venv
ENV PATH="/app/venv/bin":$PATH
COPY requirements.txt /app/
RUN pip install --no-cache-dir -r /app/requirements.txt
USER root
COPY --from=base / /base-chroot
RUN mkdir -p /base-chroot
RUN apk add --no-cache --root /base-chroot mariadb-connector-c-dev mariadb
FROM cgr.dev/chainguard/python:latest
# Copy over the APKs prepped at the end of the build stage (no apk add in this image)
COPY --link --from=build /base-chroot /
WORKDIR /app
COPY --from=build /app/venv /app/venv
COPY run.py run.py
ENV PATH="/app/venv/bin:$PATH"
ENTRYPOINT ["python", "run.py"]

run.py:
from MySQLdb import _mysql
print(_mysql.__version__)

requirements.txt:
mysqlclient

Annotated Dockerfile:
# SPDX-FileCopyrightText: 2025 Chainguard, Inc.
# SPDX-License-Identifier: Apache-2.0
# Pull unmodified base image
FROM cgr.dev/chainguard/glibc-dynamic:latest AS base
# Pull build container with toolchains and apk
FROM cgr.dev/chainguard/gcc-glibc:latest-dev AS build
# Get workload source code
COPY <<EOF ./test.c
#include <stdio.h>
#include <curl/curl.h>
int main(){
printf("%s\\n", curl_version());
return 0;
}
EOF
# Install build-time dependency
RUN apk add pc:libcurl
# Compile workload code
RUN gcc test.c `pkg-config --cflags --libs libcurl` -o dynamic-binary
# Copy base image contents to a subfolder
COPY --from=base / /base-chroot
# Customize base image chroot
RUN apk add --root /base-chroot so:libcurl.so.4
# Create customized production image from scratch
FROM scratch AS custom-production-image
# Copy customized base image
COPY --from=build /base-chroot /
# Copy workload binary
COPY --from=build /work/dynamic-binary /usr/bin/dynamic-binary
ENTRYPOINT ["/usr/bin/dynamic-binary"]

Build the image:
docker build . --pull --no-cache -t dynamic-binary
Run the image:
docker run dynamic-binary
If the build was successful, you should see version information from libcurl as output.
Last updated: 2026-04-21 00:00