# AI with Hardened Container Images

URL: https://edu.chainguard.dev/software-security/learning-labs/ll202507.md
Last Modified: July 25, 2025
Tags: Learning Labs, Chainguard Containers, AI

Learning Lab for July 2025 on securing AI workloads with hardened container images

The July 2025 Learning Lab with Patrick Smyth covers AI with Hardened Container Images. In this session, learn how to secure AI workloads by reducing vulnerabilities in container images by over 90%. Patrick demonstrates hands-on techniques for training an animal detection model using PyTorch with hardened container images, creating minimal and secure deployments, and running AI frameworks with zero CVEs.
## Sections

- 0:00 Introduction and updates
- 2:02 Preparation: Docker pull instructions for demo
- 3:39 Chainguard! Who are we?
- 4:34 CVE system fundamentals
- 6:48 "Boss assigned me to fix Ubuntu" problem
- 7:41 Introduction to Chainguard Containers
- 8:54 Zero CVE containers: Real results and comparisons
- 11:10 How we achieve zero CVEs: Minimal, Fresh, Advisory, Patch
- 13:24 AI container challenges: Size and complexity
- 14:59 PyTorch container analysis: CVEs, packages, and executables
- 16:21 Demo introduction: Image classification with PyTorch
- 17:59 Demo walkthrough and repository overview
- 19:28 Demo: Running the training command
- 22:01 Demo: Downloading test image and running inference
- 23:20 Recent developments in Chainguard AI containers
- 25:09 Other AI containers: TensorFlow, KServe, Triton backends
- 26:46 Q&A
- 35:18 Chainguard AI course and additional resources

## Demo

In the demo, Patrick trains and runs inference on an image classification model using PyTorch and Chainguard's hardened container image. The model classifies images of octopuses, whales, and penguins, demonstrating how to work with AI workloads securely.
Demo Repository: PyTorch Getting Started
### Training the Model

First, create a directory for the project and download the necessary files:

```sh
mkdir -p ~/image_classification && cd ~/image_classification && \
  curl https://codeload.github.com/chainguard-dev/pytorch-getting-started/tar.gz/main | \
  tar -xz --strip=1 pytorch-getting-started-main/
```

Then run the training script inside a Chainguard PyTorch container:
```sh
docker run --user root --rm -it \
  --platform linux/amd64 \
  -v "$PWD/:/home/nonroot/octopus-detector" \
  cgr.dev/chainguard/pytorch:latest \
  "/home/nonroot/octopus-detector/image_classification.py"
```

This command generates a model file named `octopus_whale_penguin_model.pt`.
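The actual training code lives in the repository's `image_classification.py`; its internals aren't shown in this session, but a script of this kind typically defines a small convolutional network, runs a training loop, and saves the weights. The following is a minimal sketch under that assumption — `TinyClassifier`, the dummy random-tensor data, and all hyperparameters are illustrative, not the demo's real code:

```python
# Hypothetical sketch of a training script along the lines of
# image_classification.py: a tiny CNN for three classes
# (octopus, whale, penguin), trained here on random stand-in
# tensors and saved as a .pt file, as in the demo.
import torch
import torch.nn as nn

CLASSES = ["octopus", "whale", "penguin"]

class TinyClassifier(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),  # pool to 1x1, so any input size works
        )
        self.head = nn.Linear(8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

def train(steps: int = 20) -> nn.Module:
    model = TinyClassifier()
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        images = torch.randn(4, 3, 64, 64)            # stand-in for real photos
        labels = torch.randint(0, len(CLASSES), (4,))
        opt.zero_grad()
        loss_fn(model(images), labels).backward()
        opt.step()
    torch.save(model.state_dict(), "octopus_whale_penguin_model.pt")
    return model

if __name__ == "__main__":
    train()
```

Because the container mounts the project directory at `/home/nonroot/octopus-detector`, the saved `.pt` file lands on the host alongside the script.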
### Running Inference

To test the trained model, first download a test image:

```sh
curl https://raw.githubusercontent.com/chainguard-dev/pytorch-getting-started/main/inference-images/octopus.jpg > ~/image_classification/octopus.jpg
```

Then run the classification:
```sh
cd ~/image_classification && \
  docker run --user root --rm -it \
  --platform linux/amd64 \
  -v "$PWD:/home/nonroot/octopus-detector" \
  cgr.dev/chainguard/pytorch:latest \
  "/home/nonroot/octopus-detector/image_classification.py" \
  "/home/nonroot/octopus-detector/octopus.jpg"
```

The demo showcases how Chainguard's hardened PyTorch image provides the same functionality as traditional images while eliminating vulnerabilities and reducing the attack surface.
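Inference follows the usual PyTorch pattern: rebuild the model architecture, load the saved weights, and pick the highest-scoring class. Here is a hedged sketch of that path — the `TinyClassifier` architecture and `classify` helper are hypothetical stand-ins mirroring the training sketch above, not the repository's actual code:

```python
# Hypothetical inference sketch: load saved weights into the same
# architecture that produced them, then report the top class.
import torch
import torch.nn as nn

CLASSES = ["octopus", "whale", "penguin"]

class TinyClassifier(nn.Module):
    # must match the architecture that wrote the .pt file
    def __init__(self, num_classes: int = 3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(8, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(x).flatten(1))

def classify(model: nn.Module, image: torch.Tensor) -> str:
    """Return the predicted class name for a (3, H, W) image tensor."""
    model.eval()
    with torch.no_grad():                       # no gradients needed at inference
        logits = model(image.unsqueeze(0))      # add batch dimension
    return CLASSES[logits.argmax(dim=1).item()]

if __name__ == "__main__":
    model = TinyClassifier()
    # In the demo this would load the trained weights:
    # model.load_state_dict(torch.load("octopus_whale_penguin_model.pt"))
    print(classify(model, torch.rand(3, 64, 64)))
```

In the real demo, the image tensor would come from decoding `octopus.jpg` (e.g. with torchvision transforms) rather than `torch.rand`.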
## Resources

- Slide deck
- Demo repository
- Chainguard AI/ML Supply Chain Security Course
- Getting Started with the PyTorch Chainguard Container
- PyTorch Container Overview
- Beyond Zero: Eliminating Vulnerabilities in PyTorch Container Images (PyTorch 2024)
