In a past project, the customer used Prisma Cloud Compute to scan running containers for known vulnerabilities (this is not an endorsement of this particular software; it is simply the one the customer decided to employ). In theory, it provided a detailed view of the container patch level within the organization. In practice, the end result was usually one of two outcomes:
- Application teams spent a significant amount of time tracking down and patching vulnerabilities, or suppressing them as false positives
- Application teams quickly started to ignore the tool’s output after realizing that many indicated vulnerabilities were false positives or not directly fixable (e.g. no patch available yet)
Irrespective of whether the tool led to an overall improvement in the organization’s security stance, neither of those outcomes looks desirable to me, neither from the point of view of the organization nor from that of the individual contributors. This led me to investigate ways of reducing the amount of toil (and maybe improving other aspects as well).
Many of the vulnerabilities indicated by the security scanner were found in dependencies which got shipped in container images alongside the actual application code. Obviously, when you build and deploy software, you are responsible for keeping an eye on your direct dependencies, which are bundled by your language’s package manager and build process. However, once you package your application in a container image, you are also responsible for everything else inside that image. This includes dependencies that are often of crucial importance to your application, e.g. `glibc` or TLS libraries. But there are usually lots of other binaries and libraries which aren’t used by your application but are nonetheless present in the base image. The majority of images I have seen used for shipping code contained OS utilities like `chmod`, `ps` and `useradd`, or even distribution package managers like `apt`, `yum` or `apk`. Even if these binaries are never explicitly called by your application, they still add to the size of your image (and thus slow down build and deployment times). Since container registry providers (like AWS ECR) usually bill for stored image bytes, you also pay more just to retain larger images. Their presence also increases your attack surface by enabling an intruder to live off the land, or to use security vulnerabilities in these binaries to escalate privileges.
My personal analogy for these extraneous dependencies is a craftsman’s workshop cluttered with unused tools lying around. It’s easy to cut yourself on them or trip over them, and they obscure your view of what’s important for the job at hand. The best way of dealing with clutter is not having clutter in the first place. After the experience with this customer, it became increasingly clear to me that having fewer dependencies in our container images would be beneficial.
Reducing image dependencies
A crucial step towards the reduction of dependencies is utilizing multi-stage builds. This has been accepted as best practice and is broadly used, so I’ll keep it brief. The basic idea is that you have one or more helper stages in your build. These helper stages are containers in which e.g. your software artifact is built, tested and packaged. Here, you can pull in all the tools and development dependencies you need. Once you have assembled your artifact, it is transferred to a final stage. This final stage is often a basic image containing just your execution environment, but not your build tools. Thus, your resulting image is often much smaller (and contains fewer dependencies) than with the alternative of building in one stage. Let’s take this single-stage build of a basic Spring Boot application as an example:
```dockerfile
FROM eclipse-temurin:17-jdk-jammy
RUN apt-get update \
 && apt-get install --no-install-recommends -y git \
 && git clone https://github.com/spring-guides/gs-spring-boot.git /tmp/git \
 && cd /tmp/git/initial \
 && ./mvnw package \
 && mv target/*.jar /opt/app.jar \
 && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/* /root/.m2
ENTRYPOINT ["java", "-jar", "/opt/app.jar"]
```
As a multi-stage build, this might be formulated thus:
```dockerfile
# Build using the full JDK image (which contains the Java compiler)
FROM eclipse-temurin:17-jdk-jammy as builder
RUN apt-get update \
 && apt-get install --no-install-recommends -y git \
 && git clone https://github.com/spring-guides/gs-spring-boot.git /tmp/git \
 && cd /tmp/git/initial \
 && ./mvnw package \
 && mv target/*.jar /opt/app.jar
# we don't necessarily need to clean up here, as the layer will be discarded

# Deploy only the JRE image
FROM eclipse-temurin:17-jre-jammy
COPY --from=builder /opt/app.jar /opt/
ENTRYPOINT ["java", "-jar", "/opt/app.jar"]
```
Let’s compare these solutions. Obviously, both result in a viable jar and execute the application. But the resulting image size significantly differs:
```shell
# image size
docker images | grep -E 'TAG|example'
# (some columns omitted for clarity)
REPOSITORY   TAG            SIZE
example      multi-stage    275MB
example      single-stage   490MB

# installed packages
function package_count() {
  docker run --rm -it --entrypoint grep "$1" --count '^Package: ' /var/lib/dpkg/status
}
package_count example:multi-stage
126
package_count example:single-stage
141
```
This is obviously an improvement, but there is still a lot of potential here. After all, the final image still contains 126 Debian packages, including the package manager itself. Once our image is deployed, we will likely never need the package manager again. But it’s not like you can simply uninstall the package manager or the shell from a distro base image. There is, however, a way.
Enter distroless
This is where distroless comes into play. The term is slightly misleading, as it’s not really about having an image without a Linux distribution inside it (as when using `FROM scratch`). The concept derives its name from a Google project. The goal of distroless container images is simply to reduce the number of dependencies usually introduced by “full” distribution base images. So in a sense, distroless images contain less distro, but they’re not distro-less. The most basic distroless image (`gcr.io/distroless/static`) contains:
- `ca-certificates`
- A `/etc/passwd` entry for a root user
- A `/tmp` directory
- `tzdata`
Notice that even libraries like `glibc` are absent. This would be a suitable base image for a statically linked binary (e.g. built from Go). Even for statically linked binaries, you usually need an image containing the above elements for them to work: CA certificates are necessary if you want to communicate with HTTPS endpoints, `/etc/passwd` is required if you want to run your application as a non-root user, application frameworks often implicitly assume the existence of a `/tmp` directory, and `tzdata` is necessary to handle timezones correctly in your application. There’s also `gcr.io/distroless/base`, which adds `glibc` and `libssl`. And then there are dedicated images for popular language runtimes (Java, Node and Python), which are built on top of the `base` image.
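To illustrate, here is a minimal sketch of how `gcr.io/distroless/static` might serve as the final stage for a statically linked Go binary. The module layout and binary name are hypothetical; the key detail is `CGO_ENABLED=0`, which keeps the binary from linking against `glibc`:

```dockerfile
# Build stage: compile a fully static Go binary
# (source layout and binary name are placeholders)
FROM golang:1.21 AS builder
WORKDIR /src
COPY . .
# CGO_ENABLED=0 disables cgo, so the result does not depend on glibc
RUN CGO_ENABLED=0 go build -o /out/app .

# Final stage: just the binary plus ca-certificates, /etc/passwd, /tmp and tzdata
FROM gcr.io/distroless/static:nonroot
COPY --from=builder /out/app /app
ENTRYPOINT ["/app"]
```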
How would you use this?
Let’s take the `gcr.io/distroless/java17-debian12` image as an example. Since this image only contains the Java runtime, it is not suitable for compiling jars. It doesn’t contain the compiler, let alone any build tool like `mvn` or `gradle`. It is, however, entirely suitable for use as the final stage of a multi-stage build:
```dockerfile
FROM eclipse-temurin:17-jdk-jammy as builder
RUN apt-get update \
 && apt-get install --no-install-recommends -y git \
 && git clone https://github.com/spring-guides/gs-spring-boot.git /tmp/git \
 && cd /tmp/git/initial \
 && ./mvnw package \
 && mv target/*.jar /opt/app.jar
# we don't necessarily need to clean up here, as the layer will be discarded

FROM gcr.io/distroless/java17-debian12:nonroot
COPY --from=builder /opt/app.jar /opt/
ENTRYPOINT ["java", "-jar", "/opt/app.jar"]
```
Let’s compare the result of building this to the previous examples:
```shell
# image size
docker images | grep -E 'TAG|example'
# (some columns omitted for clarity)
REPOSITORY   TAG            SIZE
example      multi-stage    275MB
example      single-stage   490MB
example      distroless     245MB

# installed packages
# we don't even have ls and grep in the container, so we need
# to extract the file system

# create a container from the distroless image
docker create --name distroless example:distroless
# export the file system to a tar file
docker export distroless -o distroless.tar
tar --extract --file distroless.tar
# look at the contents of /var/lib/dpkg/status.d, which contains two files
# for each installed package (metadata file and a hash file with suffix .md5sums)
ls var/lib/dpkg/status.d | fgrep --count --invert-match .md5sums
23
```
When comparing the sizes, we can see that we have less overhead to ship in this image than before. This likely means faster pulls and pushes (in the absence of layer caching), and a somewhat reduced attack surface. And if you are running container/image scanning tools in your environment, they will have less to complain about: we only have 23 packages installed, as opposed to 126 and 141, respectively. This means less time dealing with false alarms, and more time focusing on your actual business. You also get an increased likelihood that if the scanning tool indicates a vulnerability in your image, it’s something you actually should address. Alert fatigue is something we all want to avoid.
Speaking of vulnerabilities, I did a quick local scan using the image security scanner trivy. Caveats: this result is just a brief snapshot of the state on the given day I ran this test. Also, the `eclipse-temurin` images are based on Ubuntu, while the `distroless` images are based on Debian, which muddles the outcome a bit. Therefore, I also included a `debian:12-slim` image in the results below, just to give further context.
```shell
function vulns() {
  trivy image --scanners vuln --format json --quiet "$1" \
    | jq -r '.Results[] | (.Vulnerabilities // []) | .[] | "\(.Severity)"' \
    | sort | uniq --count
}
vulns example:single-stage
     71 LOW
      9 MEDIUM
vulns example:multi-stage
     39 LOW
      9 MEDIUM
vulns example:distroless
      1 CRITICAL
      2 HIGH
     14 LOW
      2 MEDIUM
vulns debian:12-slim
      1 CRITICAL
      5 HIGH
     57 LOW
     15 MEDIUM
```
On this given day, the `distroless` image indeed contained a critical vulnerability in `zlib1g` (marked WONTFIX by Debian) and two high vulnerabilities. These came from the upstream Debian 12 distribution, which is why the Ubuntu-based `eclipse-temurin` image didn’t contain them. Apart from this, we can still see some overall good trends:

- we have far fewer vulnerabilities listed than the `debian:12-slim` image in every severity apart from the `zlib1g` one
- we have far fewer overall findings to deal with than with both the single-stage and multi-stage images
Drawbacks of distroless images
Despite the potential benefits of using distroless images, there are also some potentially detracting factors. It can certainly be argued that the additional attack surface from binaries in your image isn’t all that large, especially when compared to that of the dependencies included in your regular code path. And it is also true that a reduction in base image size might not always significantly speed up your container pulls (and hence deployments). If your organization utilizes the same base image across the board, the base image layers are likely already present on the execution host and therefore don’t need to be pulled during deployments.
Also, given their stripped-down nature, distroless images are usually pretty impractical for CI jobs. For these, I personally still use small, but still full-fledged images tailored to my needs.
Even though I often utilize distroless images, I have encountered two further factors which might make distroless inconvenient. Firstly, the lack of a shell and common tools in the image impedes debugging. Secondly, Google’s distroless images are hard to customize due to their lack of a package manager.
Debugging distroless containers
It should be noted that `exec`ing into your production containers is not a good habit to have, and that it might even be prohibited by your execution environment (due to restrictive Kubernetes access control, for example). If you want to run a shell session in your container, you could still use the `gcr.io/distroless` images, but utilize the `debug` tags, such as `gcr.io/distroless/java17-debian12:debug` or `gcr.io/distroless/java17-debian12:debug-nonroot`. They contain a `busybox` shell for you to connect to (while at the same time reducing the usefulness of going distroless, of course).
If you’re running your containers on Kubernetes, you might rather employ ephemeral debug containers. This way, you can spin up a fat image with your debugging tools whenever you need it. This container then shares its process namespace with your debugging target. As a result, you can see the processes running inside your distroless production container. And because you have a session inside your debugging container, you also have access to all tools in its image.
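As a sketch, attaching such an ephemeral debug container with `kubectl debug` might look like this. The pod and container names are placeholders, and the debugging image is just one possible choice:

```shell
# attach an ephemeral container to the running pod "my-app-pod",
# sharing the process namespace of the target container so its
# processes are visible from the debug session
kubectl debug -it my-app-pod \
  --image=busybox \
  --target=my-app-container \
  -- sh
```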
Customizing distroless base images
Sometimes, your application might need additional libraries which are not present in the distroless images. Maybe you need `ffmpeg` in your final image, for example. However, it’s not like you can just `apt install` additional dependencies during your image build, because you lack the package manager. There are a few options to solve this, like customizing and building the images yourself (using Bazel), or unpacking and copying the contents of the necessary Debian packages and their dependencies into the final stage (which obviates most benefits of going distroless, since this will again pull in most OS dependencies which would be present in a regular base image).
A pretty recent contender in this field which promises improvements here is Chainguard. Chainguard provides (and sells) distroless images based on their own Linux distribution, Wolfi. These images are built using the open-source tools apko and melange, which make it very easy to build your own customized distroless images. Images built using apko are limited to either Alpine or the aforementioned Wolfi as an upstream distro, though. So you either have to use a musl-based distro (musl is an alternative to glibc and has its share of quirks), or rely on a pretty novel distro. Still, I think this is a great step in the right direction, and I’m looking forward to developments in this area.
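For a rough idea of what this looks like, here is a sketch of an apko configuration for a Wolfi-based image containing `ffmpeg`. The package names and entrypoint are assumptions for illustration:

```yaml
# apko.yaml -- a minimal sketch, built with e.g.:
#   apko build apko.yaml my-ffmpeg-image:latest image.tar
contents:
  repositories:
    - https://packages.wolfi.dev/os
  keyring:
    - https://packages.wolfi.dev/os/wolfi-signing.rsa.pub
  packages:
    - ca-certificates-bundle
    - ffmpeg
accounts:
  run-as: 65532
entrypoint:
  command: /usr/bin/ffmpeg
```

apko resolves the listed packages and their dependencies declaratively at build time, so there is no package manager left inside the resulting image.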