Who is behind this project?

Zarf was built by the developers at Defense Unicorns and an amazing community of contributors.

Defense Unicorns' mission is to advance freedom and independence globally through Free and Open Source software.

What license is Zarf under?

Zarf is under the Apache License 2.0. This is one of the most commonly used licenses for open-source software.

Is Zarf free to use?

Yes! Zarf is Free and Open-Source Software (FOSS) and will remain free forever. We believe Free and Open Source software changes the world and promotes freedom and security. Anyone who sees value in our tool should be free to use it without fear of vendor lock-in or licensing fees.

Do I have to use Homebrew to install Zarf?

No, the Zarf binary and init package can be downloaded from the Releases Page. Zarf does not need to be installed or available to all users on the system, but it does need to be executable for the current user (i.e. chmod +x zarf for Linux/Mac).
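As a minimal illustration of the executable-bit requirement (using a placeholder file in place of a real downloaded release binary):

```shell
# Placeholder file standing in for a downloaded zarf binary (not a real release asset)
touch ./zarf-demo
# Downloaded files are typically not executable; grant the execute bit for the current user
chmod u+x ./zarf-demo
# Confirm the current user can now execute the file
test -x ./zarf-demo && echo "executable"
```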

What dependencies does Zarf have?

Zarf is written in Go and Rust and statically compiled, so it has no external dependencies. On Linux, Zarf can bring its own Kubernetes cluster with it using K3s. On Mac and Windows, Zarf can leverage any available local or remote cluster the user has access to. Currently, the K3s installation Zarf performs requires a systemd-based system and root access (not just sudo).

What is the Zarf Agent?

The Zarf Agent is a Kubernetes Mutating Webhook that is installed into the cluster during zarf init. The Agent is responsible for rewriting the image fields of Kubernetes PodSpec objects to point to the Zarf Registry. This allows the cluster to pull images from the Zarf Registry instead of the internet without modifying the original image references. The Agent also modifies Flux GitRepository objects to point to the local Git server.
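For illustration, a mutated container spec might look like the following (the registry address shown is an assumption for this sketch; the actual value depends on how the Zarf Registry was initialized):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: podinfo
spec:
  containers:
    - name: podinfo
      # original reference as authored:
      #   ghcr.io/stefanprodan/podinfo:6.1.6
      # after the Agent's mutation it points at the in-cluster Zarf Registry instead
      # (127.0.0.1:31999 is an assumed NodePort address for this example)
      image: 127.0.0.1:31999/stefanprodan/podinfo:6.1.6
```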

Why doesn't the Zarf Agent create secrets it needs in the cluster?

During early discussions and the subsequent decision to use a Mutating Webhook, we decided not to have the Agent create any secrets in the cluster. This avoids granting the Agent more privileges than it needs and prevents collisions with Helm. Today the Agent simply responds to requests to patch PodSpec and GitRepository objects.

The Agent does not need to create any secrets in the cluster. Instead, during zarf init and zarf package deploy, secrets are automatically created via a Helm postrender hook for any namespaces Zarf sees. If you have resources managed by Flux that are not in a namespace managed by Zarf, you can either create the secrets manually or include a manifest to create the namespace in your package and let Zarf create the secrets for you.

How can a Kubernetes resource be excluded from the Zarf Agent?

Resources can be excluded at the namespace or resource level by adding the ignore label.
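For example, a namespace can opt out of Agent mutation with a label like the following (the zarf.dev/agent: ignore key/value is taken from current Zarf documentation; verify it against your Zarf version):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: externally-managed
  labels:
    # tells the Zarf Agent to skip resources in this namespace
    # (label key/value assumed from current Zarf docs)
    zarf.dev/agent: ignore
```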

What happens to resources that exist in the cluster before zarf init?

During the zarf init operation, the Zarf Agent will patch any existing namespaces with the ignore label to prevent the Agent from modifying any resources in that namespace. This is done because there is no way to guarantee the images used by pods in existing namespaces are available in the Zarf Registry.

If you would like to adopt pre-existing resources into a Zarf deployment you can use the --adopt-existing-resources flag on zarf package deploy to adopt those resources into the Helm Releases that Zarf manages (including namespaces). This will add the requisite annotations and labels to those resources and drop the ignore label from any namespaces specified by those resources.


Zarf will refuse to adopt the initial Kubernetes namespaces. It is recommended that you do not deploy resources into the default or kube-* namespaces with Zarf.

Additionally, when adopting resources, ensure that the namespaces you are adopting are dedicated to Zarf, or go back and manually add the ignore label to any non-Zarf-managed resources in those namespaces (and ensure that updates to those resources do not strip that label); otherwise you may see ImagePullBackOff errors.

How can I improve the speed of loading large images from Docker on zarf package create?

Due to some limitations with how Docker provides access to local image layers, zarf package create has to rely on docker save under the hood, which is slow overall and takes a long time to report progress. We experimented with many ways to improve this, but for now recommend leveraging a local Docker registry to speed up the process.

This can be done by running a local registry and pushing the images to it before running zarf package create. This allows zarf package create to pull the images from the local registry instead of from Docker. This can also be combined with component actions and --registry-override to make the process automatic. Given an example image of registry.enterprise.corp/my-giant-image:v2, you could do something like this:

# Create a local registry
docker run -d -p 5000:5000 --restart=always --name registry registry:2

# Run the package create with a tag variable
zarf package create --registry-override registry.enterprise.corp=localhost:5000 --set IMG=my-giant-image:v2
kind: ZarfPackageConfig
metadata:
  name: giant-image-example

components:
  - name: main
    actions:
      # runs during "zarf package create"
      onCreate:
        # runs before the component is created
        before:
          - cmd: 'docker tag registry.enterprise.corp/###ZARF_PKG_TMPL_IMG### localhost:5000/###ZARF_PKG_TMPL_IMG###'
          - cmd: 'docker push localhost:5000/###ZARF_PKG_TMPL_IMG###'
    images:
      - 'registry.enterprise.corp/###ZARF_PKG_TMPL_IMG###'

Can I pull in git repos over protocols other than http(s) on zarf package create?

Under the hood, Zarf uses go-git to perform Git operations, but it can fall back to the git binary on the host and thus supports any of the available Git protocols. All you need to do to use a different protocol is specify the full URL for that particular repo:


In order for the fallback to work correctly, you must have git version 2.14 or later on your PATH.

kind: ZarfPackageConfig
metadata:
  name: repo-schemes-example

components:
  - repos:
      - ssh://
      - file:///home/zarf/workspace/zarf
      - git://

In the airgap, Zarf will rewrite these URLs to match the scheme and host of the provided airgap Git server.


When specifying other schemes in Zarf you must change the consuming side as well since Zarf will add a CRC hash of the URL to the repo name on the airgap side. This is to reduce the chance for collisions between repos with similar names. This means an example Flux GitRepository specification would look like this for the file:// based pull:
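To see why a URL-derived hash disambiguates repos that share a name, here is a rough sketch using the POSIX cksum CRC (illustrative only; Zarf's actual hash implementation and the resulting values will differ):

```shell
# Two repos that share the base name "podinfo" but live at different URLs
url1="https://github.com/stefanprodan/podinfo.git"
url2="file:///home/zarf/workspace/podinfo"

# Derive a checksum from each full URL (cksum stands in for Zarf's real hash)
crc1=$(printf '%s' "$url1" | cksum | awk '{print $1}')
crc2=$(printf '%s' "$url2" | cksum | awk '{print $1}')

# The airgap-side names stay distinct even though the base name collides
echo "podinfo-$crc1"
echo "podinfo-$crc2"
```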

apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: podinfo
  namespace: flux-system
spec:
  interval: 30s
  ref:
    tag: 6.1.6
  url: file:///home/zarf/workspace/podinfo

What is YOLO Mode and why would I use it?

YOLO Mode is a special package metadata designation that can be added to a package prior to zarf package create to allow the package to be installed without the need for a zarf init operation. In most cases this will not be used, but it can be useful for testing or for environments that manage their own registries and Git servers completely outside of Zarf. It can also be used as a way to transition slowly to using Zarf without having to do a full migration.


Typically, you should not deploy a Zarf package in YOLO mode if the cluster has already been initialized with Zarf. This could lead to ImagePullBackOff errors if the resources in the package do not include the ignore label and the images are not already available in the Zarf Registry.
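YOLO mode is enabled in the package definition itself; assuming the standard metadata field, it looks like:

```yaml
kind: ZarfPackageConfig
metadata:
  name: yolo-example
  # allow deploying this package without a zarf-initialized cluster
  yolo: true
```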

What is a skeleton Zarf Package?

A skeleton package is a bare-bones Zarf package definition alongside its associated local files and manifests that has been published to an OCI registry. These packages are intended for use with component composability to provide versioned imports for components that you wish to mix and match or modify with merge-overrides across multiple separate packages.

Skeleton packages have not yet been run through the zarf package create process and thus do not include any remote resources (no images, repos, or remote manifests and files), thereby retaining any create-time package configuration templates as they were defined in the original zarf.yaml (i.e. untemplated).
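As a sketch of how a published skeleton package is consumed through component composability (the OCI URL and component names here are hypothetical):

```yaml
kind: ZarfPackageConfig
metadata:
  name: composed-package

components:
  - name: imported-component
    # pull the component definition from a published skeleton package
    # (URL and component name are hypothetical examples)
    import:
      url: oci://ghcr.io/example-org/skeleton-package:0.0.1
      name: upstream-component
```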