Deploy a Package
A typical Zarf deployment is made up of three parts:
- The `zarf` binary:
  - Is a statically compiled Go binary that can be run on any machine, server, or operating system with or without connectivity.
  - Creates packages combining numerous types of software/updates into a single distributable package (while on a network capable of accessing them).
  - Declaratively deploys package contents “into place” for use on production systems (while on an isolated network).
- A Zarf init package:
  - A compressed tarball package that contains the configuration needed to instantiate an environment without connectivity.
  - Automatically seeds your cluster with a container registry or wires up a pre-existing one.
  - Provides additional capabilities such as a git server and K3s cluster.
- A Zarf Package:
  - A compressed tarball package that contains all of the files, manifests, source repositories, and images needed to deploy your infrastructure, application, and resources in a disconnected environment.
Zarf Packages are designed to be easily deployable on a variety of systems, including air-gapped systems. All of the necessary dependencies are included within the package, eliminating the need for outbound internet connectivity. When deploying the package onto a cluster, the dependencies contained in each component are automatically pushed into a Docker registry and/or Git server created by or known to Zarf on the air-gapped system.
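As a rough illustration, a minimal `zarf.yaml` for such a package might look like the following sketch (the package name, component name, image reference, and manifest filename are all hypothetical placeholders):

```yaml
kind: ZarfPackageConfig
metadata:
  name: sample-app        # hypothetical package name
  version: 0.1.0
components:
  - name: app             # hypothetical component
    required: true
    images:
      - ghcr.io/example/app:1.0.0   # pulled at create time, pushed to the internal registry at deploy time
    manifests:
      - name: app-manifests
        files:
          - deployment.yaml          # hypothetical local manifest file
```

At create time Zarf pulls the listed image and bundles it into the tarball; at deploy time it is pushed into the air-gapped registry and the manifests are applied.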
Once the Zarf package has arrived in your target environment, run the `zarf package deploy` command to deploy the package onto your Zarf-initialized cluster. This command deploys the package’s capabilities into the target environment, including all external resources required for the package. The `zarf.yaml` file included in the package will be used to orchestrate the deployment of the application according to the instructions provided.
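For example, assuming an amd64 package named `sample-app` (the filename here is a placeholder following Zarf's usual naming pattern):

```shell
# Deploy a previously created package onto the Zarf-initialized cluster
zarf package deploy zarf-package-sample-app-amd64.tar.zst

# Deploy non-interactively, accepting all prompts
zarf package deploy zarf-package-sample-app-amd64.tar.zst --confirm
```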
The following diagram shows the order of operations for the `zarf package deploy` command and the hook locations for actions.
Lifecycle Diagram
Zarf provides a few options that give control over how a deployment of a Zarf Package proceeds in a given environment. These are baked into a Zarf Package by the package creator and include:
- Package Variables - Templates resources with environment-specific values such as domain names or secrets.
- Optional Components - Allows components to be optionally chosen when they are needed for only a subset of environments.
- Component Groups - Provides a choice of one component from a defined set of components in the same component group.
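As a sketch of how a package creator might wire these options into a `zarf.yaml` (all names below are hypothetical):

```yaml
kind: ZarfPackageConfig
metadata:
  name: sample-app
variables:
  - name: DOMAIN          # package variable, templated into resources as ###ZARF_VAR_DOMAIN###
    prompt: true          # ask the deployer for a value at deploy time
components:
  - name: core
    required: true        # always deployed
  - name: metrics
    required: false       # optional component; the deployer chooses whether to include it
  - name: postgres
    group: database       # component group; exactly one member of "database" is chosen
    default: true
  - name: mysql
    group: database
```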
Zarf normally expects to operate against a Kubernetes cluster that has been Zarf initialized, but there are additional modes that can be configured by package creators including:
- YOLO Mode - Yaml-OnLy Online mode allows for a faster deployment without requiring the `zarf init` command to be run beforehand. It can be useful for testing or for environments that manage their own registries and Git servers completely outside of Zarf. Because this mode does not use the Zarf Agent, any resources specified will need to be manually modified for the environment.
- Cluster-less - Zarf normally interacts with clusters and Kubernetes resources, but it is possible to have Zarf perform actions before a cluster exists (including deploying the cluster itself). These packages generally have more dependencies on the host or environment that they run within.
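For reference, YOLO mode is enabled by the package creator in the package's metadata; a minimal sketch (names and files are hypothetical):

```yaml
kind: ZarfPackageConfig
metadata:
  name: sample-yolo-app   # hypothetical package name
  yolo: true              # deploy online without a Zarf-initialized cluster
components:
  - name: app
    manifests:
      - name: app-manifests
        files:
          - deployment.yaml   # manifests reference upstream registries directly
```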
The general flow of a Zarf package deployment on an existing initialized cluster is as follows:
Zarf deploys resources in Kubernetes using Helm’s Go SDK, and converts manifests into Helm charts for installation.
If no existing Helm releases match a given chart in the cluster, Zarf executes a `helm install`. Should matching releases exist, a `helm upgrade` is performed.
- CRDs are included during `helm install` to support Kubernetes Operator deployments.
- CRDs are excluded during `helm upgrade` due to Helm’s lack of support for upgrading CRDs.
By default, Zarf waits for all resources to deploy successfully during install, upgrade, and rollback operations.
You can override this behavior during install and upgrade by setting the `noWait: true` key under the `charts` and `manifests` fields.
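As a sketch, with a hypothetical chart and manifest:

```yaml
components:
  - name: app
    charts:
      - name: sample-chart
        url: https://example.com/charts   # hypothetical chart repository
        version: 1.0.0
        namespace: sample
        noWait: true    # do not block on this chart's resources becoming ready
    manifests:
      - name: sample-manifests
        noWait: true    # same override for raw manifests
        files:
          - deployment.yaml
```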
After the Helm wait completes successfully, Zarf waits for all resources in the applied chart to fully reconcile. To identify when reconciliation is achieved, Zarf uses kstatus. Kstatus assesses whether a resource is reconciled by checking the status field. If a resource does not have a status field, kstatus considers it reconciled once it’s found.
The default timeout for Helm operations in Zarf is 15 minutes.
Use the `--timeout` flag with `zarf init` and `zarf package deploy` to modify the timeout duration.
Zarf retries install and upgrade operations up to three times by default if an error occurs.
Use the `--retries` flag with `zarf init` and `zarf package deploy` to change the number of retry attempts.
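For example (the package filename is a placeholder):

```shell
# Allow slow operations up to 30 minutes and retry failed installs/upgrades only once
zarf package deploy zarf-package-sample-app-amd64.tar.zst --timeout 30m --retries 1

# The same flags apply to cluster initialization
zarf init --timeout 30m --retries 1
```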
If attempts to upgrade a chart fail, Zarf tries to roll the chart back to its last successful release. During this rollback process:
- Any resources created during the failed upgrade attempt are deleted (`helm rollback --cleanup-on-fail`)
- Resource updates are forced through delete and recreate if needed (`helm rollback --force`)