Authors: Joel Zhou & Michael Troutman
—
In a Kubernetes cluster with persistent storage managed by LINSTOR®, you get high availability (HA) and data locality for the persistent data that your applications and services rely on. To complement a production or enterprise Kubernetes deployment that involves persistent data, you will likely want an off-site backup solution. This is important for a variety of reasons: mitigating the effects of a local cluster disaster and having a potential migration tool are two valuable uses for persistent storage backups.
This tutorial describes how to use Velero to back up and restore a Kubernetes cluster namespace deployment, including its persistent storage data, to and from a remote S3 bucket. You can configure LINSTOR in Kubernetes to automatically manage snapshots of the LVM or ZFS thin-provisioned volumes that back your persistently stored data. Velero is an open source tool that you can use to back up and restore these snapshots. It even has a feature to automatically create and ship backups on a schedule.
For an example use case, you will use this backup and restore solution to migrate a Kubernetes NGINX deployment from one LINSTOR in Kubernetes cluster to a second LINSTOR in Kubernetes cluster. You can use the instructions in this article either for your own migration purposes, or to create a disaster recovery solution, to name two potential uses.
Prerequisites
To successfully use the instructions in this tutorial, you will need to have:
- An available S3-compatible object storage bucket. This article uses AWS S3 but you might be able to use other S3-compatible storage providers, for example, MinIO, RustFS, or Storj. See the AWS S3 compatible storage providers list in Velero documentation for more information. If you are using an AWS S3 bucket, you need to create an IAM user named velero and give the user proper permissions on the bucket. Refer to instructions on the AWS plugin page on GitHub.
- Two running Kubernetes clusters with LINSTOR installed in both. If you have not installed LINSTOR yet, you can follow the instructions in the Integrating LINBIT® SDS with Kubernetes, a Quick Start Guide and repeat for your second cluster.
- In both clusters, a storage back end that supports a “snapshot” feature, for example, an LVM thin pool or a thin-provisioned ZFS pool.
  ❗ IMPORTANT: The storage back end must be the same on both clusters, because a snapshot can only be restored to the same type of back end that it was created from.
- An enabled snapshot feature that is supported by the Kubernetes Container Storage Interface (CSI) driver.
  You can verify this by using the following command to show your CustomResourceDefinition API resources (CRDs):
  kubectl api-resources --api-group=snapshot.storage.k8s.io -oname
  If you do not get any output from this command showing that these CRDs were already created, you will need to install them into your Kubernetes cluster. You can install the necessary CRDs by entering the following command:
  kubectl apply -k https://github.com/kubernetes-csi/external-snapshotter//client/config/crd
  Output from entering the kubectl api-resources command shown earlier should then show the necessary CRDs:
  volumesnapshotclasses.snapshot.storage.k8s.io
  volumesnapshotcontents.snapshot.storage.k8s.io
  volumesnapshots.snapshot.storage.k8s.io
- A Kubernetes Container Storage Interface (CSI) Snapshotter installed in your cluster.
  To verify that your Kubernetes cluster has a snapshotter installed, enter the following command:
  kubectl get pods -A | grep snapshot-controller
  If you need to install a snapshot controller, enter the following command:
  kubectl apply -k https://github.com/kubernetes-csi/external-snapshotter//deploy/kubernetes/snapshot-controller
  For compatibility with the Kubernetes CSI Snapshotter version 8.5.0 (the latest version at the time of writing), use Kubernetes version 1.25 or later. Check the CSI Snapshotter releases page for compatibility requirements for the version you are using.
Installing Velero
After confirming that your clusters meet the prerequisites, you can install Velero in both clusters. You will need an S3 bucket available for your Velero backup and restore sets. This tutorial uses an AWS S3 bucket called linbit-velero-backup.
❗IMPORTANT: If you are creating a new AWS S3 bucket to use with the instructions in this article, your S3 bucket name must be globally unique across all of AWS S3 space. See bucket naming rules in AWS documentation for more details.
Create a secret file that has an AWS access key ID and an AWS secret access key for a user with access credentials to your S3 bucket. Use a text editor to create the file /tmp/s3.secret with the following contents:
[default]
aws_access_key_id=<your_S3_access_key_id>
aws_secret_access_key=<your_S3_secret_access_key>
❗ IMPORTANT: Replace <your_S3_access_key_id> and <your_S3_secret_access_key> with your S3 credentials.
Downloading Velero
Find the appropriate version of Velero for your version of Kubernetes, then download the Velero tar archive file from the official Velero project repository. This tutorial uses Kubernetes version 1.35.1 and Velero version 1.17.2. You can find the Velero compatibility table in the Velero project repository.
VELEROVERSION=v1.17.2
curl -fsSL -o velero-"$VELEROVERSION"-linux-amd64.tar.gz \
https://github.com/vmware-tanzu/velero/releases/download/"$VELEROVERSION"/velero-"$VELEROVERSION"-linux-amd64.tar.gz
After verifying the checksum of the downloaded tar file, extract (tar xvf) the tar file contents, and copy the extracted velero executable file into a directory in your $PATH.
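Those steps might look like the following sketch, assuming the v1.17.2 release on a Linux AMD64 host; <published_checksum> is a placeholder for the SHA-256 value published with the release:

```shell
VELEROVERSION=v1.17.2

# Verify the downloaded archive against the published SHA-256 checksum.
echo "<published_checksum>  velero-${VELEROVERSION}-linux-amd64.tar.gz" | sha256sum -c -

# Extract the archive contents.
tar xvf velero-"$VELEROVERSION"-linux-amd64.tar.gz

# Copy the velero executable into a directory in your PATH.
sudo install -m 0755 velero-"$VELEROVERSION"-linux-amd64/velero /usr/local/bin/

# Confirm that the client works.
velero version --client-only
```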
Installing Velero in Kubernetes
Next, create environment variables for your S3 bucket name, your S3 region, and the version number of the Velero plugin for AWS that is compatible with your version of Velero. This article uses version 1.13.2 of the Velero plugin for AWS, for use with Velero version 1.17.2. You can find compatibility information for the plugin at its GitHub page: Velero plugin for AWS.
S3BUCKET=linbit-velero-backup
S3REGION=us-east-2
AWSPLUGINVERS=v1.13.2
Then, install Velero by entering the following command:
velero install \
--features=EnableCSI \
--use-volume-snapshots \
--use-node-agent \
--plugins=velero/velero-plugin-for-aws:"$AWSPLUGINVERS" \
--provider aws --bucket "$S3BUCKET" \
--secret-file /tmp/s3.secret \
--backup-location-config region="$S3REGION" \
--snapshot-location-config region="$S3REGION"
❗ IMPORTANT: The --use-node-agent option in the velero install command is needed for cross-cluster restore capability with Velero. Without an installed node agent, Velero backups might appear to succeed but you will not be able to restore them to your second cluster.
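After installation, you can confirm that the node agent is running. In recent Velero versions the node agent is deployed as a DaemonSet named node-agent in the velero namespace (the exact object name and labels can vary by version, so verify against your deployment):

```shell
# List the node agent DaemonSet that the --use-node-agent option deploys.
kubectl get daemonset node-agent -n velero

# There should be one node-agent pod per schedulable node.
kubectl get pods -n velero -l name=node-agent
```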
Output from the command will show Velero resources being created and eventually show confirmation of a successful deployment and installation:
[...]
Deployment/velero: created
Velero is installed! ⛵ Use 'kubectl logs deployment/velero -n velero' to view the status.
After a while, entering a kubectl get pods -n velero command will show that a new Velero server pod, named velero, is up and running. If you need to, you can verify the status of the Velero deployment with the following command:
kubectl logs -f deployment/velero -n velero
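As a quick sanity check, you can also confirm that the Velero client can reach the server component in the cluster (the versions in the output depend on your environment):

```shell
# Shows the local client version and, if the server deployment is
# reachable, the server version running in the cluster.
velero version
```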
Configuring Kubernetes for Velero
To prepare your Kubernetes environment for Velero backups, you need to create:
- A storage class for your Kubernetes application’s persistent storage
- A volume snapshot class for provisioning volume snapshots
Creating a LINSTOR storage class for application persistent data
First, you will need to create a new storage class within the LINSTOR CSI provisioner. Your Kubernetes application pod will use this storage class for its persistent data. Create a YAML configuration file, /tmp/01_storage_class.yaml, that will describe the storage class.
cat << EOF > /tmp/01_storage_class.yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: linbit-storageclass
provisioner: linstor.csi.linbit.com
parameters:
  autoPlace: "2"
  storagePool: lvm-thin
EOF
❗ IMPORTANT: The storagePool key-value pair should reference an existing LINSTOR thin-provisioned LVM or ZFS storage pool. If you need to verify which storage pools are available to LINSTOR, you can enter the following command:
kubectl exec deployment/linstor-controller -n linbit-sds -- \
linstor storage-pool list
If you install the kubectl-linstor CLI plugin, you can simply enter:
kubectl linstor storage-pool list
Next, apply this YAML configuration to your Kubernetes cluster.
kubectl apply -f /tmp/01_storage_class.yaml
Output from the command will show that the storage class was created:
storageclass.storage.k8s.io/linbit-storageclass created
You can verify that the storage class exists by entering the following command:
kubectl get storageclasses
Output from the command will show the storage class that you created, along with any other storage classes in your Kubernetes deployment.
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
linbit-storageclass linstor.csi.linbit.com Delete Immediate false 37s
[...]
Creating a volume snapshot class
Next, create a volume snapshot class for provisioning the volume snapshots that Velero will use. Create a YAML file, /tmp/02_snapshot_class.yaml, that will describe the volume snapshot class.
cat << EOF > /tmp/02_snapshot_class.yaml
---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: linstor-csi-snapshot-class
  labels:
    velero.io/csi-volumesnapshot-class: "true"
driver: linstor.csi.linbit.com
deletionPolicy: Retain
EOF
Next, apply this configuration file to your Kubernetes cluster:
kubectl apply -f /tmp/02_snapshot_class.yaml
Output from the command will show that your volume snapshot class was created:
volumesnapshotclass.snapshot.storage.k8s.io/linstor-csi-snapshot-class created
If you have arrived here from the future and are applying the instructions in this section to your second Kubernetes cluster, you can return to your place in the tutorial by following this link.
Configuring Velero and installing an example pod
With Velero installed into your Kubernetes cluster, you can configure it and test its backup and restore capabilities by creating an example pod. In this guide, you will create an NGINX deployment.
Enabling the CSI feature on the Velero client configuration
When you installed Velero, you enabled the Velero CSI feature on the Velero “server” pod by using the --features flag in your velero install command. You also need to enable this feature within the Velero client, by entering the following command:
velero client config set features=EnableCSI
You can verify that CSI feature is enabled by entering the following command:
velero client config get features
Output from the command will show the EnableCSI feature.
features: EnableCSI
Creating an example pod
In this tutorial, you will create a basic NGINX pod to test Velero backup and restore capabilities.
Changing the Kubernetes namespace
First, create a new Kubernetes namespace for testing Velero.
kubectl create namespace application-space
Next, change your context to the application-space namespace, so that subsequent kubectl commands will happen within the application-space namespace:
kubectl config set-context --current --namespace application-space
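To confirm that the context change took effect, you can print the namespace recorded in your current context:

```shell
# Print the namespace configured for the current kubectl context.
kubectl config view --minify --output 'jsonpath={..namespace}'
```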
Creating a persistent volume claim
To have LINSTOR manage your example pod’s persistent storage, you will need to create a persistent volume claim (PVC) that references the linbit-storageclass storage class that you created earlier. You can do this by first creating a YAML configuration file for the persistent volume claim.
cat << EOF > /tmp/03_pvc.yaml
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: application-pvc
  namespace: application-space
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: linbit-storageclass
EOF
Next, apply the configuration to your Kubernetes cluster.
kubectl apply -f /tmp/03_pvc.yaml
Output from the command will show that the PVC was created:
persistentvolumeclaim/application-pvc created
You can also verify the deployment of your PVC by using a kubectl get command:
kubectl get persistentvolumeclaims
Output from the command will show the PVC that you created:
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
application-pvc Bound pvc-[...] 1Gi RWO linbit-storageclass 23s
Creating an example pod
With a PVC created, you can create an example NGINX pod that has its data directory (/usr/share/nginx/html) on the PVC. First, create the pod’s YAML configuration file.
cat << EOF > /tmp/04_nginx.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: application-space
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - image: nginx:latest
          name: nginx
          ports:
            - containerPort: 80
              name: nginx-webserver
          volumeMounts:
            - name: nginx-persistent-storage
              mountPath: /usr/share/nginx/html
      volumes:
        - name: nginx-persistent-storage
          persistentVolumeClaim:
            claimName: application-pvc
EOF
Next, apply the configuration to your Kubernetes cluster:
kubectl apply -f /tmp/04_nginx.yaml
Output from the command will show that your NGINX pod was created:
deployment.apps/nginx created
After some time, your example pod will come up within the application-space namespace. You can confirm that it is up and running by using a basic kubectl get pods command.
kubectl get pods
Eventually, output from the command will show your pod in a running state:
NAME READY STATUS RESTARTS AGE
nginx-[...] 1/1 Running 0 63s
You can also watch your pod come up in real time, by using the following command:
watch kubectl get pods
Testing Velero backup and restore functions
With Velero installed and configured and an example pod up and running, you can next test using Velero to back up your application namespace to an S3 bucket.
Create a unique reference file and webpage within your NGINX pod
For testing Velero backup and restore capabilities, first create a unique reference file in the NGINX data directory of your pod. You can check for this reference file after restoring your backup. This will confirm that Velero backed up and restored from a snapshot of your application’s LINSTOR-managed persistent storage volume.
Borrowing an example from the Velero documentation, log in to your NGINX example pod:
kubectl exec -it deployment/nginx -- /bin/bash
Next, create a unique file:
random=$(( RANDOM % 45 + 55 ))
for i in $(seq 1 "$random"); do \
echo -n "LINBIT " >> /usr/share/nginx/html/velero.testing.file
done
Next, get the checksum of the new file, for verification purposes later:
cksum /usr/share/nginx/html/velero.testing.file
Output from the command will show something similar to the following:
587418795 511 /usr/share/nginx/html/velero.testing.file
Next, create a simple webpage that references your unique file:
cat << EOF > /usr/share/nginx/html/index.html
<!DOCTYPE html>
<html lang="en">
<head>
<title>Testing Velero and LINSTOR within Kubernetes</title>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<style>
body {
font-family: Arial, Helvetica, sans-serif;
}
</style>
</head>
<body>
<h1>Backing Up Kubernetes With Velero and LINSTOR</h1>
<p>Unique file checksum: $(cksum /usr/share/nginx/html/velero.testing.file)</p>
</body>
</html>
EOF
After creating a simple webpage, return to your Kubernetes controller node by exiting your NGINX pod:
exit
Exposing the NGINX pod HTTP port
After your NGINX pod is up and running, expose its HTTP port (80) to your host system by mapping the internal port of the pod to an available open port on your host system. You can use a NodePort service to do this quickly. If you are working in a production environment, however, you would likely want to use the Kubernetes Gateway API, or an Ingress resource and Ingress controller, for this task.
If you are working in a test environment, you can expose port 80 in your NGINX pod by using the following command:
kubectl create service nodeport nginx --tcp=80:80
Output from the command will show that your service was created:
service/nginx created
Verifying your example webpage
You can verify that the Kubernetes NodePort service has exposed your NGINX pod on your host system by first getting the nginx nodeport service’s mapped port:
kubectl get service
Output from the command will list the nginx NodePort service and show that the service has mapped port 80 in the NGINX pod to a port on your host system:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx NodePort 10.110.25.255 <none> 80:30433/TCP 1d
Here, port 80 in the NGINX pod is mapped to port 30433 on the host system. You can next use the curl command to verify the NGINX pod is serving the webpage that you created earlier.
curl localhost:30433
Output from the curl command will show the webpage that you created earlier:
<!DOCTYPE html>
<html lang="en">
<head>
<title>Testing Velero and LINSTOR within Kubernetes</title>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<style>
body {
font-family: Arial, Helvetica, sans-serif;
}
</style>
</head>
<body>
<h1>Backing Up Kubernetes With Velero and LINSTOR</h1>
<p>Unique file checksum: 587418795 511 /usr/share/nginx/html/velero.testing.file</p>
</body>
</html>
You are now ready to back up your namespace deployment to an S3 remote.
Backing up your namespace deployment to an S3 remote
Create an AWS S3 stored backup of your application’s Kubernetes deployment metadata by using the following Velero command:
velero backup create backup01 \
--include-namespaces application-space \
--snapshot-move-data=true \
--wait
📝 NOTE: The --snapshot-move-data=true option here uses the Velero node agent and data mover to back up the application namespace to S3 storage in a way that lets you restore the backup to a different Kubernetes cluster.
Output from the command will be similar to the following:
Backup request "backup01" submitted successfully.
Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background.
............
Backup completed with status: Completed. You may check for more information using the commands `velero backup describe backup01` and `velero backup logs backup01`.
📝 NOTE: Because LINSTOR provisions the storage class that backs the application PVC, LINSTOR, by virtue of the CSI driver, will automatically take a volume snapshot as part of the Velero backup.
After some time, the backup will finish and you will find a backups/backup01 folder within your S3 bucket and within that folder, archives of your deployment metadata and volume snapshot contents.
Next, verify that your Velero backup includes a volume snapshot:
velero backup describe backup01 --details
Output from the command will show the backup that you created:
Name: backup01
[...]
CSI Snapshots:
application-space/application-pvc:
Data Movement:
Operation ID: du-[...]
Data Mover: velero
Uploader Type: kopia
Moved data Size (bytes): 946
Result: succeeded
[...]
You now have a backup of your Kubernetes application’s namespace deployment, including a snapshot of its storage volume, stored in an S3 remote location. If you want to further verify the backup assets that Velero (with help from LINSTOR) created, you can log in to your AWS S3 console and look inside your bucket. If you have the AWS CLI utility installed and configured, you can examine your bucket contents by entering the following command.
aws s3 ls s3://linbit-velero-backup --recursive --human-readable --summarize
Restoring your backup
Before disaster or migration needs require it, test that you can restore your backed up namespace deployment to a target Kubernetes cluster. First you will need to create a second cluster that will be your target, if you have not done so already.
Creating a second Kubernetes cluster and installing LINSTOR
Create a second (target) Kubernetes cluster and install LINSTOR in it, just as you did with your first (source) Kubernetes cluster.
If this were a production deployment, and part of a disaster recovery solution, you would want your second Kubernetes cluster at a different physical site from your source Kubernetes cluster.
❗ IMPORTANT: In your target cluster, just as in your source cluster, you will need to have an enabled snapshot feature that is supported by the Kubernetes CSI driver. You also need to have the snapshot-controller pods running in your target Kubernetes cluster. See the prerequisites section if you need to refresh your memory about the requirements for the cluster.
Recreating the storage class and volume snapshot class from the original cluster
To restore your backup, you will need to prepare a storage class and a volume snapshot class in your newly created target Kubernetes cluster.
In your target Kubernetes cluster, follow the instructions within the Creating a LINSTOR storage class for application persistent data and Creating a volume snapshot class sections shown earlier.
Installing Velero in your target Kubernetes cluster
To use Velero to restore your S3 backup to your target cluster, you will need to install Velero.
If you have not already done this, you can use the instructions in the Installing Velero section of this tutorial and repeat them on your target cluster.
❗ IMPORTANT: Just as with the velero install command on the first (source) Kubernetes cluster, you also need to use the --use-node-agent option on the second (target) cluster. Without the node agent, you will not be able to use Velero to download the backup data.
Using Velero to restore your backup to your target Kubernetes cluster
After preparing your target Kubernetes cluster, you can now use Velero to restore your backed up Kubernetes application namespace, including the application’s persistent data volume. LINSTOR will automatically restore the storage class volume from the LVM or ZFS volume snapshot that was created as part of your backup.
Enter the command:
velero restore create --from-backup=backup01
You will receive a message that your Velero restore command was submitted successfully.
You can use the velero restore get command to verify that your restore completed successfully:
velero restore get
Output from this command should show that the backup restore process “Completed” with “0” errors.
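For more detail about a completed restore, such as per-resource results and data mover activity, you can describe it by name. Velero generates restore names from the backup name and a timestamp, so the name below is a hypothetical example; use the name shown by velero restore get:

```shell
# Show detailed status for a specific restore, including per-resource
# results and data mover activity. The restore name is an example.
velero restore describe backup01-20260304120000 --details
```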
Verifying your restored application
Next, you can verify that Velero restored your backed up application and its persistent data storage by confirming that the NGINX pod is up and running in your target cluster and that it is serving your test webpage.
First, change your context to the application-space namespace, so that the kubectl commands that follow will happen within the application-space namespace:
kubectl config set-context --current --namespace application-space
Output from the command will show that the context has changed:
Context "kubernetes-admin@kubernetes" modified.
Then, confirm that you have an NGINX pod up and running:
kubectl get pods
Output from the command will show that the NGINX pod is up and running:
NAME READY STATUS RESTARTS AGE
nginx-[...] 1/1 Running 0 70m
Next, confirm that you have a NodePort service restored and running:
kubectl get service
Output from the command will list the nginx NodePort service:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx NodePort 10.96.153.43 <none> 80:30544/TCP 71m
With the mapped port from the command output, 30544 in the example above, you can use the curl command to verify that your NGINX pod is serving your webpage from your target cluster:
curl localhost:30544
Command output will show your webpage, in HTML:
<!DOCTYPE html>
<html lang="en">
<head>
<title>Testing Velero and LINSTOR within Kubernetes</title>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<style>
body {
font-family: Arial, Helvetica, sans-serif;
}
</style>
</head>
<body>
<h1>Backing Up Kubernetes With Velero and LINSTOR</h1>
<p>Unique file checksum: 587418795 511 /usr/share/nginx/html/velero.testing.file</p>
</body>
</html>
You can further verify that Velero and LINSTOR restored your application’s persistent data storage by running a checksum command against the unique file within the NGINX pod and comparing it to your earlier result:
kubectl exec deployment/nginx -- cksum /usr/share/nginx/html/velero.testing.file
Output from the command will be similar to the following:
587418795 511 /usr/share/nginx/html/velero.testing.file
By entering a df command from within your NGINX pod, you can verify that in your target cluster, the NGINX data location is backed by a LINSTOR-created DRBD® block device:
kubectl exec deployment/nginx -- df -h | awk 'NR==1 || /nginx/'
Output from the command will show a DRBD device backing the NGINX data location.
Filesystem Size Used Avail Use% Mounted on
/dev/drbd1000 978M 32K 910M 1% /usr/share/nginx/html
Next, verify that LINSTOR is managing your application’s persistent data storage in your target cluster, as it was in your source cluster. You can run the LINSTOR resource list command within the LINSTOR controller pod to show this.
kubectl exec deployment/linstor-controller \
--namespace linbit-sds -- linstor resource list
Or if you installed the kubectl-linstor utility, you can enter:
kubectl linstor resource list
Output from the command will show two up-to-date diskful resources and one diskless tie-breaker resource:
+--------------------------------------------------------------------------------+
| ResourceName | Node | Layers | Usage | Conns | State | CreatedOn |
|================================================================================|
| pvc-[...] | kube-0 | DRBD,STORAGE | Unused | Ok | UpToDate | [...] |
| pvc-[...] | kube-1 | DRBD,STORAGE | Unused | Ok | TieBreaker | [...] |
| pvc-[...] | kube-2 | DRBD,STORAGE | InUse | Ok | UpToDate | [...] |
+--------------------------------------------------------------------------------+
Conclusion
Congratulations! You have successfully backed up and restored a Kubernetes namespace deployment and its persistent data storage between two LINSTOR clusters, by using Velero and S3 storage. Using Velero, LINSTOR, and snapshot-capable storage volumes, you have created a solid system for backing up and restoring your Kubernetes deployments.
While this solution involves a significant amount of preparation, with a remote, off-site S3 backup and a second off-site Kubernetes cluster in place, ready to restore your S3 backup to, you can have confidence that when disaster strikes your production cluster, you have a recovery plan ready.
You can also adapt and use the instructions in this tutorial to migrate a Kubernetes application and its data storage to another cluster, for example if you have upgraded hardware.
Your next steps might involve investigating the Velero scheduled backup feature to automate creating your backups.
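As a sketch of that next step, and assuming the same namespace and options used earlier (the schedule name and cron expression here are arbitrary examples), a daily scheduled backup could look like this:

```shell
# Create a schedule that backs up the application namespace every day
# at 02:00, using the same data movement option as the manual backup.
velero schedule create daily-application-backup \
  --schedule="0 2 * * *" \
  --include-namespaces application-space \
  --snapshot-move-data=true

# Review configured schedules.
velero schedule get
```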
Changelog
2026-03-04:
- Updated Kubernetes version to 1.35.1 and Velero version to 1.17.2.
- Updated Velero AWS plugin to version 1.13.2.
- Removed references to the Velero CSI plugin because this plugin is now a part of the Velero code.
- Added more details to the snapshot controller prerequisite.
- Removed unnecessary instructions for configuring a LINSTOR encryption passphrase and remote to simplify instructions. You can instead directly use Velero and a Velero node agent and data mover to send and receive the snapshot that the LINSTOR CSI driver creates.
- Removed references to the snapshot-validation-webhook.
- Made minor language touch-ups.
2024-07-01:
- Updated CSI snapshotter version number to 8.0.1 and compatible Kubernetes version number to 1.25 or greater.
- Removed separate snapshot-validation-webhook Helm chart install. The snapshot-controller chart now installs this component.
- Updated Kubernetes version to 1.28.1 and Velero version to 1.13.1.
- Updated Velero AWS plugin to version 1.9.1.
- Updated Velero CSI plugin to version 0.7.1.
- Added instructions for creating a LINSTOR encryption passphrase which is needed to be able to create an S3 LINSTOR remote, since LINSTOR Operator v2.
- Separated command output from command input code blocks.
- Added admonition about S3 bucket names needing to be globally unique.
2023-05-12:
- Revised opening paragraph to better describe LINBIT SDS.
- Added a NOTE admonition to explain the snapshot-controller component.
- Other minor language and command touch-ups that did not affect technical content.
2022-09-01:
- Originally wrote article.