Authors: Joel Zhou & Michael Troutman
—
In a Kubernetes cluster that has persistent storage managed by LINBIT SDS, you can have a high-availability solution for the data that your applications and services rely on. LINBIT SDS is the open source software-defined storage stack developed by LINBIT® that consists of LINSTOR®, DRBD®, and related utilities. To complement this setup, you will likely want to consider a backup solution for your Kubernetes deployments. This is important for a variety of reasons; mitigating the effects of a local cluster disaster and having a potential migration tool are two valuable uses.
This tutorial will describe how to use Velero to back up and restore a Kubernetes cluster namespace deployment, including its persistent storage data, to and from a remote S3 bucket. With LINBIT SDS deployed in your cluster, LINSTOR automatically manages storage volume snapshots of your application’s persistently stored data. Velero is the tool that you can use to back up and restore these snapshots. It even has a feature to automatically create backups on a schedule.
For an example use case, you will use this backup and restore solution to migrate a Kubernetes NGINX deployment from one Kubernetes cluster with LINBIT SDS to a second Kubernetes cluster with LINBIT SDS. You can use the method here either for your own migration purposes or to create a disaster recovery solution, to name two potential uses.
Prerequisites
To successfully use the instructions in this tutorial, you will need to have:
- An available AWS S3 bucket. This article uses AWS S3 but you might be able to use other S3-compatible storage providers, for example, MinIO. See the Velero project’s AWS S3 compatible storage providers list for more information. If you are using an AWS S3 bucket, you need to create an IAM user named `velero` and give the user proper permissions on the bucket. Refer to instructions on the AWS plugin page on GitHub.
- Two running Kubernetes clusters with LINBIT SDS installed in both. If you have not installed LINBIT SDS yet, you can follow the instructions in the Integrating LINBIT SDS with Kubernetes, a Quick Start Guide and repeat for your second cluster. For compatibility with the Kubernetes Container Storage Interface (CSI) Snapshotter, version 8.0.1, use Kubernetes version >= 1.25. Check the CSI Snapshotter’s releases page for compatibility requirements for the version you are using.
- In both clusters, a storage back end that supports a “snapshot” feature, for example, an LVM thin pool or a thin-provisioned ZFS pool. The storage back end must be the same on both clusters, because a snapshot can only be restored to the same type of back end that it was created from.
- Helm installed on your Kubernetes controller node in both clusters. If Helm is not installed, follow these installation instructions from the Helm project, or follow the instructions within the Integrating LINBIT SDS with Kubernetes quick start guide.
- An enabled snapshot feature that is supported by the Kubernetes Container Storage Interface (CSI) driver, with the snapshot-controller and snapshot-validation-webhook pods running in your Kubernetes clusters.
You can verify this by using the following command to show your `CustomResourceDefinition` API resources (CRDs):
# kubectl get crds volumesnapshots.snapshot.storage.k8s.io \
volumesnapshotclasses.snapshot.storage.k8s.io \
volumesnapshotcontents.snapshot.storage.k8s.io
If you do not get any output from this command showing that these CRDs were already created, you will need to install the snapshot validation webhook and the volume snapshot controller into your Kubernetes cluster, by using Helm. Installing the volume snapshot controller from the Helm chart will also install the necessary CRDs. You can verify this by repeating the `kubectl get crds` command above, after running the following Helm commands:
# helm repo add piraeus-charts https://piraeus.io/helm-charts/
# helm install --namespace kube-system snapshot-controller \
piraeus-charts/snapshot-controller
📝 NOTE: The snapshot-controller Helm chart deploys both the snapshot-controller and the snapshot-validation-webhook in a cluster. Piraeus Datastore is a LINBIT developed and Cloud Native Computing Foundation (CNCF) endorsed open source project.
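After the Helm installation completes, you can confirm that the snapshot controller components and CRDs are in place. The following commands are a quick way to check, assuming you installed the chart into the kube-system namespace as shown above:
# kubectl get pods -n kube-system | grep -i snapshot
# kubectl get crds | grep snapshot.storage.k8s.io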
Installing Velero
After confirming that your clusters have the necessary prerequisites, you can install Velero in both clusters. You will need an S3 bucket available for Velero’s backup and restore sets. This tutorial uses an AWS S3 bucket called `linbit-velero-backup`.
❗ IMPORTANT: If you are creating a new AWS S3 bucket to use with the instructions in this article, your S3 bucket name must be globally unique across all of AWS S3 space. Refer to bucket naming rules in AWS documentation for more details.
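If you still need to create a bucket and have the AWS CLI installed and configured, a command similar to the following would create one. The bucket name and region here are only examples and must match what you use later in this tutorial:
# aws s3 mb s3://linbit-velero-backup --region us-east-2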
Create a secret file that has an AWS access key ID and an AWS secret access key for a user with access credentials to your S3 bucket. Use a text editor to create the file `/tmp/s3.secret` with the following contents:
[default]
aws_access_key_id=<your_S3_access_key_id>
aws_secret_access_key=<your_S3_secret_access_key>
❗IMPORTANT: Replace <your_S3_access_key_id> and <your_S3_secret_access_key> with your S3 credentials.
Downloading Velero
Find the appropriate version of Velero for your version of Kubernetes, then download the Velero tar file from the Velero project’s official GitHub repository. This tutorial uses Kubernetes version 1.28.1 and Velero version 1.13.1. You can find the Velero compatibility table on Velero’s GitHub page.
# vel_vers=v1.13.1
# curl -fsSL -o velero-"$vel_vers"-linux-amd64.tar.gz \
https://github.com/vmware-tanzu/velero/releases/download/"$vel_vers"/velero-"$vel_vers"-linux-amd64.tar.gz
After verifying the checksum of the downloaded tar file, extract (`tar xvf`) the tar file’s contents, and copy the extracted `velero` executable file into a directory in your `$PATH`.
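For example, a minimal way to do this on a Linux system might look like the following commands. The /usr/local/bin destination is just an example of a directory that is typically in your $PATH, and you should compare the checksum output against the value published on the Velero releases page.
# sha256sum velero-"$vel_vers"-linux-amd64.tar.gz
# tar xvf velero-"$vel_vers"-linux-amd64.tar.gz
# cp velero-"$vel_vers"-linux-amd64/velero /usr/local/bin/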
Installing Velero Into Kubernetes
Next, create environment variables for your AWS S3 bucket name, your AWS region, and the version numbers of the Velero AWS plugin and the Velero CSI plugin that are compatible with your version of Velero. This article uses versions 1.9.1 and 0.7.1 respectively, for Velero version 1.13.1. You can find compatibility information for the plugins at their respective GitHub pages: Velero plugin for AWS and Velero plugin for CSI.
# aws_bucket=linbit-velero-backup
# aws_region=us-east-2
# aws_plugin_vers=v1.9.1
# csi_plugin_vers=v0.7.1
Then, install Velero by using the following command:
# velero install \
--features=EnableCSI \
--use-volume-snapshots \
--plugins=velero/velero-plugin-for-aws:"$aws_plugin_vers",velero/velero-plugin-for-csi:"$csi_plugin_vers" \
--provider aws --bucket "$aws_bucket" --secret-file /tmp/s3.secret \
--backup-location-config region="$aws_region" \
--snapshot-location-config region="$aws_region"
Output from the command will show Velero resources being created and eventually show confirmation of a successful deployment and installation.
[...]
Deployment/velero: created
Velero is installed! ⛵ Use 'kubectl logs deployment/velero -n velero' to view the status.
After a while, entering a kubectl get pods -n velero
command will show that a new Velero server pod, named velero
, is up and running. You can also follow the status of the Velero deployment with the following command:
# kubectl logs -f deployment/velero -n velero
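As an additional check, you can verify that the Velero client and server versions match by entering the following command:
# velero version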
Configuring Kubernetes for Velero
To prepare your Kubernetes environment for Velero backups, you need to create:
- A storage class for your Kubernetes application’s persistent storage
- A Kubernetes secret for your S3 credentials
- A volume snapshot class for provisioning volume snapshots
- A LINSTOR encryption passphrase
Creating a Storage Class for Your Application’s Persistent Data
First, you will need to create a new storage class within the LINSTOR CSI provisioner. Your Kubernetes application pod will use this storage class for your application’s persistent data. Create a YAML configuration file, /tmp/01_storage_class.yaml
, that will describe the storage class.
# cat > /tmp/01_storage_class.yaml <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: linbit
provisioner: linstor.csi.linbit.com
parameters:
  autoPlace: "2"
  storagePool: MyThinSP
EOF
❗ IMPORTANT: The `storagePool` key-value pair should reference an existing LINSTOR LVM thin pool or ZFS storage pool. If you need to verify which storage pools are available, you can enter the following command:
# kubectl exec deployment/linstor-controller -n linbit-sds -- \
linstor storage-pool list
Next, apply this YAML configuration to your Kubernetes cluster.
# kubectl apply -f /tmp/01_storage_class.yaml
Output from the command will show that the storage class was created.
storageclass.storage.k8s.io/linbit created
You can verify that the storage class exists by entering the following command:
# kubectl get storageclasses
Output from the command will show the storage class that you created, along with any other storage classes that exist in your Kubernetes deployment.
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
linbit linstor.csi.linbit.com Delete Immediate false 113s
linstor-csi-lvm-thin-r1 linstor.csi.linbit.com Delete Immediate true 31m
linstor-csi-lvm-thin-r2 linstor.csi.linbit.com Delete Immediate true 31m
linstor-csi-lvm-thin-r3 linstor.csi.linbit.com Delete Immediate true 31m
If you have arrived here from the future and are applying the instructions in this section to your second Kubernetes cluster, you can return to your place in the tutorial by following this link.
Creating an S3 Secret
Create a Kubernetes secret that will hold your S3 credentials. To do this, create a YAML configuration file, `/tmp/s3_secret.yaml`, that has the following contents:
kind: Secret
apiVersion: v1
metadata:
  name: linstor-csi-s3-access
  namespace: default
immutable: true
type: linstor.csi.linbit.com/s3-credentials.v1
stringData:
  access-key: <your_S3_access_key_id>
  secret-key: <your_S3_secret_access_key>
❗IMPORTANT: Replace <your_S3_access_key_id> and <your_S3_secret_access_key> with your S3 credentials.
Next, deploy your secret by applying this configuration file to your Kubernetes cluster.
# kubectl apply -f /tmp/s3_secret.yaml
Output from the command will show that your secret was created.
secret/linstor-csi-s3-access created
Creating a Volume Snapshot Class
Next, create a volume snapshot class for provisioning volume snapshots that Velero will use. Create a YAML configuration file, `/tmp/02_snapshot_class.yaml`, that has the following contents. Replace values, such as the endpoint, region, remote name, and bucket name, as you might need to, depending on your environment.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: linstor-csi-snapshot-class-s3
  labels:
    velero.io/csi-volumesnapshot-class: "true"
driver: linstor.csi.linbit.com
deletionPolicy: Retain
parameters:
  snap.linstor.csi.linbit.com/type: S3
  snap.linstor.csi.linbit.com/remote-name: linbit-velero-backup
  snap.linstor.csi.linbit.com/allow-incremental: "true"
  snap.linstor.csi.linbit.com/s3-bucket: linbit-velero-backup
  snap.linstor.csi.linbit.com/s3-endpoint: s3.us-east-2.amazonaws.com
  snap.linstor.csi.linbit.com/s3-signing-region: us-east-2
  # Refer here to the secret that holds access and secret key for the S3 endpoint.
  csi.storage.k8s.io/snapshotter-secret-name: linstor-csi-s3-access
  csi.storage.k8s.io/snapshotter-secret-namespace: default
Next, apply this configuration file to your Kubernetes cluster.
# kubectl apply -f /tmp/02_snapshot_class.yaml
Output from the command will show that your volume snapshot class was created.
volumesnapshotclass.snapshot.storage.k8s.io/linstor-csi-snapshot-class-s3 created
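Optionally, you can confirm that the volume snapshot class exists and carries the Velero label by entering the following command:
# kubectl get volumesnapshotclass linstor-csi-snapshot-class-s3 --show-labels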
Creating a LINSTOR Encryption Passphrase
To be able to create an S3 remote object in LINSTOR, you will first need to configure an encryption passphrase. This encryption passphrase is used to securely store the S3 access and secret keys in the LINSTOR database. When using LINSTOR Operator v1, the Operator automatically creates a random encryption passphrase when deploying the LINSTOR Operator in Kubernetes. However, when using LINSTOR Operator v2, as is now recommended and as the example clusters in this article use, you need to do this manually. This gives you more control over the passphrase and your deployment, for example, if you need to access encrypted deployment entries after a reinstall.
Create a Kubernetes secret that will hold your LINSTOR encryption passphrase. To do this, create a YAML configuration file, `/tmp/encryption_secret.yaml`, that has the following contents:
---
apiVersion: v1
kind: Secret
metadata:
  name: linstor-passphrase
  namespace: linbit-sds
data:
  # CHANGE THIS TO USE YOUR OWN PASSPHRASE!
  # Created by: echo -n "myhighlysecurepassphrase" | base64
  MASTER_PASSPHRASE: bXloaWdobHlzZWN1cmVwYXNzcGhyYXNl
---
apiVersion: piraeus.io/v1
kind: LinstorCluster
metadata:
  name: linstorcluster
spec:
  linstorPassphraseSecret: linstor-passphrase
After creating the secret configuration file, apply it to your deployment by entering the following command:
# kubectl apply -f /tmp/encryption_secret.yaml --server-side
📝 NOTE: By using the `--server-side` flag, you avoid a warning message that you would otherwise get due to a missing annotation in the YAML configuration about when the configuration was last applied. You can omit the `--server-side` flag and `kubectl` will automatically add the annotation and warn you about it. Optionally, you can add the annotation to the YAML configuration:
annotations:
  kubectl.kubernetes.io/last-applied-configuration: ""
Output from the command will show that your secret was applied.
secret/linstor-passphrase serverside-applied
linstorcluster.piraeus.io/linstorcluster serverside-applied
Configuring Velero and Installing an Example Pod
With Velero installed into your Kubernetes cluster, you can configure it and test its backup and restore capabilities by creating an example pod. In this guide, you will create an NGINX deployment.
Enabling the CSI Feature on the Velero Client Configuration
To prepare to use Velero to back up your Kubernetes metadata, you enabled the Velero CSI feature on the Velero “server” pod by using the `--features` flag in your `velero install` command. You also need to enable this feature within the Velero client, by entering the following command:
# velero client config set features=EnableCSI
You can verify that CSI feature is enabled by entering the following command:
# velero client config get features
Output from the command will show the `EnableCSI` feature.
features: EnableCSI
Creating an Example Pod
In this tutorial, you will create a basic NGINX pod to test Velero’s backup and restore capabilities.
Changing the Kubernetes Namespace
First, create a new Kubernetes namespace for testing Velero.
# kubectl create namespace application-space
Next, change your context to the `application-space` namespace, so that the `kubectl` commands that follow will happen within the `application-space` namespace.
# kubectl config set-context --current --namespace application-space
Creating a Persistent Volume Claim
To have LINSTOR manage your example pod’s persistent storage, you will need to create a persistent volume claim (PVC) that references the `linbit` storage class that you created earlier. You can do this by first creating a YAML configuration file for the persistent volume claim.
# cat > /tmp/03_pvc.yaml <<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: application-pvc
  namespace: application-space
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: linbit
EOF
Next, apply the configuration to your Kubernetes cluster.
# kubectl apply -f /tmp/03_pvc.yaml
Output from the command will show that the PVC was created.
persistentvolumeclaim/application-pvc created
You can also verify the deployment of your PVC by using a `kubectl get` command.
# kubectl get persistentvolumeclaims
Output from the command will show the PVC that you created.
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
application-pvc Bound pvc-[...] 1Gi RWO linbit 23s
Creating an Example Pod
With a PVC created, you can create an example NGINX pod that has its web content directory (`/usr/share/nginx/html`) on the PVC. First, create the pod’s YAML configuration file.
# cat > /tmp/04_nginx.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: application-space
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:latest
        name: nginx
        ports:
        - containerPort: 80
          name: nginx-webserver
        volumeMounts:
        - name: nginx-persistent-storage
          mountPath: /usr/share/nginx/html
      volumes:
      - name: nginx-persistent-storage
        persistentVolumeClaim:
          claimName: application-pvc
EOF
Next, apply the configuration to your Kubernetes cluster.
# kubectl apply -f /tmp/04_nginx.yaml
Output from the command will show that your NGINX pod was created.
deployment.apps/nginx created
After some time, your example pod will come up within the `application-space` namespace. You can confirm that it is up and running by using a basic `kubectl get pods` command.
# kubectl get pods
Eventually, output from the command will show your pod in a running state.
NAME READY STATUS RESTARTS AGE
nginx-[...] 1/1 Running 0 63s
You can also watch your pod come up in real time, by using the following command:
$ watch kubectl get pods
Testing Velero’s Backup and Restore Functions
With Velero installed and configured and an example pod up and running, you can next test Velero’s backup capabilities to an S3 bucket.
Create a Unique Reference File and Webpage Within Your NGINX Pod
For testing Velero’s backup and restore capabilities, first create a unique reference file in your pod’s NGINX data directory. You can check for this reference file after restoring your backup. This will confirm that Velero backed up and restored from a snapshot of your application’s LINSTOR-managed persistent storage volume.
Borrowing an example from the Velero documentation, log in to your NGINX example pod and create a unique file.
# kubectl exec -it deployment/nginx -- /bin/bash
root@nginx-[...]:/# while true; do echo -n "LINBIT " >> /usr/share/nginx/html/velero.testing.file; sleep 0.1s; done
^C
❗ IMPORTANT: Enter Ctrl+c to break out of the `while` loop.
Next, get the checksum of the new file, for verification purposes later.
root@nginx-[...]:/# cksum /usr/share/nginx/html/velero.testing.file
Output from the command will show something similar to the following:
3504900342 1600942 /usr/share/nginx/html/velero.testing.file
Next, create a simple web page that references your unique file.
root@nginx-[...]:/# cat > /usr/share/nginx/html/index.html <<EOF
<!DOCTYPE html>
<html lang="en">
<head>
<title>Testing Velero and LINSTOR within Kubernetes</title>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<style>
body {
font-family: Arial, Helvetica, sans-serif;
}
</style>
</head>
<body>
<h1>Backing Up Kubernetes With Velero and LINSTOR</h1>
<p>Unique file's checksum:
EOF
Next, add the checksum of your unique file to your web page and then add closing HTML tags to complete the creation of your web page.
root@nginx-[...]:/# echo `cksum /usr/share/nginx/html/velero.testing.file` >> /usr/share/nginx/html/index.html
root@nginx-[...]:/# cat >> /usr/share/nginx/html/index.html <<EOF
</p>
</body>
</html>
EOF
After creating a simple web page, return to your Kubernetes controller node by exiting your NGINX pod.
root@nginx-[...]:/# exit
Exposing the NGINX Pod’s HTTP Port
After your NGINX pod is up and running, expose its HTTP port (80) to your host system by mapping the pod’s internal port to an available open port on your host system. You can use a NodePort service to do this quickly. If you are working in a production environment, however, you would want to use an Ingress resource and Ingress controller for this task.
If port 80 is available on your host system and you are working in a test environment, you can just map your NGINX pod’s port 80 to it by using the following command:
# kubectl create service nodeport nginx --tcp=80:80
Output from the command will show that your service was created.
service/nginx created
Verifying Your Example Web Page
You can verify that the Kubernetes NodePort service has exposed your NGINX pod on your host system by first getting the `nginx` NodePort service’s mapped port:
# kubectl get service
Output from the command will list the `nginx` NodePort service and show that the service has mapped port 80 in the NGINX pod to a port on your host system.
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx NodePort 10.110.25.255 <none> 80:30433/TCP 1d
Here, the NGINX pod’s port 80 is mapped to port 30433 on the host system. You can next use the `curl` command to verify the NGINX pod is serving the web page that you created earlier.
# curl localhost:30433
Output from the `curl` command will show the web page that you created earlier.
<!DOCTYPE html>
<html lang="en">
<head>
<title>Testing Velero and LINSTOR within Kubernetes</title>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<style>
body {
font-family: Arial, Helvetica, sans-serif;
}
</style>
</head>
<body>
<h1>Backing Up Kubernetes With Velero and LINSTOR</h1>
<p>Unique file's checksum:
3504900342 1600942 /usr/share/nginx/html/velero.testing.file
</p>
</body>
</html>
You are now ready to back up your namespace deployment to an S3 remote.
Backing Up Your Namespace Deployment to an S3 Remote
Create an AWS S3 stored backup of your application’s Kubernetes deployment metadata by using the following Velero command:
# velero backup create backup01 --include-namespaces application-space --wait
Output from the command will be similar to the following:
Backup request "backup01" submitted successfully.
Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background.
............
Backup completed with status: Completed. You may check for more information using the commands `velero backup describe backup01` and `velero backup logs backup01`.
📝 NOTE: Because you configured LINSTOR to back the storage class underlying your application’s PVC, LINSTOR, by virtue of the CSI driver, will automatically take a volume snapshot as part of the Velero backup.
After some time, the backup will finish and you will find a `backups/backup01` folder within your S3 bucket and, within that folder, archives of your deployment’s metadata and volume snapshot contents.
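For example, if you have the AWS CLI installed and configured with credentials for the bucket, you could list the backup’s contents with a command similar to the following:
# aws s3 ls s3://linbit-velero-backup/backups/backup01/ --recursive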
Next, verify that your Velero backup includes a volume snapshot.
# velero describe backups backup01 --details
Output from the command will show the backup that you created.
Name: backup01
[...]
CSI Snapshots:
application-space/application-pvc:
Snapshot:
Operation ID: application-space/velero-application-pvc-[...]/2024-06-24T17:59:22Z
Snapshot Content Name: snapcontent-e73556b6-c524-4ef8-9168-e9e2f4760608
Storage Snapshot ID: snapshot-e73556b6-c524-4ef8-9168-e9e2f4760608
Snapshot Size (bytes): 0
CSI Driver: linstor.csi.linbit.com
[...]
To further verify the snapshot, you can SSH into one of your diskful Kubernetes worker nodes. You can then run the `lsblk` command to show that there is an LVM volume with “snapshot” in its name and that it has a UID that matches the CSI volume snapshot ID listed in the output of the `velero describe backups` command that you ran earlier.
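For example, on a diskful worker node, commands similar to the following might help you spot the snapshot volume, assuming an LVM thin pool back end:
# lsblk --output NAME,SIZE,TYPE | grep -i snapshot
# lvs | grep -i snapshot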
Within Kubernetes, you can also verify that creating the Velero backup created a snapshot referenced by Kubernetes’ VolumeSnapshotContent resources, by using the `kubectl get VolumeSnapshotContent -A` command.
You now have a backup of your Kubernetes application’s namespace deployment, including a snapshot of its storage volume, stored in an S3 remote location.
Restoring Your Backup
Before disaster or migration needs require it, test that you can restore your backed up namespace deployment to a target Kubernetes cluster. First you will need to create a second cluster that will be your target, if you have not done so already.
Creating a Second Kubernetes Cluster and Installing LINSTOR
Create a second (target) Kubernetes cluster and install LINSTOR in it, just as you did with your first (source) Kubernetes cluster.
If this were a production deployment that was part of a disaster recovery solution, you would want your second Kubernetes cluster at a different physical site from your source Kubernetes cluster.
📝 NOTE: In your target cluster, just as in your source cluster, you will need to have an enabled snapshot feature that is supported by the Kubernetes CSI driver. You also need to have the snapshot-controller and snapshot-validation-webhook pods running in your target Kubernetes cluster.
Recreating the Original Cluster’s Storage Class
To restore your backup, you will need to prepare a storage class in your newly created target Kubernetes cluster.
Follow the instructions within the Creating a Storage Class for Your Application’s Persistent Data section above, within your target Kubernetes cluster, to create an identical storage class to the one in your source Kubernetes cluster. You will also need to create an encryption passphrase in your target cluster, if you have not done so already. The encryption passphrase is needed to create an S3 LINSTOR remote object.
Adding a LINSTOR Remote to your Target Kubernetes Cluster
To restore your backup, the LINSTOR controller pod within Kubernetes needs to know about the details of your backup’s remote location. You provide this information to the LINSTOR controller pod by creating a LINSTOR remote object with your S3 bucket’s details.
# s3_bucket_name=<your_s3_bucket_name>
# s3_access_key=<your_access_key_id>
# s3_secret_key=<your_secret_access_key>
# kubectl exec deployment/linstor-controller -n linbit-sds -- \
> linstor remote create s3 linbit-velero-backup \
> s3.us-east-2.amazonaws.com \
> $s3_bucket_name \
> us-east-2 \
> $s3_access_key \
> $s3_secret_key
❗ IMPORTANT: Replace <your_access_key_id> and <your_secret_access_key> with your S3 credentials. Replace <your_s3_bucket_name> with the name of your S3 bucket. Replace “us-east-2” with the AWS region where your S3 bucket resides, if it is in a different region.
After running this command, you will receive a message saying that LINSTOR successfully created the remote.
You can verify this by entering the following command:
# kubectl exec deployment/linstor-controller -n linbit-sds -- linstor remote list
Output from the command will list the remote that you created.
+-----------------------------------------------------------------------------------------+
| Name | Type | Info |
|=========================================================================================|
| linbit-velero-backup | S3 | us-east-2.s3.us-east-2.amazonaws.com/linbit-velero-backup |
+-----------------------------------------------------------------------------------------+
📝 NOTE: After verifying that you successfully created a LINSTOR remote in your target cluster, you might want to delete the lines in your BASH history file where you set variables to hold potentially sensitive keys.
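For example, in a Bash shell, you could find the relevant entries and delete them by their history offsets. The search pattern here only matches the example variable names used above:
# history | grep -E 's3_(access|secret)_key'
# history -d <offset>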
Installing Velero in Your Target Kubernetes Cluster
To use Velero to restore your S3 backup to your target cluster, you will need to install Velero.
If you have not already done this, you can use the instructions in the Installing Velero section of this tutorial and repeat them on your target cluster.
Using Velero to Restore Your Backup to Your Target Kubernetes Cluster
After preparing your target Kubernetes cluster, you can now use Velero to restore your backed up Kubernetes application namespace, including the application’s persistent data volume. LINSTOR will automatically restore the storage class volume from the LVM or ZFS volume snapshot that was created as part of your backup.
Enter the command:
# velero restore create --from-backup=backup01
You will receive a message that your Velero restore command was submitted successfully.
You can use the `velero restore get` command to verify that your restore command completed successfully.
# velero restore get
Output from this command should show that the backup restore process “Completed” with “0” errors.
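If you want more detail about a particular restore, for example to investigate warnings, you can describe it by the name shown in the `velero restore get` output. The restore name placeholder here is just an example:
# velero restore describe <restore_name>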
Verifying Your Restored Application
Next, you can verify that Velero restored your backed up application and its persistent data storage by confirming that the NGINX pod is up and running in your target cluster and that it is serving your test web page.
First, change your context to the `application-space` namespace, so that the `kubectl` commands that follow will happen within the `application-space` namespace.
# kubectl config set-context --current --namespace application-space
Output from the command will show that the context has changed.
Context "kubernetes-admin@kubernetes" modified.
Then, confirm that you have an NGINX pod up and running.
# kubectl get pods
Output from the command will show that the NGINX pod is up and running.
NAME READY STATUS RESTARTS AGE
nginx-[...] 1/1 Running 0 70m
Next, confirm that you have a NodePort service restored and running.
# kubectl get service
Output from the command will list the `nginx` NodePort service.
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx NodePort 10.96.153.43 <none> 80:30544/TCP 71m
With the mapped port from the command output, 30544 in the example above, you can use the `curl` command to verify that your NGINX pod is serving your web page from your target cluster.
# curl localhost:30544
Command output will show your webpage, in HTML.
<!DOCTYPE html>
<html lang="en">
<head>
<title>Testing Velero and LINSTOR within Kubernetes</title>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<style>
body {
font-family: Arial, Helvetica, sans-serif;
}
</style>
</head>
<body>
<h1>Backing Up Kubernetes With Velero and LINSTOR</h1>
<p>Unique file's checksum:
3504900342 1600942 /usr/share/nginx/html/velero.testing.file
</p>
</body>
</html>
You can further verify that Velero and LINSTOR restored your application’s persistent data storage by shelling into your NGINX pod, running a checksum command against the unique file that you created in the NGINX pod in your original Kubernetes cluster, and comparing the result to your earlier one.
# kubectl exec -it deployment/nginx -- /bin/bash
root@nginx-[...]:/# cksum /usr/share/nginx/html/velero.testing.file
Output from the command will be similar to the following:
3504900342 1600942 /usr/share/nginx/html/velero.testing.file
By entering a `df` command from within your NGINX pod, you can verify that in your target cluster, NGINX’s data location is backed by a DRBD block device.
root@nginx-[...]:/# df -h | awk 'NR==1 || /nginx/'
Output from the command will show a DRBD device backing the NGINX data location.
Filesystem Size Used Avail Use% Mounted on
/dev/drbd1000 980M 4.1M 909M 1% /usr/share/nginx/html
Next, exit out of your NGINX pod, and verify that LINSTOR is managing your application’s persistent data storage in your target cluster, as it was in your source cluster.
root@nginx-[...]:/# exit
# kubectl exec deployment/linstor-controller \
--namespace linbit-sds -- linstor --machine-readable resource list \
| grep -e device_path -e backing_disk
Output from this command will show that LINSTOR has deployed the resource on three nodes, each using the DRBD device `/dev/drbd1000`. The device name is the same as the DRBD-backed device in your NGINX pod.
"device_path": "/dev/drbd1000",
"backing_disk": "/dev/drbdpool/pvc-[...]",
"device_path": "/dev/drbd1000",
"device_path": "/dev/drbd1000",
"backing_disk": "/dev/drbdpool/pvc-[...]",
You might notice that the command above only shows two backing disks. This is because in a 3-node cluster, LINSTOR by default will deploy one of the resource replicas as “Diskless” for quorum purposes.
You can run the `linstor resource list` command within the LINSTOR storage controller pod to show this.
# kubectl exec deployment/linstor-controller --namespace linbit-sds -- \
linstor resource list
Output from the command will show two up-to-date diskful resources and one diskless resource.
+---------------------------------------------------------------------------------+
| ResourceName | Node | Port | Usage | Conns | State | CreatedOn |
|=================================================================================|
| pvc-[...] | kube-b0 | 7000 | Unused | Ok | UpToDate | 2022-08-19 12:58:26 |
| pvc-[...] | kube-b1 | 7000 | InUse | Ok | Diskless | 2022-08-19 12:58:25 |
| pvc-[...] | kube-b2 | 7000 | Unused | Ok | UpToDate | 2022-08-19 12:58:23 |
+---------------------------------------------------------------------------------+
Conclusion
Congratulations! You have successfully backed up and restored a Kubernetes namespace deployment and its persistent data storage. Using Velero, LINBIT SDS, and snapshot capable storage volumes, you have created a solid system for backing up and restoring your Kubernetes deployments.
While this solution involves a significant amount of preparation, with a remote, off-site S3 backup and a second off-site Kubernetes cluster in place and ready to restore your S3 backup to, you can have confidence that when disaster strikes your production cluster, you have a recovery plan ready.
You can also adapt and use the instructions in this tutorial to migrate a Kubernetes application and its data storage to another cluster, for example if you have upgraded hardware.
Your next steps might involve investigating Velero’s scheduled backup feature to automate creating your backups.
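As a starting point, a scheduled backup of the example namespace might look similar to the following command, which uses a cron expression to create a backup every day at 02:00. The schedule name and cron expression here are only examples:
# velero schedule create daily-application-space \
--schedule="0 2 * * *" \
--include-namespaces application-space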
Changelog
2022-09-01:
- Originally written
2023-05-12:
- Revised opening paragraph to better describe LINBIT SDS
- Added a NOTE admonition to explain the snapshot-controller component
- Other minor language and command changes that did not affect technical content
2024-07-10:
- Updated CSI snapshotter version number to 8.0.1 and compatible Kubernetes version number to 1.25 or greater.
- Removed separate snapshot-validation-webhook Helm chart install. The snapshot-controller chart now installs this component.
- Updated Kubernetes version to 1.28.1 and Velero version to 1.13.1.
- Updated Velero AWS plugin to version 1.9.1.
- Updated Velero CSI plugin to version 0.7.1.
- Added instructions for creating a LINSTOR encryption passphrase, which is needed to create a LINSTOR S3 remote when using LINSTOR Operator v2.
- Separated command output from command input code blocks.
- Added admonition about S3 bucket names needing to be globally unique.