Authors: Joel Zhou & Michael Troutman
—
In a Kubernetes cluster that has persistent storage managed by LINBIT SDS, you can have a high-availability solution for the data that your applications and services rely on. LINBIT SDS is the open source software-defined storage stack developed by LINBIT® that consists of LINSTOR®, DRBD®, and related utilities. To complement this setup, you will likely want to consider a backup solution for your Kubernetes deployments. This is important for a variety of reasons: mitigating the effects of a local cluster disaster and having a potential migration tool are two valuable use cases.
This tutorial will describe how to use Velero to back up and restore a Kubernetes cluster namespace deployment, including its persistent storage data, to and from a remote S3 bucket. With LINBIT SDS deployed into your cluster, LINSTOR automatically manages storage volume snapshots of your application’s persistently stored data. Velero is the tool that you can use to back up and restore these snapshots. It even has a feature to automatically create backups on a schedule.
For an example use case, you will use this backup and restore solution to migrate a Kubernetes NGINX deployment from one Kubernetes cluster with LINBIT SDS to a second Kubernetes cluster with LINBIT SDS. You can use this method either for your own migration purposes or to create a disaster recovery solution, to name two potential uses.
Prerequisites
To successfully use the instructions in this tutorial, you will need to have:
- An available AWS S3 bucket. This article uses AWS S3 but you might be able to use other S3-compatible storage providers. See the Velero project’s AWS S3 compatible storage providers list for more information.
- Two running Kubernetes clusters with LINBIT SDS installed in both. If you have not installed LINBIT SDS yet, you can follow the instructions in Integrating LINBIT SDS with Kubernetes, a Quick Start Guide, and repeat them for your second cluster. For compatibility with the Kubernetes Container Storage Interface (CSI) Snapshotter version 6.2.1, use Kubernetes version 1.20 or later. Check the CSI Snapshotter’s releases page for compatibility requirements for the version you are using.
- In both clusters, a storage back end that supports a “snapshot” feature, for example, an LVM thin pool or ZFS pool. The storage back end must be the same on both clusters, because a snapshot can only be restored to the same type of back end that it was created from.
- Helm installed on your Kubernetes controller node in both clusters. If Helm is not installed, follow these installation instructions from the Helm project, or follow the instructions within the Integrating LINBIT SDS with Kubernetes quick start guide.
- An enabled snapshot feature that is supported by the Kubernetes Container Storage Interface (CSI) driver, with the snapshot-controller and snapshot-validation-webhook pods running in your Kubernetes clusters.
You can verify this by using the following command to show your CustomResourceDefinition API resources (CRDs):
# kubectl get crds volumesnapshots.snapshot.storage.k8s.io \
volumesnapshotclasses.snapshot.storage.k8s.io \
volumesnapshotcontents.snapshot.storage.k8s.io
If you do not get any output from this command showing that these CRDs were already created, you will need to install the snapshot validation webhook and the volume snapshot controller into your Kubernetes cluster, using Helm charts. Installing the volume snapshot controller from the Helm chart will also install the necessary CRDs. You can verify this by repeating the kubectl get crds command above, after running the following Helm commands:
# helm repo add piraeus-charts https://piraeus.io/helm-charts/
# helm install --namespace kube-system snapshot-validation-webhook \
piraeus-charts/snapshot-validation-webhook
# helm install --namespace kube-system snapshot-controller \
piraeus-charts/snapshot-controller
NOTE: The snapshot-controller is an optional Kubernetes component that is necessary for this use case. You can conveniently install this component by using the Piraeus Datastore Helm charts shown in the commands above. Piraeus Datastore is a LINBIT-developed and Cloud Native Computing Foundation (CNCF) endorsed open source project.
Installing Velero
After confirming that your clusters have the necessary prerequisites, you can install Velero in both clusters. You will need an S3 bucket available for Velero’s backup and restore sets. This tutorial uses an AWS S3 bucket called linbit-velero-backup.
Create a secret file that has an AWS access key ID and an AWS secret access key for a user with access credentials to your S3 bucket. Use a text editor to create the file /tmp/s3.secret with the following contents:
[default]
aws_access_key_id=<your_S3_access_key_id>
aws_secret_access_key=<your_S3_secret_access_key>
IMPORTANT: Replace <your_S3_access_key_id> and <your_S3_secret_access_key> with your S3 credentials.
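Because this file contains credentials, you might also want to restrict its permissions, for example:
# chmod 600 /tmp/s3.secret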
Downloading Velero
Find the appropriate version of Velero for your version of Kubernetes, then download the Velero tar file from the Velero project’s official GitHub repository. This tutorial uses Kubernetes version 1.26.1 and Velero version 1.11.0. You can find the Velero compatibility table on Velero’s GitHub page.
# vel_vers=v1.11.0
# curl -fsSL -o velero-$vel_vers-linux-amd64.tar.gz \
https://github.com/vmware-tanzu/velero/releases/download/$vel_vers/velero-$vel_vers-linux-amd64.tar.gz
After verifying the checksum of the downloaded tar file, extract the tar file’s contents, and copy the extracted velero executable file into a directory in your $PATH.
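The exact commands depend on your environment, but the following sketch shows one way to verify and install the downloaded release, assuming that /usr/local/bin is in your $PATH:
# sha256sum velero-$vel_vers-linux-amd64.tar.gz   # compare against the checksum published on the release page
# tar -xzf velero-$vel_vers-linux-amd64.tar.gz
# cp velero-$vel_vers-linux-amd64/velero /usr/local/bin/
# velero version --client-only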
Installing Velero Into Kubernetes
Next, create environment variables for your AWS S3 bucket name, your AWS region, and the version numbers of the Velero AWS plugin and the Velero CSI plugin that are compatible with your version of Velero. This article uses versions 1.7.0 and 0.5.0 respectively, for Velero version 1.11.0. You can find compatibility information for the plugins at their respective GitHub pages: Velero plugin for AWS and Velero plugin for CSI.
# aws_bucket=linbit-velero-backup
# aws_region=us-east-2
# aws_plugin_vers=v1.7.0
# csi_plugin_vers=v0.5.0
Then, install Velero by using the following command:
# velero install \
--features=EnableCSI \
--use-volume-snapshots \
--plugins=velero/velero-plugin-for-aws:$aws_plugin_vers,velero/velero-plugin-for-csi:$csi_plugin_vers \
--provider aws --bucket $aws_bucket --secret-file /tmp/s3.secret \
--backup-location-config region=$aws_region \
--snapshot-location-config region=$aws_region
Output from the command should show Velero resources being created and eventually show confirmation of a successful deployment and installation.
[...]
Deployment/velero: created
Velero is installed! Use 'kubectl logs deployment/velero -n velero' to view the status.
After a while, entering a kubectl get pods -n velero command should show that a new Velero server pod, named velero, is up and running. You can also follow the status of the Velero deployment with the following command:
# kubectl logs -f deployment/velero -n velero
Configuring Kubernetes for Velero
To prepare your Kubernetes environment for Velero backups, you need to create:
- A storage class for your Kubernetes application’s persistent storage
- A Kubernetes secret for your S3 credentials
- A volume snapshot class for provisioning volume snapshots
Creating a Storage Class for Your Application’s Persistent Data
First, you will need to create a new storage class within the LINSTOR CSI provisioner. Your Kubernetes application pod will use this storage class for your application’s persistent data. Create a YAML configuration file, /tmp/01_storage_class.yaml, that will describe the storage class.
# cat > /tmp/01_storage_class.yaml <<EOF
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: linbit
provisioner: linstor.csi.linbit.com
parameters:
  autoPlace: "2"
  storagePool: MyThinSP
EOF
IMPORTANT: The storagePool key-value pair should reference an existing LINSTOR LVM thin pool or ZFS storage pool. If you need to verify which storage pools are available, you can enter the following command:
# kubectl exec deployment/linstor-op-cs-controller -- linstor storage-pool list
Next, apply this YAML configuration to your Kubernetes cluster.
# kubectl apply -f /tmp/01_storage_class.yaml
storageclass.storage.k8s.io/linbit created
Verify the storage class with this command:
# kubectl get storageclasses
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
linbit linstor.csi.linbit.com Delete Immediate false 3s
If you have arrived here while preparing your second (target) Kubernetes cluster, you can return to the Adding a LINSTOR Remote to your Target Kubernetes Cluster section and continue the restore process from there.
Creating an S3 Secret
Create a Kubernetes secret that will hold your S3 credentials. To do this, create a YAML configuration file, /tmp/s3_secret.yaml, that has the following contents:
kind: Secret
apiVersion: v1
metadata:
  name: linstor-csi-s3-access
  namespace: default
immutable: true
type: linstor.csi.linbit.com/s3-credentials.v1
stringData:
  access-key: <your_S3_access_key_id>
  secret-key: <your_S3_secret_access_key>
IMPORTANT: Replace <your_S3_access_key_id> and <your_S3_secret_access_key> with your S3 credentials.
Next, deploy your secret by applying this configuration file to your Kubernetes cluster.
# kubectl apply -f /tmp/s3_secret.yaml
secret/linstor-csi-s3-access created
Creating a Volume Snapshot Class
Next, create a volume snapshot class for provisioning the volume snapshots that Velero will use. Create a YAML configuration file, /tmp/02_snapshot_class.yaml, that has the following contents:
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: linstor-csi-snapshot-class-s3
  labels:
    velero.io/csi-volumesnapshot-class: "true"
driver: linstor.csi.linbit.com
deletionPolicy: Retain
parameters:
  snap.linstor.csi.linbit.com/type: S3
  snap.linstor.csi.linbit.com/remote-name: linbit-velero-backup
  snap.linstor.csi.linbit.com/allow-incremental: "true"
  snap.linstor.csi.linbit.com/s3-bucket: linbit-velero-backup
  snap.linstor.csi.linbit.com/s3-endpoint: s3.us-east-2.amazonaws.com
  snap.linstor.csi.linbit.com/s3-signing-region: us-east-2
  # Refer here to the secret that holds the access key and secret key for the S3 endpoint.
  csi.storage.k8s.io/snapshotter-secret-name: linstor-csi-s3-access
  csi.storage.k8s.io/snapshotter-secret-namespace: default
Next, apply this configuration file to your Kubernetes cluster.
# kubectl apply -f /tmp/02_snapshot_class.yaml
volumesnapshotclass.snapshot.storage.k8s.io/linstor-csi-snapshot-class-s3 created
Configuring Velero and Installing an Example Pod
With Velero installed into your Kubernetes cluster, you can configure it and test its backup and restore capabilities by creating an example pod. In this guide, you will create an NGINX deployment.
Enabling the CSI Feature on the Velero Client Configuration
To prepare to use Velero to back up your Kubernetes metadata, you enabled the Velero CSI feature on the Velero “server” pod by using the --features flag in your velero install command. You also need to enable this feature within the Velero client, by entering the following command:
# velero client config set features=EnableCSI
You can verify that the CSI feature is enabled by entering the following command:
# velero client config get features
features: EnableCSI
Creating an Example Pod
In this tutorial, you will create a basic NGINX pod to test Velero’s backup and restore capabilities.
Changing the Kubernetes Namespace
First, create a new Kubernetes namespace for testing Velero.
# kubectl create namespace application-space
Next, change your context to the application-space namespace, so that the kubectl commands that follow will happen within the application-space namespace.
# kubectl config set-context --current --namespace application-space
Context "[email protected]" modified.
Creating a Persistent Volume Claim
To have LINSTOR manage your example pod’s persistent storage, you will need to create a persistent volume claim (PVC) that references the linbit storage class that you created earlier. You can do this by first creating a YAML configuration file for the persistent volume claim.
# cat > /tmp/03_pvc.yaml <<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: application-pvc
  namespace: application-space
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: linbit
EOF
Next, apply the configuration to your Kubernetes cluster.
# kubectl apply -f /tmp/03_pvc.yaml
persistentvolumeclaim/application-pvc created
You can verify the deployment of your PVC by using a kubectl get command.
# kubectl get persistentvolumeclaims
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
application-pvc Bound pvc-[...] 1Gi RWO linbit 23s
Deploying the Example NGINX Pod
With a PVC created, you can create an example NGINX pod that has its web content directory (/usr/share/nginx/html) on the PVC. First, create the pod’s YAML configuration file.
# cat > /tmp/04_nginx.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: application-space
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx:latest
        name: nginx
        ports:
        - containerPort: 80
          name: nginx-webserver
        volumeMounts:
        - name: nginx-persistent-storage
          mountPath: /usr/share/nginx/html
      volumes:
      - name: nginx-persistent-storage
        persistentVolumeClaim:
          claimName: application-pvc
EOF
Next, apply the configuration to your Kubernetes cluster.
# kubectl apply -f /tmp/04_nginx.yaml
deployment.apps/nginx created
After some time, your example pod should come up within the application-space namespace. You can confirm that it is up and running by using a basic kubectl get pods command.
# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-[...] 1/1 Running 0 63s
You can also watch your pod come up in real time, by using the following command:
$ watch kubectl get pods
Testing Velero’s Backup and Restore Functions
With Velero installed and configured and an example pod up and running, you can next test Velero’s backup capabilities to an S3 bucket.
Creating a Unique Reference File and Web Page Within Your NGINX Pod
For testing Velero’s backup and restore capabilities, first create a unique reference file in your pod’s NGINX data directory. You can check for this reference file after restoring your backup. This will confirm that Velero backed up and restored from a snapshot of your application’s LINSTOR-managed persistent storage volume.
Borrowing an example from the Velero documentation, log into your NGINX example pod and create a unique file, then get its checksum for verification purposes later.
# kubectl exec -it deployment/nginx -- /bin/bash
root@nginx-[...]:/# while true; do echo -n "LINBIT " >> /usr/share/nginx/html/velero.testing.file; sleep 0.1s; done
^C
root@nginx-[...]:/# cksum /usr/share/nginx/html/velero.testing.file
3504900342 1600942 /usr/share/nginx/html/velero.testing.file
Next, create a simple web page that references your unique file.
# cat > /usr/share/nginx/html/index.html <<EOF
<!DOCTYPE html>
<html lang="en">
<head>
<title>Testing Velero and LINSTOR within Kubernetes</title>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<style>
body {
font-family: Arial, Helvetica, sans-serif;
}
</style>
</head>
<body>
<h1>Backing Up Kubernetes With Velero and LINSTOR</h1>
<p>Unique file's checksum:
EOF
Next, add the checksum of your unique file to your web page and then add closing HTML tags to complete the creation of your web page.
# echo `cksum /usr/share/nginx/html/velero.testing.file` >> /usr/share/nginx/html/index.html
# cat >> /usr/share/nginx/html/index.html <<EOF
</p>
</body>
</html>
EOF
After creating a simple web page, return to your Kubernetes controller node by exiting your NGINX pod.
root@nginx-[...]:/# exit
Exposing the NGINX Pod’s HTTP Port
After your NGINX pod is up and running, expose its HTTP port (80) to your host system by mapping the pod’s internal port to a port on your host system. You can use a NodePort service to do this quickly. If you are working in a production environment, however, you would want to use an Ingress resource and Ingress controller for this task.
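For reference, in a production environment an Ingress resource might look something like the following minimal sketch. It assumes that an Ingress controller is installed, that its IngressClass is named nginx, that nginx.example.com is a placeholder hostname, and that a Service named nginx exists in the application-space namespace (the NodePort service created in the next step would also work):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  namespace: application-space
spec:
  ingressClassName: nginx
  rules:
  - host: nginx.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx
            port:
              number: 80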
If you are working in a test environment, you can expose your NGINX pod’s port 80 through a NodePort service by using the following command:
# kubectl create service nodeport nginx --tcp=80:80
service/nginx created
Verifying Your Example Web Page
You can verify that the Kubernetes NodePort service has exposed your NGINX pod on your host system by first getting the nginx NodePort service’s mapped port:
# kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx NodePort 10.110.25.255 <none> 80:30433/TCP 1d
Here, the NGINX pod’s port 80 is mapped to port 30433 on the host system. You can next use the curl command to verify that the NGINX pod is serving the web page that you created earlier.
# curl localhost:30433
<!DOCTYPE html>
<html lang="en">
<head>
<title>Testing Velero and LINSTOR within Kubernetes</title>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<style>
body {
font-family: Arial, Helvetica, sans-serif;
}
</style>
</head>
<body>
<h1>Backing Up Kubernetes With Velero and LINSTOR</h1>
<p>Unique file's checksum:
3504900342 1600942 /usr/share/nginx/html/velero.testing.file
</p>
</body>
</html>
You are now ready to back up your namespace deployment to an S3 remote.
Backing Up Your Namespace Deployment to an S3 Remote
Create an AWS S3 stored backup of your application’s Kubernetes deployment metadata by using the following Velero command:
# velero backup create backup01 --include-namespaces application-space --wait
Backup request "backup01" submitted successfully.
Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background.
............
Backup completed with status: Completed. You may check for more information using the commands `velero backup describe backup01` and `velero backup logs backup01`.
NOTE: Because you configured LINSTOR to back the storage class underlying your application’s PVC, LINSTOR, by virtue of the CSI driver, will automatically take a volume snapshot as part of the Velero backup.
After some time, the backup will finish and you should find a backups/backup01 folder within your S3 bucket and, within that folder, archives of your deployment’s metadata and volume snapshot contents.
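For example, if you have the AWS CLI installed and configured with credentials for the bucket, you could list the backup’s contents with a command such as the following (a sketch that assumes the bucket name used in this tutorial):
# aws s3 ls s3://linbit-velero-backup/backups/backup01/ --recursive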
Next, verify that your Velero backup includes a volume snapshot.
# velero backup describe backup01 --details
Name: backup01
[...]
CSI Volume Snapshots:
Snapshot Content Name: snapcontent-e73556b6-c524-4ef8-9168-e9e2f4760608
Storage Snapshot ID: snapshot-e73556b6-c524-4ef8-9168-e9e2f4760608
Snapshot Size (bytes): 1073741824
Ready to use: true
To further verify the snapshot, you can SSH into one of your diskful Kubernetes worker nodes. You can then run the lsblk command to show that there is an LVM volume with “snapshot” in its name and that it has a UID that matches the CSI volume snapshot ID listed in the output of the velero backup describe command that you ran earlier.
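On a diskful worker node, a quick filter such as the following sketch might help you spot the snapshot volume; the exact device and volume names will depend on your storage pool configuration:
# lsblk | grep -i snapshot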
Within Kubernetes, you can also verify that creating the Velero backup created a snapshot referenced in a Kubernetes VolumeSnapshotContent resource, by using the kubectl get VolumeSnapshotContent -A command.
You now have a backup of your Kubernetes application’s namespace deployment, including a snapshot of its storage volume, stored in an S3 remote location.
Restoring Your Backup
Before disaster or migration needs require it, test that you can restore your backed up namespace deployment to a target Kubernetes cluster. First you will need to create a second cluster that will be your target, if you have not done so already.
Creating a Second Kubernetes Cluster and Installing LINSTOR
Create a second (target) Kubernetes cluster and install LINSTOR in it, just as you did with your first (source) Kubernetes cluster.
If this were a production deployment that was part of a disaster recovery solution, you would want your second Kubernetes cluster at a different physical site from your source Kubernetes cluster.
NOTE: In your target cluster, just as in your source cluster, you will need to have an enabled snapshot feature that is supported by the Kubernetes CSI driver. You also need to have the snapshot-controller and snapshot-validation-webhook pods running in your target Kubernetes cluster.
Recreating the Original Cluster’s Storage Class
To restore your backup, you will need to prepare a storage class in your newly created target Kubernetes cluster.
Follow the instructions within the Creating a Storage Class for Your Application’s Persistent Data section above, within your target Kubernetes cluster, to create an identical storage class to the one in your source Kubernetes cluster.
Adding a LINSTOR Remote to your Target Kubernetes Cluster
To restore your backup, the LINSTOR Controller within Kubernetes needs to know about the details of your backup’s remote location. You provide this information to the LINSTOR Controller by creating a LINSTOR remote object with your S3 bucket’s details:
# s3_bucket_name=<your_s3_bucket_name>
# s3_access_key=<your_access_key_id>
# s3_secret_key=<your_secret_access_key>
# kubectl exec deployment/linstor-op-cs-controller -- \
> linstor remote create s3 linbit-velero-backup \
> s3.us-east-2.amazonaws.com \
> $s3_bucket_name \
> us-east-2 \
> $s3_access_key \
> $s3_secret_key
IMPORTANT: Replace <your_access_key_id> and <your_secret_access_key> with your S3 credentials. Replace <your_s3_bucket_name> with the name of your S3 bucket.
Replace “us-east-2” with the AWS region where your S3 bucket resides, if it is in a different region.
After running this command, you should receive a message saying that LINSTOR successfully created the remote.
You can verify this by entering the following command:
# kubectl exec deployment/linstor-op-cs-controller -- linstor remote list
+-----------------------------------------------------------------------------------------+
| Name | Type | Info |
|=========================================================================================|
| linbit-velero-backup | S3 | us-east-2.s3.us-east-2.amazonaws.com/linbit-velero-backup |
+-----------------------------------------------------------------------------------------+
NOTE: After verifying that you successfully created a LINSTOR remote in your target cluster, you might want to delete the lines in your Bash history file where you set variables to hold potentially sensitive keys.
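For example, you might unset the variables and delete the matching history entries. This is only a sketch; the history line numbers will differ on your system:
# unset s3_bucket_name s3_access_key s3_secret_key
# history | grep -e s3_access_key -e s3_secret_key
# history -d <line_number>   # repeat for each history line that contains a key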
Installing Velero in Your Target Kubernetes Cluster
To use Velero to restore your S3 backup to your target cluster, you will need to install Velero.
If you have not already done this, you can use the instructions in the Installing Velero section of this tutorial and repeat them on your target cluster.
Using Velero to Restore Your Backup to Your Target Kubernetes Cluster
After preparing your target Kubernetes cluster, you can now use Velero to restore your backed up Kubernetes application namespace, including the application’s persistent data volume. LINSTOR will automatically restore the storage class volume from the LVM or ZFS volume snapshot that was created as part of your backup.
Enter the command:
# velero restore create --from-backup=backup01
You should receive a message that your Velero restore command was submitted successfully.
You can use the velero restore get command to verify that your restore command completed successfully.
# velero restore get
Output from this command should show that the backup restore process “Completed” with “0” errors.
Verifying Your Restored Application
Next, you can verify that Velero restored your backed up application and its persistent data storage by confirming that the NGINX pod is up and running in your target cluster and that it is serving your test web page.
First, change your context to the application-space namespace, so that the kubectl commands that follow will happen within the application-space namespace.
# kubectl config set-context --current --namespace application-space
Context "[email protected]" modified.
Then, confirm that you have an NGINX pod up and running.
# kubectl get pods
NAME READY STATUS RESTARTS AGE
nginx-[...] 1/1 Running 0 70m
Next, confirm that you have a NodePort service restored and running.
# kubectl get service
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
nginx NodePort 10.96.153.43 <none> 80:30544/TCP 71m
With the mapped port from the command output, 30544 in the example above, you can use the curl command to verify that your NGINX pod is serving your web page from your target cluster.
# curl localhost:30544
<!DOCTYPE html>
<html lang="en">
<head>
<title>Testing Velero and LINSTOR within Kubernetes</title>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<style>
body {
font-family: Arial, Helvetica, sans-serif;
}
</style>
</head>
<body>
<h1>Backing Up Kubernetes With Velero and LINSTOR</h1>
<p>Unique file's checksum:
3504900342 1600942 /usr/share/nginx/html/velero.testing.file
</p>
</body>
</html>
You can further verify that Velero and LINSTOR restored your application’s persistent data storage by shelling into your NGINX pod, running a checksum command against the unique file that you created in the NGINX pod in your original Kubernetes cluster, and comparing the result to your earlier checksum.
# kubectl exec -it deployment/nginx -- /bin/bash
root@nginx-[...]:/# cksum /usr/share/nginx/html/velero.testing.file
3504900342 1600942 /usr/share/nginx/html/velero.testing.file
By entering a df command from within your NGINX pod, you can verify that in your target cluster, NGINX’s data location is backed by a DRBD block device.
root@nginx-[...]:/# df -h | awk 'NR==1 || /nginx/'
Filesystem Size Used Avail Use% Mounted on
/dev/drbd1000 980M 4.1M 909M 1% /usr/share/nginx/html
Next, exit out of your NGINX pod, and verify that LINSTOR is managing your application’s persistent data storage in your target cluster, as it was in your source cluster.
root@nginx-[...]:/# exit
# kubectl exec deployment/linstor-op-cs-controller \
--namespace default -- linstor --machine-readable resource list \
| grep -e device_path -e backing_disk
"device_path": "/dev/drbd1000",
"backing_disk": "/dev/drbdpool/pvc-[...]",
"device_path": "/dev/drbd1000",
"device_path": "/dev/drbd1000",
"backing_disk": "/dev/drbdpool/pvc-[...]",
The output from this command shows that LINSTOR has deployed the DRBD device, here /dev/drbd1000, on three nodes. The device name is the same as the DRBD-backed device in your NGINX pod.
You might notice that the command above only shows two backing disks. This is because in a three-node cluster, LINSTOR by default will deploy one of the resource replicas as “Diskless” for quorum purposes.
You can run the linstor resource list command within the LINSTOR Operator controller pod to show this.
# kubectl exec deployment/linstor-op-cs-controller --namespace default -- linstor resource list
+---------------------------------------------------------------------------------+
| ResourceName | Node | Port | Usage | Conns | State | CreatedOn |
|=================================================================================|
| pvc-[...] | kube-b0 | 7000 | Unused | Ok | UpToDate | 2022-08-19 12:58:26 |
| pvc-[...] | kube-b1 | 7000 | InUse | Ok | Diskless | 2022-08-19 12:58:25 |
| pvc-[...] | kube-b2 | 7000 | Unused | Ok | UpToDate | 2022-08-19 12:58:23 |
+---------------------------------------------------------------------------------+
Conclusion
Congratulations! You have successfully backed up and restored a Kubernetes namespace deployment and its persistent data storage. Using Velero, LINBIT SDS, and snapshot capable storage volumes, you have created a solid system for backing up and restoring your Kubernetes deployments.
While this solution involves a significant amount of preparation, with a remote, off-site S3 backup and a second Kubernetes cluster in place and ready to restore that backup to, you can be confident that you have a recovery plan ready when disaster strikes your production cluster.
You can also adapt and use the instructions in this tutorial to migrate a Kubernetes application and its data storage to another cluster, for example if you have upgraded hardware.
Your next steps might involve investigating Velero’s scheduled backup feature to automate creating your backups.
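For example, a daily backup of the application-space namespace might be scheduled with a command along these lines; the schedule name and cron expression here are only placeholders:
# velero schedule create daily-application-backup \
  --schedule="0 2 * * *" \
  --include-namespaces application-space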