Using Velero & LINSTOR to Back Up & Restore a Kubernetes Deployment

This article will describe how you can use LINSTOR® and Velero to back up and restore a Kubernetes deployment, including persistent storage volume data, to and from a remote AWS S3 bucket. This post builds on a solution described in an earlier article, Backing Up Your Kubernetes Deployments With Velero & LINSTOR. You may want to read or revisit that article first because it will give you more background on LINSTOR and Velero. This article will reuse some of the configurations and commands from the first article.

The backup and restore solution in the earlier article was a hybrid solution: Kubernetes’ namespace deployment metadata was backed up to a remote location, while its persistent storage data was replicated locally. This article will offer a remote solution for both.

For an example use case, you will use this backup and restore solution to migrate a Kubernetes NGINX deployment from one LINSTOR-backed Kubernetes cluster to a second. You can use this methodology for your own migration purposes, or as the basis for a disaster recovery solution, to name two potential uses.

Prerequisites

To successfully use the instructions in this article, you will need to have:

  • An available AWS S3 bucket. This article uses AWS S3, but you might be able to use other cloud providers’ S3-compatible storage solutions. See the Velero project’s AWS S3-compatible storage providers list for more information.
  • Two running Kubernetes clusters with LINBIT® SDS installed in both. See Integrating LINBIT SDS with Kubernetes, a Quick Start Guide for instructions and repeat for your second cluster.
  • In both clusters, a storage back end that supports a “snapshot” feature, for example, an LVM thin pool or ZFS pool. The storage back end must be the same on both clusters, because a snapshot can only be restored to the same type of back end that it was created from.
  • Helm installed on your Kubernetes controller node in both clusters.
  • The snapshot feature enabled and supported by the Kubernetes Container Storage Interface (CSI) driver, with the snapshot-controller and snapshot-validation-webhook pods running in your Kubernetes clusters. You can verify this by using the following command:
# kubectl get crds volumesnapshots.snapshot.storage.k8s.io \
volumesnapshotclasses.snapshot.storage.k8s.io \
volumesnapshotcontents.snapshot.storage.k8s.io

If you do not get any output from this command showing that these CustomResourceDefinitions (CRDs) were already created, you will need to install the snapshot validation webhook and the volume snapshot controller into your Kubernetes cluster, using Helm charts. Installing the volume snapshot controller from the Helm chart will also install the necessary CRDs. You can verify this by repeating the kubectl get crds command above, after running the following Helm commands:

# helm repo add piraeus-charts https://piraeus.io/helm-charts/
# helm install --namespace kube-system snapshot-validation-webhook \
piraeus-charts/snapshot-validation-webhook
# helm install --namespace kube-system snapshot-controller \
piraeus-charts/snapshot-controller

NOTE: If Helm is not installed, follow these installation instructions from the Helm project, or follow the instructions within the Integrating LINBIT SDS with Kubernetes quick start guide.

Installing Velero

Next, install Velero in both clusters. You will need an AWS S3 bucket available for Velero’s backup and restore sets. This tutorial uses a bucket called linbit-velero-backup.

Create a secret file that has an AWS access key ID and an AWS secret access key for a user with access credentials to your S3 bucket. Use a text editor to create the file /tmp/s3.secret with the following contents:

[default]
aws_access_key_id=<your_S3_access_key_id>
aws_secret_access_key=<your_S3_secret_access_key>
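If you prefer to script this step, you can generate the file with a heredoc and lock it down so that only its owner can read it. A minimal sketch, using hypothetical placeholder credentials:

```shell
# Create the Velero credentials file; the key values below are
# hypothetical placeholders -- substitute your real AWS credentials.
cat > /tmp/s3.secret <<'EOF'
[default]
aws_access_key_id=AKIAEXAMPLEKEYID
aws_secret_access_key=ExampleSecretAccessKey
EOF

# A credentials file should not be world-readable.
chmod 600 /tmp/s3.secret
```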

Downloading Velero

Find the appropriate version of Velero for your version of Kubernetes, then download the Velero tar file from the official GitHub repository. This article uses Kubernetes version 1.24.3 and Velero version 1.9.0. You can find the Velero compatibility table on Velero’s GitHub page.

# vel_vers=v1.9.0
# curl -fsSL -o velero-$vel_vers-linux-amd64.tar.gz \
https://github.com/vmware-tanzu/velero/releases/download/$vel_vers/velero-$vel_vers-linux-amd64.tar.gz

After verifying the checksum of the downloaded tar file, extract the tar file’s contents, and copy the extracted velero executable file into a directory in your $PATH.
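These steps might look like the following sketch. Because the real download needs network access, a locally generated tarball and checksum file stand in for the release artifacts here; with the real download, you would compare against the checksum published on the Velero release page and install the binary to a directory such as /usr/local/bin.

```shell
vel_vers=v1.9.0
cd /tmp

# Stand-in tarball so that the commands below can run anywhere; with the
# real download, skip these three lines and use the published checksum file.
mkdir -p velero-$vel_vers-linux-amd64
touch velero-$vel_vers-linux-amd64/velero
tar -czf velero-$vel_vers-linux-amd64.tar.gz velero-$vel_vers-linux-amd64

# Verify the tarball against its checksum file, then extract it.
sha256sum velero-$vel_vers-linux-amd64.tar.gz > velero-$vel_vers-linux-amd64.tar.gz.sha256
sha256sum --check velero-$vel_vers-linux-amd64.tar.gz.sha256
tar -xzf velero-$vel_vers-linux-amd64.tar.gz

# Copy the velero executable into a directory in your $PATH
# (/tmp/velero-bin is used here only so that the sketch runs without root).
install -m 0755 velero-$vel_vers-linux-amd64/velero /tmp/velero-bin
```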

Installing Velero Into Kubernetes

Next, set shell variables for your AWS S3 bucket name, your AWS region, and the version numbers of the Velero AWS plug-in and the Velero CSI plug-in that are compatible with your version of Velero. This article uses versions 1.5.0 and 0.3.0 respectively, for Velero version 1.9.0.

# aws_bucket=linbit-velero-backup
# aws_region=us-east-2
# aws_plugin_vers=v1.5.0
# csi_plugin_vers=v0.3.0

Then, install Velero by using the following command:

# velero install \
--features=EnableCSI \
--use-volume-snapshots \
--plugins=velero/velero-plugin-for-aws:$aws_plugin_vers,velero/velero-plugin-for-csi:$csi_plugin_vers \
--provider aws --bucket $aws_bucket --secret-file /tmp/s3.secret \
--backup-location-config region=$aws_region \
--snapshot-location-config region=$aws_region

Output should show Velero resources being created and eventually show confirmation of a successful deployment and installation.

[...]
Deployment/velero: created
Velero is installed! Use 'kubectl logs deployment/velero -n velero' to view the status.

After a while, entering a kubectl get pods -A command should show that a new Velero server pod (its name prefixed with velero) is up and running in the velero namespace. You can also follow the status of the Velero deployment with the following command:

# kubectl logs -f deployment/velero -n velero

Configuring Kubernetes for Velero

To prepare your Kubernetes environment for Velero backups, you need to create:

  • A storage class for your Kubernetes application’s persistent storage
  • A Kubernetes secret for your S3 credentials
  • A volume snapshot class for provisioning volume snapshots

Creating a Storage Class for Your Application’s Persistent Data

First, you will need to create a new storage class within the LINSTOR CSI provisioner. Your Kubernetes application pod will use this storage class for your application’s persistent data. Create a YAML configuration file, /tmp/01_storage_class.yaml, that will describe the storage class.

# cat > /tmp/01_storage_class.yaml <<EOF 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: linbit
provisioner: linstor.csi.linbit.com
parameters:
  autoPlace: "2"
  storagePool: MyThinSP
EOF

IMPORTANT: The storagePool key-value pair should reference an existing LINSTOR storage pool. Should you need to verify which storage pools are available, you can enter the command:

# kubectl exec deployment/linstor-op-cs-controller -- linstor storage-pool list

Next, apply this configuration to your Kubernetes cluster.

# kubectl apply -f /tmp/01_storage_class.yaml
storageclass.storage.k8s.io/linbit created

Verify the storage class with this command:

# kubectl get storageclasses
NAME     PROVISIONER              RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
linbit   linstor.csi.linbit.com   Delete          Immediate           false                  3s

If you have arrived from the future and are applying the instructions in this section to your second Kubernetes cluster, you can return to your place in the tutorial at the Adding a LINSTOR Remote to your Target Kubernetes Cluster section.

Creating an S3 Secret

Create a Kubernetes secret that will hold your S3 credentials. Create a YAML configuration file, /tmp/s3_secret.yaml, that has the following contents:

kind: Secret
apiVersion: v1
metadata:
  name: linstor-csi-s3-access
  namespace: default
immutable: true
type: linstor.csi.linbit.com/s3-credentials.v1
stringData:
  access-key: <your_S3_access_key_id>
  secret-key: <your_S3_secret_key_id>

Next, deploy your secret by applying your configuration file to your Kubernetes cluster.

# kubectl apply -f /tmp/s3_secret.yaml
secret/linstor-csi-s3-access created

Creating a Volume Snapshot Class

Next, create a volume snapshot class for provisioning volume snapshots that Velero will use. Create a YAML configuration file, /tmp/02_snapshot_class.yaml that has the following contents:

apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: linstor-csi-snapshot-class-s3
  labels:
    velero.io/csi-volumesnapshot-class: "true"
driver: linstor.csi.linbit.com
deletionPolicy: Retain
parameters:
  snap.linstor.csi.linbit.com/type: S3
  snap.linstor.csi.linbit.com/remote-name: linbit-velero-backup
  snap.linstor.csi.linbit.com/allow-incremental: "true"
  snap.linstor.csi.linbit.com/s3-bucket: linbit-velero-backup
  snap.linstor.csi.linbit.com/s3-endpoint: s3.us-east-2.amazonaws.com
  snap.linstor.csi.linbit.com/s3-signing-region: us-east-2
  # Reference the secret that holds the access and secret keys for the S3 endpoint.
  csi.storage.k8s.io/snapshotter-secret-name: linstor-csi-s3-access
  csi.storage.k8s.io/snapshotter-secret-namespace: default

Next, apply your configuration file to your Kubernetes cluster.

# kubectl apply -f /tmp/02_snapshot_class.yaml
volumesnapshotclass.snapshot.storage.k8s.io/linstor-csi-snapshot-class-s3 created

Configuring Velero and Installing an Example Pod

With Velero installed into your Kubernetes cluster, you can configure it and test its backup and restore capabilities by creating an example pod. The earlier LINSTOR and Velero article used an example MySQL deployment in Kubernetes. In this guide, you will create an NGINX deployment.

Enabling the CSI Feature on the Velero Client Configuration

To prepare to use Velero to back up your Kubernetes metadata, you enabled the Velero CSI feature on the Velero “server” pod, by using the --features flag in your velero install command. You also need to enable this feature within the Velero client, by entering the following command:

# velero client config set features=EnableCSI

You can verify that the CSI feature is enabled by entering the following command:

# velero client config get features
features: EnableCSI

Creating an Example Pod

In this article, you will create a basic NGINX pod to test Velero’s backup and restore capabilities.

Changing the Kubernetes Namespace

First, create a new Kubernetes namespace for testing Velero.

# kubectl create namespace application-space

Next, change your context to the “application-space” namespace, so that the kubectl commands that follow will happen within the “application-space” namespace.

# kubectl config set-context --current --namespace application-space
Context "kubernetes-admin@kubernetes" modified.

Creating a Persistent Volume Claim

To have LINSTOR manage your example pod’s persistent storage, you will need to create a persistent volume claim (PVC) that references the linbit storage class that you created earlier. You can do this by first creating a YAML configuration file for the persistent volume claim.

# cat > /tmp/03_pvc.yaml <<EOF
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: application-pvc
  namespace: application-space
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: linbit
EOF

Next, apply the configuration to your Kubernetes cluster.

# kubectl apply -f /tmp/03_pvc.yaml
persistentvolumeclaim/application-pvc created

You can verify the deployment of your PVC by using a kubectl get command.

# kubectl get persistentvolumeclaims 
NAME              STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
application-pvc   Bound    pvc-[...] 1Gi        RWO            linbit         23s

Creating an Example Pod

With a PVC created, you can create an example NGINX pod that has its web content directory (/usr/share/nginx/html) on the PVC. First, create the pod’s YAML configuration file.

# cat > /tmp/04_nginx.yaml <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: application-space
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - image: nginx:latest
          name: nginx
          ports:
            - containerPort: 80
              name: nginx-webserver
          volumeMounts:
            - name: nginx-persistent-storage
              mountPath: /usr/share/nginx/html
      volumes:
        - name: nginx-persistent-storage
          persistentVolumeClaim:
            claimName: application-pvc
EOF

Next, apply the configuration to your Kubernetes cluster.

# kubectl apply -f /tmp/04_nginx.yaml
deployment.apps/nginx created

After some time, your example pod should come up within the “application-space” namespace. You can confirm that it is up and running by using a basic kubectl get pods command.

# kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
nginx-[...]   1/1     Running   0          63s

You can also watch your pod come up in real time, using the following command:

# watch kubectl get pods

Testing Velero’s Backup and Restore Functions

With Velero installed and configured and an example pod up and running, you can next test Velero’s backup capabilities to an AWS S3 bucket.

Create a Unique Reference File and Webpage Within Your NGINX Pod

For testing Velero’s backup and restore capabilities, first create a unique reference file in your pod’s NGINX data directory. You can check for this reference file after restoring your backup. This will confirm that LINSTOR created and restored from a snapshot of your application’s persistent storage volume.

Borrowing an example from the Velero documentation, log into your NGINX example pod and create a unique file (press Ctrl+C after a few seconds to stop the loop shown below), then get its checksum for verification purposes later.

# kubectl exec -it deployment/nginx -- /bin/bash
root@nginx-[...]:/# while true; do echo -n "LINBIT " >> /usr/share/nginx/html/velero.testing.file; sleep 0.1s; done
^C
root@nginx-[...]:/# cksum /usr/share/nginx/html/velero.testing.file
3504900342 1600942 /usr/share/nginx/html/velero.testing.file
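The idea behind this check can be sketched locally, outside of any pod: record the file’s CRC checksum before the backup, then compare it with the checksum after the restore. The path below is a local stand-in for the file inside the pod.

```shell
# Generate a file of repeated "LINBIT " strings, as in the pod example above.
testfile=/tmp/velero.testing.file
for i in $(seq 1 100); do printf 'LINBIT '; done > "$testfile"

# Record the checksum before the backup ...
before=$(cksum "$testfile")

# ... a backup and restore cycle would happen here ...

# ... and compare it with the checksum after the restore.
after=$(cksum "$testfile")
if [ "$before" = "$after" ]; then
    echo "checksums match"    # printed when the file survived intact
fi
```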

Next, create a simple web page that references your unique file.

# cat > /usr/share/nginx/html/index.html <<EOF
<!DOCTYPE html>
<html lang="en">
<head>
<title>Testing Velero and LINSTOR within Kubernetes</title>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<style>
body {
  font-family: Arial, Helvetica, sans-serif;
}
</style>
</head>
<body>

<h1>Backing Up Kubernetes With Velero and LINSTOR</h1>
<p>Unique file's checksum:
EOF

Next, add the checksum of your unique file to your web page and then add closing HTML tags to complete the creation of your web page.

# echo $(cksum /usr/share/nginx/html/velero.testing.file) >> /usr/share/nginx/html/index.html
# cat >> /usr/share/nginx/html/index.html <<EOF
</p>

</body>
</html> 
EOF

After creating a simple web page, return to your Kubernetes controller node by exiting your NGINX pod.

root@nginx-[...]:/# exit

Exposing the NGINX Pod’s HTTP Port

After your NGINX pod is up and running, expose its HTTP port (80) to your host system by mapping the pod’s internal port to an available open port on your host system. You can use a NodePort service to do this quickly. If you are working in a production environment, however, you would want to use an Ingress resource and Ingress controller for this task.

If port 80 is available on your host system and you are working in a test environment, you can just map your NGINX pod’s port 80 to it by using the following command:

# kubectl create service nodeport nginx --tcp=80:80
service/nginx created

Verifying Your Example Web Page

You can verify that the Kubernetes NodePort service has exposed your NGINX pod on your host system by first getting the nginx service’s mapped port:

# kubectl get service
NAME    TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
nginx   NodePort   10.110.25.255   <none>        80:30433/TCP   1d

Here, the NGINX pod’s port 80 is mapped to port 30433 on the host system. You can next use the curl command to verify the NGINX pod is serving the web page that you created earlier.

# curl localhost:30433
<!DOCTYPE html>
<html lang="en">
<head>
<title>Testing Velero and LINSTOR within Kubernetes</title>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<style>
body {
  font-family: Arial, Helvetica, sans-serif;
}
</style>
</head>
<body>

<h1>Backing Up Kubernetes With Velero and LINSTOR</h1>
<p>Unique file's checksum:
3504900342 1600942 /usr/share/nginx/html/velero.testing.file
</p>

</body>
</html>
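If you script this verification, you can extract the mapped port rather than reading it from the table. A sketch, using the sample kubectl get service output from above as a stand-in for a live cluster (on a live cluster, kubectl get service nginx -o jsonpath='{.spec.ports[0].nodePort}' returns the same value directly):

```shell
# A captured line of `kubectl get service` output stands in for a live cluster.
svc_line='nginx   NodePort   10.110.25.255   <none>        80:30433/TCP   1d'

# Extract the NodePort from the PORT(S) column (the part after "80:").
node_port=$(echo "$svc_line" | sed -n 's#.*80:\([0-9][0-9]*\)/TCP.*#\1#p')
echo "$node_port"    # prints 30433

# curl "localhost:$node_port" would then fetch the test web page.
```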

You are now ready to back up your application to an S3 remote.

Backing Up Your Application’s Deployment to an S3 Remote

Create an AWS S3 stored backup of your application’s Kubernetes deployment metadata by using the following Velero command:

# velero backup create backup01 --include-namespaces application-space --wait
Backup request "backup01" submitted successfully.
Waiting for backup to complete. You may safely press ctrl-c to stop waiting - your backup will continue in the background.
............
Backup completed with status: Completed. You may check for more information using the commands `velero backup describe backup01` and `velero backup logs backup01`.

NOTE: Because you configured LINSTOR to back the storage class underlying your application’s PVC, LINSTOR will automatically take a volume snapshot through the CSI driver as part of the Velero backup.

After some time, the backup will finish and you should find a backups/backup01 folder within your AWS S3 bucket and within that folder, archives of your deployment’s metadata and volume snapshots.

Next, verify that your Velero backup includes a volume snapshot.

# velero backup describe backup01 --details
Name:         backup01
[...]
CSI Volume Snapshots:
Snapshot Content Name: snapcontent-e73556b6-c524-4ef8-9168-e9e2f4760608
  Storage Snapshot ID: snapshot-e73556b6-c524-4ef8-9168-e9e2f4760608
  Snapshot Size (bytes): 1073741824
  Ready to use: true

To further verify the snapshot, you can SSH into one of your diskful Kubernetes worker nodes. You can run the command lsblk to show that there is an LVM volume with “snapshot” in its name and that it has a UID that matches the CSI volume snapshot ID listed in the output of the velero backup describe command that you ran earlier.

Within Kubernetes, you can also verify that the Velero backup created a snapshot referenced in a VolumeSnapshotContent resource, by using the kubectl get volumesnapshotcontents -A command.

You now have a backup of your Kubernetes application deployment, including a snapshot of its storage volume, stored in an S3 remote location.

Restoring Your Backup

You can next test restoring your backup to another Kubernetes cluster, but first you need to create a second cluster!

Creating a Second Kubernetes Cluster and Installing LINSTOR

Create a second (target) Kubernetes cluster and install LINSTOR in it, just as you did with your first (source) Kubernetes cluster.

If this were a production deployment, particularly one that is part of a disaster recovery solution, you would want your second Kubernetes cluster at a different physical site from your source Kubernetes cluster.

NOTE: In your target cluster, just as in your source cluster, you will need the snapshot feature enabled and supported by the Kubernetes CSI driver, with the snapshot-controller and snapshot-validation-webhook pods running in your Kubernetes cluster.

Recreating the Original Cluster’s Storage Class

To restore your backup, you will need to prepare a storage class in your newly created target Kubernetes cluster.

Follow the instructions within the Creating a Storage Class for Your Application’s Persistent Data section above, within your target Kubernetes cluster, to create an identical storage class to the one in your source Kubernetes cluster.

Adding a LINSTOR Remote to your Target Kubernetes Cluster

To restore your backup, the LINSTOR Controller within Kubernetes needs to know about the details of the remote location where your backup is. You provide this information to the Controller by creating a LINSTOR remote with your S3 bucket’s details:

# s3_bucket_name=<your_s3_bucket_name>
# s3_access_key=<your_access_key_id>
# s3_secret_key=<your_secret_key_id>
# kubectl exec deployment/linstor-op-cs-controller -- \
> linstor remote create s3 linbit-velero-backup \
> s3.us-east-2.amazonaws.com \
> $s3_bucket_name \
> us-east-2 \
> $s3_access_key \
> $s3_secret_key

Replace “us-east-2” with the AWS region where your S3 bucket resides, if it is in a different region.

After running this command, you should receive a message saying that LINSTOR successfully created your remote.

You can verify this by entering the following command:

# kubectl exec deployment/linstor-op-cs-controller -- linstor remote list
+-----------------------------------------------------------------------------------------+
| Name                 | Type | Info                                                      |
|=========================================================================================|
| linbit-velero-backup | S3   | us-east-2.s3.us-east-2.amazonaws.com/linbit-velero-backup |
+-----------------------------------------------------------------------------------------+

NOTE: After verifying that you successfully created a LINSTOR remote on your target cluster, you might want to delete the lines in your Bash history file where you set variables that hold potentially sensitive keys.
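Besides cleaning your history file, you can clear the variables from the running shell session once the remote exists. A small sketch, with hypothetical placeholder values:

```shell
# Hypothetical placeholder values -- in practice, these were set earlier
# for the linstor remote create command.
s3_access_key=AKIAEXAMPLEKEYID
s3_secret_key=ExampleSecretAccessKey

# ... linstor remote create runs here ...

# Remove the values from the running shell's environment.
unset s3_access_key s3_secret_key

# In an interactive Bash session, `history -d <line-number>` deletes a
# single history entry, or you can edit ~/.bash_history directly.
[ -z "$s3_access_key" ] && echo "credentials cleared"    # prints "credentials cleared"
```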

Installing Velero in Your Target Kubernetes Cluster

To use Velero to restore your S3 backup to your target cluster, you will need to install Velero.

If you have not already done this, you can use the instructions in the Installing Velero section of this tutorial and repeat them on your target cluster.

Using Velero to Restore Your Backup to Your Target Kubernetes Cluster

After preparing your target Kubernetes cluster, you can now use Velero to restore your backed up Kubernetes application space, including the application’s persistent data volume.

Enter the command:

# velero restore create --from-backup=backup01

You should receive a message that your Velero restore command was submitted successfully.

You can use the velero restore get command to verify that your restore completed successfully.

# velero restore get

Output from this command should show that the backup restore “Completed” with “0” errors.

Verifying Your Restored Application

Next, you can verify that Velero restored your backed up application and its persistent data storage by confirming that the NGINX pod is up and running in your target cluster and that it is serving your test web page.

First, change your context to the “application-space” namespace, so that the kubectl commands that follow will happen within the “application-space” namespace.

# kubectl config set-context --current --namespace application-space
Context "kubernetes-admin@kubernetes" modified.

Then, confirm that you have an NGINX pod up and running.

# kubectl get pods
NAME          READY   STATUS    RESTARTS   AGE
nginx-[...]   1/1     Running   0          70m

Next, confirm that you have a NodePort service restored and running.

# kubectl get service
NAME    TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
nginx   NodePort   10.96.153.43   <none>        80:30544/TCP   1m

With the mapped port from the command output, 30544 in the example above, you can use the curl command to verify that your NGINX pod is serving your web page from your target cluster.

# curl localhost:30544
<!DOCTYPE html>
<html lang="en">
<head>
<title>Testing Velero and LINSTOR within Kubernetes</title>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<style>
body {
  font-family: Arial, Helvetica, sans-serif;
}
</style>
</head>
<body>

<h1>Backing Up Kubernetes With Velero and LINSTOR</h1>
<p>Unique file's checksum:
3504900342 1600942 /usr/share/nginx/html/velero.testing.file
</p>

</body>
</html>

You can further verify that Velero and LINSTOR restored your application’s persistent data storage by shelling into your NGINX pod and running a checksum command against the unique file that you created in the NGINX pod running in your original Kubernetes cluster.

# kubectl exec -it deployment/nginx -- /bin/bash
root@nginx-[...]:/# cksum /usr/share/nginx/html/velero.testing.file
3504900342 1600942 /usr/share/nginx/html/velero.testing.file

By entering a df command from within your NGINX pod, you can verify that in your target cluster, NGINX’s data location is backed by a DRBD® block device.

root@nginx-[...]:/# df -h | awk 'NR==1 || /nginx/'
Filesystem      Size  Used Avail Use% Mounted on
/dev/drbd1000   980M  4.1M  909M   1% /usr/share/nginx/html

Next, exit out of your NGINX pod, and verify that LINSTOR is managing your application’s persistent data storage in your target cluster, as it was in your source cluster.

root@nginx-[...]:/# exit
# kubectl exec deployment/linstor-op-cs-controller \
--namespace default -- linstor --machine-readable resource list \
| grep -e device_path -e backing_disk
            "device_path": "/dev/drbd1000",
            "backing_disk": "/dev/drbdpool/pvc-[...]",
            "device_path": "/dev/drbd1000",
            "device_path": "/dev/drbd1000",
            "backing_disk": "/dev/drbdpool/pvc-[...]",

The output from this command should show that LINSTOR has deployed the resource across three nodes. The /dev/drbd1000 device path matches the DRBD device backing the NGINX data directory in your pod.

You might notice that the command above only shows two backing disks. This is because in a three-node cluster, LINSTOR will deploy one of the resource replicas as “Diskless” for quorum purposes.

You can run the LINSTOR resource list command within the LINSTOR controller pod to show this.

# kubectl exec deployment/linstor-op-cs-controller --namespace default -- linstor resource list
+---------------------------------------------------------------------------------+
| ResourceName | Node    | Port | Usage  | Conns |    State | CreatedOn           |
|=================================================================================|
| pvc-[...]    | kube-b0 | 7000 | Unused | Ok    | UpToDate | 2022-08-19 12:58:26 |
| pvc-[...]    | kube-b1 | 7000 | InUse  | Ok    | Diskless | 2022-08-19 12:58:25 |
| pvc-[...]    | kube-b2 | 7000 | Unused | Ok    | UpToDate | 2022-08-19 12:58:23 |
+---------------------------------------------------------------------------------+

Conclusion

Congratulations! You have successfully backed up and restored a Kubernetes application and its persistent data storage by using Velero and LINSTOR. While this solution involves a significant amount of preparation, once you have a remote, off-site S3 backup and a second off-site Kubernetes cluster ready to restore it to, you can be confident that you have a recovery plan in place should disaster strike your production cluster. You could also adapt the instructions in this tutorial to migrate a Kubernetes application and its data storage to another cluster, for example, after a hardware upgrade.

Your next steps might involve investigating Velero’s scheduled backup feature to automate creating your backups.

Michael Troutman

Michael Troutman has an extensive background in systems administration, networking, and technical support, and has worked with Linux since the early 2000s. Michael's interest in writing goes back to a childhood filled with avid reading. Somewhere he still has the rejection letter from a publisher for a choose-your-own-adventure style novella, set in the world of a then popular science fiction role-playing game, cowritten with his grandmother (a romance novelist and travel writer) at the tender age of 10. In the spirit of the open source community in which LINBIT thrives, Michael works as a Documentation Specialist to help LINBIT document its software and its many uses so that it may be widely understood and used.
