This article gives you step-by-step instructions for setting up a minikube virtual machine (VM) environment for testing and exploring LINSTOR® in Kubernetes. minikube is a local Kubernetes environment on a single host machine that people often use to get up and running quickly, usually for testing or proof-of-concept purposes. Because installing and setting up minikube is fast and easy, it can be a convenient way to deploy and test LINSTOR in Kubernetes. By using the instructions in this article, you can have a LINSTOR in minikube environment up and running in about 20-30 minutes.
LINSTOR in Kubernetes is a production-ready solution used worldwide for provisioning and managing persistent, highly available block storage. Deployments and use cases vary. Some organizations use LINSTOR to deploy thousands of storage resources across hundreds of nodes. In contrast, the instructions in this article will get you started quickly in a non-production environment intended for exploring and testing.
Have fun exploring and testing LINSTOR in minikube, but if you need a production-ready solution, use LINSTOR in Kubernetes, in OpenShift, or with SUSE Rancher. LINSTOR is a certified offering for OpenShift and SUSE environments.
Installing and setting up a minikube environment
Before you install minikube, verify that you meet some prerequisites. Your Linux host system should have:
- KVM and related utilities, such as virsh and qemu-img, installed
- About 23GiB of free disk space
- About 20GiB of free memory
- An Internet connection
Consider these disk space and memory recommendations as a baseline. Deploying additional applications or services might require more resources.
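If you want to confirm these prerequisites before continuing, a few quick checks similar to the following can help (a sketch only; minikube stores its VM disks under your home directory, and this article later places additional disk images under /var/lib/libvirt/images):
virsh --version
qemu-img --version
df -h "$HOME" /var/lib/libvirt/images
free -g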
❗ IMPORTANT: Instructions in this article show deploying LINSTOR in minikube by using containers and images from LINSTOR customer repositories. If you are not a LINBIT® customer, you can contact the LINBIT team for free evaluation access. Alternatively, you can deploy the open source upstream CNCF Sandbox project, Piraeus Datastore.
Installing minikube by downloading a binary file
Download and install the latest minikube Linux binary release by entering the following commands:
curl -LO https://github.com/kubernetes/minikube/releases/latest/download/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube && \
rm minikube-linux-amd64
Starting minikube
Start a minikube environment consisting of three KVM-based VMs.
minikube start \
--profile linstor-testing \
--driver=kvm2 \
--container-runtime containerd \
--cni calico \
--nodes 3 \
--cpus 4 \
--memory '8g'
You can adjust environment options to suit your testing needs. Refer to the minikube documentation for details about the minikube start command and options.
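For example, if your host has less free memory, you could keep the three-node layout that this article uses but allocate fewer resources to each node. The values below are only an illustration; verify that your test workloads still fit:
minikube start \
--profile linstor-testing \
--driver=kvm2 \
--container-runtime containerd \
--cni calico \
--nodes 3 \
--cpus 2 \
--memory '6g'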
Installing kubectl
Download and install the kubectl binary.
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl && \
rm kubectl
Verifying kubectl installation
Enter the following commands to verify that you installed kubectl successfully:
kubectl version
kubectl cluster-info
Output should be similar to this:
Client Version: v1.33.1
Kustomize Version: v5.6.0
Server Version: v1.33.1
Kubernetes control plane is running at https://192.168.39.135:8443
CoreDNS is running at https://192.168.39.135:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
[...]
Installing kubectl autocompletion for Bash
For convenience, install Bash autocompletion for kubectl.
kubectl completion bash | sudo tee /etc/bash_completion.d/kubectl
. /etc/bash_completion.d/kubectl
Simplifying LINSTOR command entry in minikube
Install the kubectl-linstor utility to simplify entering LINSTOR commands in minikube. You can download the utility by entering the following commands:
KL_VERS=0.3.2
KL_ARCH=linux_amd64
curl -L -O \
https://github.com/piraeusdatastore/kubectl-linstor/releases/download/v$KL_VERS/kubectl-linstor_v"$KL_VERS"_$KL_ARCH.tar.gz
📝 NOTE: Change the value of the KL_VERS variable to the latest release number shown on the kubectl-linstor releases page.
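If you prefer to look up the latest release number from the command line rather than from the releases page, you can query the GitHub API (shown here only as a convenience):
curl -s https://api.github.com/repos/piraeusdatastore/kubectl-linstor/releases/latest | grep '"tag_name"'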
Extract the tar archive file and then install the kubectl-linstor utility by entering the following commands:
tar xvf kubectl-linstor_v"$KL_VERS"_$KL_ARCH.tar.gz
sudo install kubectl-linstor /usr/local/bin/
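You can verify that kubectl discovers the new plugin by listing installed plugins:
kubectl plugin list
Output should include a line showing the path to the kubectl-linstor binary, for example, /usr/local/bin/kubectl-linstor.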
Installing LINSTOR in minikube
After deploying a three-node, KVM-based minikube cluster and installing some utilities for management and convenience, you can install LINSTOR in minikube.
First, enter the following command to get status of all your pods:
kubectl get pods -A
Verify that all pods are up and running before you continue.
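If you prefer a single command that blocks until the pods are ready, you can use a wait command similar to the ones used later in this article:
kubectl wait pod --for=condition=Ready -A --timeout=5m --all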
Deploying the LINSTOR Operator
To install LINSTOR in minikube (or in Kubernetes, OpenShift, or other Kubernetes flavors), you can use the LINSTOR (or Piraeus Datastore) Operator.
As a LINBIT customer, or with free evaluation access, create a kustomization file to deploy the LINSTOR Operator in minikube.
USERNAME=<LINBIT-customer-username>
PASSWORD=<LINBIT-customer-password>
cat <<EOF > kustomization.yaml
---
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: linbit-sds
resources:
  - https://charts.linstor.io/static/latest.yaml
generatorOptions:
  disableNameSuffixHash: true
secretGenerator:
  - name: drbdio-pull-secret
    type: kubernetes.io/dockerconfigjson
    literals:
      - .dockerconfigjson={"auths":{"drbd.io":{"username":"$USERNAME","password":"$PASSWORD"}}}
EOF
Apply the configuration by entering the following command:
kubectl apply -k .
Wait up to five minutes for pods in the linbit-sds namespace to be ready.
kubectl wait pod --for=condition=Ready -n linbit-sds --timeout=5m --all && \
echo "LINSTOR Operator deployed and ready!"
Deploying LINSTOR in minikube by using the LINSTOR Operator
After deploying the LINSTOR Operator, you can use it to install LINSTOR.
Create a LINSTOR cluster configuration by entering the following command:
cat << EOF > linstor-deploy.yaml
---
apiVersion: piraeus.io/v1
kind: LinstorCluster
metadata:
  name: linstorcluster
spec: {}
EOF
Next, apply the configuration to deploy a LINSTOR cluster in minikube.
kubectl apply -f linstor-deploy.yaml -n linbit-sds
Watch and wait for pods, except for the linstor-csi-node and linstor-satellite pods, to come up.
watch -c kubectl get pods -n linbit-sds
Configuring the DRBD module loader container
The linstor-csi-node and linstor-satellite pods will not come up in your minikube deployment because the VMs do not have the necessary Linux kernel headers and DRBD® kernel module objects. The drbd-module-loader container within the linstor-satellite pods needs these to build the DRBD kernel module.
To overcome this, you can apply a configuration file from https://charts.linstor.io that will provide the necessary infrastructure for building the DRBD kernel module.
kubectl apply -f https://charts.linstor.io/deploy/minikube.yaml
Wait up to five minutes for all pods in the linbit-sds namespace to be ready.
kubectl wait pod --for=condition=Ready -n linbit-sds --timeout=5m --all && \
echo "LINSTOR installed and ready!"
Verifying the LINSTOR in minikube deployment
If you installed the kubectl-linstor utility, you can enter the following command to verify that the LINSTOR controller service is up and running in minikube:
kubectl linstor controller version
If you did not install the kubectl-linstor utility, you can enter this lengthier command:
kubectl -n linbit-sds exec deployment/linstor-controller -- \
linstor controller version
Output should show the version of the running LINSTOR controller service:
linstor controller 1.31.0; GIT-hash: a187af5c85a96bb27df87a5eab0bcf9dd6de6a34
You can further verify your LINSTOR deployment by listing the nodes in your LINSTOR in minikube cluster.
kubectl linstor node list
Output will be similar to this:
+------------------------------------------------------------------------+
| Node | NodeType | Addresses | State |
|========================================================================|
| linstor-testing | SATELLITE | 10.244.115.5:3366 (PLAIN) | Online |
| linstor-testing-m02 | SATELLITE | 10.244.113.5:3366 (PLAIN) | Online |
| linstor-testing-m03 | SATELLITE | 10.244.136.133:3366 (PLAIN) | Online |
+------------------------------------------------------------------------+
Preparing minikube VMs for LINSTOR storage
Before using LINSTOR to deploy persistent and highly available storage volumes in minikube, you need to take a few more steps to prepare your minikube environment.
Creating a second disk to back a LINSTOR storage pool
Upcoming instructions in this article show how to create LINSTOR storage assets in minikube that are backed by a second virtual disk in each minikube VM. That disk does not exist yet, so take the following steps to create it and prepare the minikube VMs for LINSTOR-managed storage.
First, get the names of the minikube VMs by entering the following command:
minikube status -p linstor-testing | grep -B 1 'type:'
Output should be similar to this:
linstor-testing
type: Control Plane
--
linstor-testing-m02
type: Worker
--
linstor-testing-m03
type: Worker
Enter the following command to show the names of your minikube VMs as they are known in QEMU/KVM:
virsh list
Output should be similar to this:
Id Name State
-------------------------------------
4 linstor-testing running
5 linstor-testing-m02 running
6 linstor-testing-m03 running
Verify that the VMs are known by the same names in QEMU/KVM as in minikube.
Creating an additional virtual disk for use with KVM
Create a dynamically allocated 5GiB QCOW2 virtual disk for each minikube VM by entering the following commands:
MINIKUBEVMS=`virsh list --all --name | grep linstor-testing`
DISKNAME=vdb
for vm in $MINIKUBEVMS; do
sudo qemu-img create \
-f qcow2 \
-o preallocation=off \
/var/lib/libvirt/images/"$vm"-"$DISKNAME".qcow2 5G
done
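Optionally, confirm that each disk image was created with the expected format and virtual size:
for vm in $MINIKUBEVMS; do
sudo qemu-img info /var/lib/libvirt/images/"$vm"-"$DISKNAME".qcow2
done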
Next, set permissions on each virtual disk.
for vm in $MINIKUBEVMS; do
sudo chmod 744 /var/lib/libvirt/images/"$vm"-"$DISKNAME".qcow2
done
Next, persistently attach the newly created virtual disks to minikube VMs.
for vm in $MINIKUBEVMS; do
virsh attach-disk \
--domain $vm \
/var/lib/libvirt/images/"$vm"-"$DISKNAME".qcow2 "$DISKNAME" \
--driver qemu \
--subdriver qcow2 \
--targetbus virtio \
--persistent
done
Output from this command will show “Disk attached successfully”.
Verifying additional disk attachment
Enter the following command to verify that the additional disks that you created are attached to the minikube VMs:
for vm in $MINIKUBEVMS; do
virsh domblklist --domain $vm
done
The following example output shows virtual disks for the linstor-testing-m03 minikube VM, including the additional vdb disk.
Target Source
----------------------------------------------------------------
hdc [...]/linstor-testing-m03/boot2docker.iso
hda [...]/linstor-testing-m03/linstor-testing-m03.rawdisk
vdb /var/lib/libvirt/images/linstor-testing-m03-vdb.qcow2
Next, restart the minikube VMs so that each VM can recognize and use its additional disk.
minikube stop -p linstor-testing
minikube start -p linstor-testing
Wait up to five minutes for all pods to be ready.
kubectl wait pod --for=condition=Ready -A --timeout=5m --all && \
echo "All pods are up and running!"
Verifying minikube VMs recognize additional disks
After restarting the minikube VMs, verify that the operating system within each VM detects the additional disk.
for vm in $MINIKUBEVMS; do
echo $vm\:
minikube ssh -p linstor-testing -n $vm -- \
lsblk|grep -E 'vd[a-z]\b'
done
The following example output shows the additional vdb disk recognized in the linstor-testing minikube VM.
linstor-testing:
vda 253:0 0 19.5G 0 disk
vdb 253:16 0 5G 0 disk
Setting an LVM global filter
When using LINSTOR together with LVM and DRBD, it is recommended to set the global_filter value in the LVM global configuration file (/etc/lvm/lvm.conf) to ignore DRBD devices. Refer to the LINSTOR User Guide for more background.
Create a local LVM configuration file with a global filter set to ignore DRBD devices.
sudo lvmconfig --type default > lvm.conf
grep -n global_filter lvm.conf
# if the grep command shows a line number other than 16, change the 17 below to that number plus one
sed -i '17i\ global_filter = [ "r|^/dev/drbd|" ]' lvm.conf
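If you would rather not depend on a hard-coded line number, an alternative is to append the new setting directly after the commented # global_filter line in the generated file (this assumes that commented default line is present, as the grep output shown later confirms):
sed -i '/# global_filter=/a global_filter = [ "r|^/dev/drbd|" ]' lvm.conf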
Copy the local lvm.conf file to each minikube VM.
MINIKUBEVMS=`virsh list --all --name | grep linstor-testing`
for vm in $MINIKUBEVMS; do
scp -i $(minikube -p linstor-testing -n $vm ssh-key) lvm.conf \
docker@$(minikube -p linstor-testing -n $vm ip):/home/docker/lvm.conf
minikube ssh -p linstor-testing -n $vm -- \
sudo cp /home/docker/lvm.conf /etc/lvm/lvm.conf
done
📝 NOTE: If you have rebooted, or stopped and restarted, the minikube cluster, the SSH key fingerprint for each minikube VM could have changed. You might need to run the following command before trying to copy the local lvm.conf file to the minikube VMs:
for vm in $MINIKUBEVMS; do
ssh-keygen -f "$HOME/.ssh/known_hosts" \
-R "$(minikube -p linstor-testing -n $vm ip)"
done
Enter the following command to verify that the global_filter setting that ignores DRBD devices is in the LVM configuration file in each minikube VM:
for vm in $MINIKUBEVMS; do
minikube ssh -p linstor-testing -n $vm -- \
grep global_filter /etc/lvm/lvm.conf
done
Output from the command should show the filter set to ignore DRBD devices.
# global_filter=["a|.*|"]
global_filter = [ "r|^/dev/drbd|" ]
[...]
Next, reload the LVM configuration by entering a physical volume scan command in each minikube VM.
for vm in $MINIKUBEVMS; do
echo $vm\:
minikube ssh -p linstor-testing -n $vm -- \
sudo pvscan
done
Output from the command might show “No matching physical volumes found”, but the command still reloads the LVM configuration.
Testing LINSTOR in minikube
You are now ready to use Kubernetes workflows to create a LINSTOR storage pool, from which you can provision highly available persistent volumes in minikube.
Creating a LINSTOR storage pool
Create a LinstorSatelliteConfiguration configuration that, when you apply it, will create a LINSTOR storage pool named vg1-thin, backed by an LVM thin-provisioned volume, vg1/thin, that LINSTOR will create.
cat << EOF > linstor-storage-pool-thin.yaml
---
apiVersion: piraeus.io/v1
kind: LinstorSatelliteConfiguration
metadata:
  name: storage-satellites
spec:
  storagePools:
    - name: vg1-thin
      lvmThinPool:
        volumeGroup: vg1
        thinPool: thin
      source:
        hostDevices:
          - /dev/vdb
EOF
Apply the configuration.
kubectl apply -f linstor-storage-pool-thin.yaml
Verify that you have a LINSTOR storage pool named vg1-thin on each LINSTOR node.
kubectl linstor storage-pool list -s vg1-thin
Output from the command will be similar to this:
╭──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ StoragePool ┊ Node ┊ Driver ┊ PoolName ┊ FreeCapacity ┊ TotalCapacity ┊ CanSnapshots ┊ State ┊ SharedName ┊
╞══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ vg1-thin ┊ linstor-testing ┊ LVM_THIN ┊ vg1/thin ┊ 4.98 GiB ┊ 4.98 GiB ┊ True ┊ Ok ┊ linstor-testing;vg1-thin ┊
┊ vg1-thin ┊ linstor-testing-m02 ┊ LVM_THIN ┊ vg1/thin ┊ 4.98 GiB ┊ 4.98 GiB ┊ True ┊ Ok ┊ linstor-testing-m02;vg1-thin ┊
┊ vg1-thin ┊ linstor-testing-m03 ┊ LVM_THIN ┊ vg1/thin ┊ 4.98 GiB ┊ 4.98 GiB ┊ True ┊ Ok ┊ linstor-testing-m03;vg1-thin ┊
╰──────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
Next steps
LINSTOR storage pools give you a foundation for provisioning and managing highly available persistent storage in your Kubernetes deployments. After creating a LINSTOR storage pool, you can next create a storage class and provision volumes from this storage class by using a persistent volume claim. The how-to guide, “Kubernetes Persistent Storage Using LINBIT SDS Quick Start”, mentioned earlier, can take you through these next steps. You can also refer to details in the LINSTOR User Guide and documentation in the upstream Piraeus Datastore CNCF Sandbox project.
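As a brief illustration of those next steps, a storage class that uses the vg1-thin storage pool, together with a persistent volume claim that provisions a volume from it, might look like the following sketch. The resource names and the file name here are only examples, and the parameter names follow the upstream Piraeus Datastore documentation, so adjust them to your deployment:
cat << EOF > linstor-sc-pvc-example.yaml
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: linstor-vg1-thin
provisioner: linstor.csi.linbit.com
parameters:
  linstor.csi.linbit.com/storagePool: vg1-thin
  linstor.csi.linbit.com/placementCount: "2"
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-volume
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: linstor-vg1-thin
  resources:
    requests:
      storage: 1Gi
EOF
kubectl apply -f linstor-sc-pvc-example.yaml
Because the storage class uses the WaitForFirstConsumer volume binding mode, the persistent volume claim stays in a Pending state until a pod that uses it is scheduled.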
Cleaning up
Here are some cleanup commands that you might need after you have finished testing and exploring LINSTOR in minikube, or if you want to explore other ways to configure LINSTOR storage pools in Kubernetes.
Deleting the LINSTOR storage pool
kubectl delete -f linstor-storage-pool-thin.yaml
Deleting the LVM volume group and any volumes within it
MINIKUBEVMS=`virsh list --all --name | grep linstor-testing`
for vm in $MINIKUBEVMS; do
minikube ssh -p linstor-testing -n $vm -- \
sudo vgremove vg1
done
Deleting the LVM physical volume label
for vm in $MINIKUBEVMS; do
minikube ssh -p linstor-testing -n $vm -- \
sudo pvremove /dev/vdb
done
Deleting the LINSTOR namespace
kubectl delete namespaces linbit-sds
Stopping the minikube cluster
minikube stop -p linstor-testing
Deleting the minikube profile and associated VMs
minikube delete -p linstor-testing
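You can confirm that the minikube VMs were removed by listing libvirt domains again:
virsh list --all --name | grep linstor-testing || echo "No linstor-testing VMs remain"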
Deleting the second virtual disk
If you deleted the minikube profile, linstor-testing, the minikube delete command will also delete the minikube VMs and their attached disks. You should not need the following commands to delete the additional virtual disk for each minikube VM, but here they are, just in case:
MINIKUBEVMS=`virsh list --all --name | grep linstor-testing`
DISKNAME=vdb
for vm in $MINIKUBEVMS; do
virsh detach-disk --domain $vm $DISKNAME --persistent
sudo rm /var/lib/libvirt/images/"$vm"-"$DISKNAME".qcow2
done
📝 NOTE: While not as simple as using the minikube start --extra-disks option, using a QCOW2-based extra disk has the advantage that the disk space that the extra disk consumes on your host system is allocated dynamically, rather than statically.