This blog is a walk-through tutorial on how to deploy LINSTOR® into a MicroK8s environment by using a LINBIT®-authored Juju charm. You can use LINSTOR as a full-featured and production-ready replacement for MicroK8s’s “hostpath” storage.
Defining Terms
LINSTOR is an open source software-defined storage (SDS) management solution developed by LINBIT. LINBIT also develops DRBD®, a block device replication technology. With LINSTOR managing DRBD-backed storage, you can create production-ready solutions to high-availability and disaster recovery problems.
Juju is an open source Operator Lifecycle Manager (OLM), developed by Canonical, that you can use to deploy applications across a variety of platforms, including Kubernetes.
A “charm” is shorthand for “charmed operator”. In Juju-speak, a charmed operator takes the concept of a Kubernetes operator and expands on it, allowing for such things as integrations, called relations, with other “charmed” applications, and deployments on platforms besides Kubernetes, such as bare metal, virtual machines (VMs), public and private clouds, and more.
Setting up the Testing Environment
This tutorial will use a three-node cluster of VMs running Ubuntu Focal (20.04). You will need at least 4GB of memory allocated to each of your nodes.
Running Out of Memory
When I started setting up an environment to write this tutorial, my virtual machines each had 2GB of allocated memory. This caused problems when I used Juju to install the MicroK8s controller: not all of my pods reached an up and running state. Nominally, MicroK8s on its own requires as little as 540MB of memory. However, Canonical recommends at least 4GB to accommodate workloads. After allocating 4GB of memory to my virtual machines, I was able to install the controller and all of my pods reached an up and running state. To save the frugal reader some time, I did go back later and try 3GB of allocated memory. That also was not enough!
Running Out of Disk Space
Before you even begin to deploy applications, the setup described here will need about 20-25GB of disk space. Plan accordingly, and if you are using virtual machines to test your deployment, monitor the free disk space available to your virtual machines. Running out of disk space can cause your VMs to lock up or fail.
Installing MicroK8s
The first step is to install MicroK8s on your cluster nodes. MicroK8s is a “lightweight” Kubernetes deployment. A benefit of using MicroK8s, besides its relatively simple deployment, is that when you deploy it on three or more nodes and then join the nodes together, MicroK8s automatically becomes highly available.
Repeat the following installation steps on all three of your cluster nodes.
For installation speed and ease, install MicroK8s from its snap offering:
$ sudo snap install microk8s --classic
microk8s (1.25/stable) v1.25.0 from Canonical✓ installed
Add your user to the microk8s group so that the user can run MicroK8s commands without needing elevated privileges.
$ sudo usermod -a -G microk8s $USER
Also change the ownership, recursively, of the .kube directory. By default, this directory is created in the installing user’s home directory.
$ sudo chown -f -R $USER ~/.kube
Log in again to have the user’s group membership change take effect.
$ su - $USER
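If you want to confirm that the group change took effect before continuing, a quick optional check such as the following should print a line that includes microk8s:
$ groups | grep microk8s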
Verifying MicroK8s Installation
Entering a status command should show that MicroK8s is running and also show enabled and disabled add-ons.
$ microk8s status --wait-ready
microk8s is running
high-availability: no
datastore master nodes: 127.0.0.1:19001
datastore standby nodes: none
addons:
[...]
Notice the message about MicroK8s’s high-availability status. You will change this status to “yes” when you join nodes together later in this tutorial.
You can also show the pods that are up and running by entering the following command:
$ microk8s.kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system calico-node-[...] 1/1 Running 0 4m11s
kube-system calico-kube-controllers-[...] 1/1 Running 0 4m11s
Installing Hostpath Storage and DNS Add-ons
Before you can use Juju to deploy LINSTOR in MicroK8s, you will need to enable two add-ons. However, after running the following microk8s.enable command, you might notice something troubling if you are interested in using MicroK8s in a production environment.
$ microk8s.enable hostpath-storage dns
[...]
Enabling default storage class.
WARNING: Hostpath storage is not suitable for production environments.
[...]
Notice the warning message in the command output. This is where LINSTOR, a production-ready storage solution, will help. However, the hostpath storage add-on is temporarily necessary to use Juju charms to deploy LINSTOR in MicroK8s.
After a while, two new pods, hostpath-provisioner and coredns, will be up and running. You can verify this by entering the following command:
$ microk8s.kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system hostpath-provisioner-[...] 1/1 Running 0 108s
kube-system calico-node-[...] 1/1 Running 0 10m
kube-system calico-kube-controllers-[...] 1/1 Running 0 10m
kube-system coredns-[...] 1/1 Running 0 107s
Installing Juju
Next, install Juju by using a simple snap command:
$ sudo snap install juju --classic
juju (2.9/stable) 2.9.35 from Canonical✓ installed
Verifying Juju Installation
Verify that you successfully installed Juju by entering:
$ juju version
Since Juju 2 is being run for the first time, it has downloaded the latest public cloud information.
2.9.35-ubuntu-amd64
Installing a Controller
With MicroK8s and Juju installed, you can next use Juju to install a controller. The controller is responsible for maintaining the desired state of your MicroK8s deployment. You will install the controller into the microk8s cloud1.
A cloud in Juju-speak is an environment for your applications to be deployed onto. MicroK8s is an example of a cloud that Juju knows about. You can list the clouds that Juju is aware of in your installation by using the juju clouds command.
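If you want to confirm that Juju detected MicroK8s before you bootstrap a controller, you can list the locally known clouds, for example with the juju clouds --local command quoted in footnote 1. The output should include a microk8s cloud of type k8s.
$ juju clouds --local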
Next, install a controller into the MicroK8s cloud.
$ juju bootstrap microk8s micro
Creating Juju controller "micro" on microk8s/localhost
[...]
Bootstrap complete, controller "micro" is now available in namespace "controller-micro"
[...]
NOTE: If you enjoy screen output, you can add the --debug flag to the juju bootstrap command to get more details about the controller installation process.
Verifying Controller
Re-entering the Juju clouds command should now show that there is a microk8s cloud available on the newly installed controller.
$ juju clouds
[...]
Clouds available on the controller:
Cloud Regions Default Type
microk8s 1 localhost k8s
[...]
Entering the Juju credentials command should show that there are controller and client credentials for the microk8s cloud:
$ juju credentials
Controller Credentials:
Cloud Credentials
microk8s microk8s
Client Credentials:
Cloud Credentials
microk8s microk8s*
You should also see two new pods, controller-0 and modeloperator, up and running in the controller-micro namespace.
$ microk8s.kubectl get pods -A
NAMESPACE NAME READY STATUS RESTARTS AGE
[...]
controller-micro controller-0 2/2 Running 2 (10m ago) 10m
controller-micro modeloperator-[...] 1/1 Running 0 9m54s
Making MicroK8s Highly Available
One feature that MicroK8s offers is the ability to join MicroK8s nodes together. When at least three nodes have joined together, MicroK8s will automatically make itself highly available2.
IMPORTANT: Before you join your nodes together, you need to verify that your nodes’ host names resolve to their respective IP addresses in the network that you will be joining your nodes on. If you try to join nodes without proper name resolution, MicroK8s will refuse to join the nodes and will show an error message.
In this tutorial, the nodes have the following lines in each of their /etc/hosts files:
$ cat /etc/hosts
[...]
192.168.222.20 node-0
192.168.222.21 node-1
192.168.222.22 node-2
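As an optional sanity check before joining nodes, you can verify from each node that the other nodes’ host names resolve to the addresses listed above.
$ # run on node-0; each command should print the address from /etc/hosts
$ getent hosts node-1
$ getent hosts node-2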
Generating a Token For Joining Nodes
To have MicroK8s generate a token so that you can join nodes together, enter the following command on one of your nodes:
$ microk8s add-node
NOTE: The add-node command will generate a single-use token. If you do not want to join the first node and then repeat the add-node command before joining the second node, you can enter the command microk8s add-node --token-ttl 300 on the first node. This will generate a token that will be good for five minutes, which should allow you to join the other two nodes using the same microk8s join command.
The output from the command will show commands that you can run on another of your cluster nodes to join nodes together. When you have joined two other nodes to your original node, MicroK8s will become highly available, automatically.
Joining Nodes
On another node, enter the microk8s join command that contains the IP address of the first node and matches the resolvable IP address in your /etc/hosts file.
For example:
$ microk8s join 192.168.222.20:25000/[...]
WARNING: Hostpath storage is enabled and is not suitable for multi node clusters.
Contacting cluster at 192.168.222.20
Waiting for this node to finish joining the cluster. .. .. ..
After the node has finished joining the cluster, repeat the above add-node and join commands for the third node, if you generated a single-use token earlier.
If you generated a token with a time-to-live (TTL) value earlier, you can just reuse the same join command on your third node that you used to join your second node.
Verifying MicroK8s High-Availability
You can use a status command from any node to verify that, after joining the second and third nodes to the first node, MicroK8s is highly available.
$ microk8s status
microk8s is running
high-availability: yes
[...]
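You can also list the cluster members with kubectl. This optional check should show all three nodes from the /etc/hosts example above with a STATUS of Ready.
$ microk8s.kubectl get nodes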
Deploying LINSTOR in MicroK8s
With a MicroK8s controller deployed, you can next deploy LINSTOR into MicroK8s. The instructions below were adapted for this tutorial from LINBIT’s LINSTOR charm page on Canonical’s Charmhub.
Prepare a Secret File to Use With the LINSTOR Charm
Before deploying LINSTOR into MicroK8s, you need to create a secret file for the charm to use.
Preparing a Secret File with a LINBIT Customer Account
If you have a LINBIT customer or evaluation account, you can use those credentials to deploy LINSTOR in MicroK8s by using official LINBIT container images.
$ DRBD_IO_USERNAME=<username>
$ DRBD_IO_PASSWORD=<password>
$ cat <<EOF > linbit.secret
{
  "username": "$DRBD_IO_USERNAME",
  "password": "$DRBD_IO_PASSWORD"
}
EOF
Preparing a Secret File without a LINBIT Customer Account
Without a LINBIT customer account, you can deploy LINSTOR into MicroK8s by using the LINBIT-developed, free and open source Piraeus Datastore. When using LINBIT’s Juju charm to deploy LINSTOR, you only need to create an empty secret file and the charm will take care of the details.
$ touch linbit.secret
Deploying LINSTOR by Using LINBIT’s Juju Charm
To deploy LINSTOR into MicroK8s using LINBIT’s Juju charm, first create a new model, called linbit, for the deployment. A model in Juju-speak is equivalent to a Kubernetes namespace.
On one of your cluster nodes, enter the following command:
$ juju add-model linbit
Added 'linbit' model on microk8s/localhost with credential 'microk8s' for user 'admin'
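As an optional check, you can confirm that the model exists and that a matching Kubernetes namespace was created; both of the following commands should list linbit.
$ juju models
$ microk8s.kubectl get namespaces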
The add-model command also implies juju switch, so that the command creates a new model, linbit, and also switches to it. The upcoming deploy command will then deploy LINSTOR into the linbit model.
Next, deploy LINSTOR into MicroK8s with this simple command:
$ juju deploy ch:linstor
[...]
- add relation linstor-controller:linstor-api - linstor-ha-controller:linstor
- add relation linstor-controller:linstor-api - linstor-csi-controller:linstor
- add relation linstor-controller:linstor-api - linstor-satellite:linstor
- add relation linstor-controller:linstor-api - linstor-csi-node:linstor
Deploy of bundle completed.
Verifying LINSTOR Deployment
After some time, all pods in the linbit namespace should reach an up and running state, as shown in the output from a microk8s.kubectl get pods --namespace linbit command. You can also run a juju status --model linbit command to get the status of the linbit model and eventually see all the model’s applications and units reach an “active” status.
$ juju status --model linbit
Model Controller Cloud/Region Version SLA Timestamp
linbit micro microk8s/localhost 2.9.35 unsupported 12:41:33Z
App Version Status Scale Charm Channel Rev Address Exposed Message
linstor-controller linstor-controller:v1.20.0 active 1 linstor-controller stable 18 10.152.183.120 no
linstor-csi-controller linstor-csi:v0.20.0 active 1 linstor-csi-controller stable 14 no
linstor-csi-node linstor-csi:v0.20.0 active 3 linstor-csi-node stable 16 no
linstor-ha-controller linstor-k8s-ha-controller:v... active 1 linstor-ha-controller stable 11 no
linstor-satellite linstor-satellite:v1.20.0 active 3 linstor-satellite stable 14 no
Unit Workload Agent Address Ports Message
linstor-controller/0* active idle 10.1.247.39 3370/TCP
linstor-csi-controller/0* active idle 10.1.84.166
linstor-csi-node/45 active idle 10.1.247.46
linstor-csi-node/46* active idle 10.1.39.247
linstor-csi-node/47 active idle 10.1.84.169
linstor-ha-controller/0* active idle 10.1.247.38
linstor-satellite/0* active idle 192.168.222.21 3366/TCP
linstor-satellite/1 active idle 192.168.222.22 3366/TCP
linstor-satellite/2 active idle 192.168.222.20 3366/TCP
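If you prefer to watch the pods directly, the kubectl check mentioned at the start of this section is the following command. The exact pod names and ages will differ in your deployment.
$ microk8s.kubectl get pods --namespace linbit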
Because you joined nodes earlier and made MicroK8s highly available within your cluster, you only need to deploy LINSTOR on one of your nodes. MicroK8s will take care of replicating the deployment on the other two nodes. You can verify this by using either of the verification commands above.
Getting Started With LINSTOR in MicroK8s
After all of the linbit model’s applications and units are in an “active” state, you can verify LINSTOR’s state with some basic commands.
Setting the MicroK8s Context
First, set the MicroK8s current context to be the linbit namespace, so that all subsequent microk8s.kubectl commands will run in this namespace.
$ microk8s.kubectl config set-context --current --namespace linbit
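As an optional check, you can print the namespace of the current context; the command should output linbit.
$ microk8s.kubectl config view --minify --output 'jsonpath={..namespace}'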
Verifying LINSTOR Nodes
You can verify that LINSTOR is up and running inside your MicroK8s cluster and that all three nodes are participating, by entering the following command:
$ microk8s.kubectl exec -it deployment/linstor-controller -- linstor node list
Output from the command should be similar to this:
[...]
+-----------------------------------------------------------+
| Node | NodeType | Addresses | State |
|===========================================================|
| node-0 | SATELLITE | 192.168.222.20:3366 (PLAIN) | Online |
| node-1 | SATELLITE | 192.168.222.21:3366 (PLAIN) | Online |
| node-2 | SATELLITE | 192.168.222.22:3366 (PLAIN) | Online |
+-----------------------------------------------------------+
Listing LINSTOR Storage Pools
Use the storage-pool list command to show your LINSTOR storage pools.
$ microk8s.kubectl exec -it deployment/linstor-controller -- linstor storage-pool list
Output from this command should show that you have one “DISKLESS” storage pool on each of your cluster nodes.
[...]
+-----------------------------------------------------------------------------------------------------------------------------------------+
| StoragePool | Node | Driver | PoolName | FreeCapacity | TotalCapacity | CanSnapshots | State | SharedName |
|=========================================================================================================================================|
| DfltDisklessStorPool | node-0 | DISKLESS | | | | False | Ok | |
| DfltDisklessStorPool | node-1 | DISKLESS | | | | False | Ok | |
| DfltDisklessStorPool | node-2 | DISKLESS | | | | False | Ok | |
+-----------------------------------------------------------------------------------------------------------------------------------------+
Creating a Storage Pool by Using Juju
While you can use linstor physical-storage and linstor storage-pool create commands run inside the linstor-controller container to create LINSTOR storage pools, it may be easier to create storage pools by using a Juju command run against the linstor-satellite application.
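For reference, the manual path mentioned above would look roughly like the following sketch. It is not part of this tutorial’s steps: it assumes the same spare /dev/vdb disk and pool names that the Juju command below uses, creates the LVM thin pool directly on each node, and then registers that pool with LINSTOR. The Juju approach that follows does equivalent work for you.
$ # on each node: prepare the spare disk and create a thin pool,
$ # leaving a little space free for thin pool metadata
$ sudo pvcreate /dev/vdb
$ sudo vgcreate linstor_thinpool /dev/vdb
$ sudo lvcreate --type thin-pool --extents 90%FREE --name thinpool linstor_thinpool
$ # register the pool with LINSTOR, once per node (repeat for node-1 and node-2)
$ microk8s.kubectl exec -it deployment/linstor-controller -- \
    linstor storage-pool create lvmthin node-0 thinpool linstor_thinpool/thinpool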
Assuming that each of your nodes has a second unused disk, /dev/vdb in this example, enter the following command:
$ juju config linstor-satellite storage-pools=provider=LVM_THIN,provider_name=linstor_thinpool/thinpool,name=thinpool,devices=/dev/vdb
NOTE: You can enter juju config linstor-satellite on its own to get a full list of configuration changes that you can make by using the command.
After a while, the linstor-satellite application will create the storage pool that you specified. You can verify this by entering another linstor storage-pool list command:
$ microk8s.kubectl exec -it deployment/linstor-controller -- linstor storage-pool list
[...]
+-----------------------------------------------------------------------------------------------------------------------------------------+
| StoragePool | Node | Driver | PoolName | FreeCapacity | TotalCapacity | CanSnapshots | State | SharedName |
|=========================================================================================================================================|
[...]
| thinpool | node-0 | LVM_THIN | linstor_thinpool/thinpool | 1.99 GiB | 1.99 GiB | True | Ok | |
| thinpool | node-1 | LVM_THIN | linstor_thinpool/thinpool | 1.99 GiB | 1.99 GiB | True | Ok | |
| thinpool | node-2 | LVM_THIN | linstor_thinpool/thinpool | 1.99 GiB | 1.99 GiB | True | Ok | |
+-----------------------------------------------------------------------------------------------------------------------------------------+
Making LINSTOR MicroK8s Default Storage
Creating a Storage Class
Create a storage class that uses the LINSTOR CSI provisioner. In this configuration, the autoPlace parameter tells LINSTOR to place three replicas of each volume, and the storagePool parameter references the thinpool storage pool that you created earlier.
$ cat << EOF > linstor-storage.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: linstor-storage
provisioner: linstor.csi.linbit.com
allowVolumeExpansion: true
volumeBindingMode: WaitForFirstConsumer
parameters:
  autoPlace: "3"
  storagePool: thinpool
EOF
Applying Storage Class to MicroK8s Deployment
Apply the storage class configuration file to your MicroK8s deployment:
$ microk8s.kubectl apply -f linstor-storage.yaml
You should get output that the storage class was created.
storageclass.storage.k8s.io/linstor-storage created
Verifying Storage Class
You can verify the newly created storage class by using a get command.
$ microk8s.kubectl get storageclasses.storage.k8s.io
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
microk8s-hostpath (default) microk8s.io/hostpath Delete WaitForFirstConsumer false 6h22m
linstor-storage linstor.csi.linbit.com Delete WaitForFirstConsumer true 4m37s
Removing the Default Annotation from Hostpath Storage
Remove the default annotation from the microk8s-hostpath storage class by entering the following command:
$ microk8s.kubectl patch storageclass microk8s-hostpath -p \
'{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'
Making LINSTOR Storage the Default Storage Class
Next, make the linstor-storage class the default storage class in your MicroK8s deployment by entering the following command:
$ microk8s.kubectl patch storageclass linstor-storage -p \
'{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
Verifying Storage Classes
Verify that linstor-storage is the default storage class by using another get command. Output from the command should show that the linstor-storage storage class is now the default, and that the microk8s-hostpath storage class is no longer the default.
$ microk8s.kubectl get storageclasses.storage.k8s.io
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
linstor-storage (default) linstor.csi.linbit.com Delete WaitForFirstConsumer true 10m42s
microk8s-hostpath microk8s.io/hostpath Delete WaitForFirstConsumer false 6h28m
Testing New Storage Class
After deploying LINSTOR in MicroK8s, creating a LINSTOR storage pool on each of your three cluster nodes, and making a LINSTOR-backed storage class the default storage class in MicroK8s, it is time to test your new storage.
Creating a Persistent Volume Claim
To test your new default storage class, first create a persistent volume claim (PVC) configuration file.
$ cat << EOF > linstor-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-linstor-storage-pvc
spec:
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 500Mi
EOF
NOTE: This configuration does not include a storageClassName in the spec section. If not specified, MicroK8s will use the default storage class.
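If you prefer not to rely on the default storage class, you can name the storage class explicitly in the claim. The following variation is only an illustration and is not used in the rest of this tutorial; the file name and claim name are hypothetical.
$ cat << EOF > linstor-pvc-explicit.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-explicit-linstor-pvc
spec:
  storageClassName: linstor-storage
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 500Mi
EOF
The steps below continue with the linstor-pvc.yaml file created above.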
Applying the Persistent Volume Claim
After creating a configuration file for your PVC, apply it to your MicroK8s deployment by entering the following command:
$ microk8s.kubectl apply -f - < linstor-pvc.yaml
persistentvolumeclaim/my-linstor-storage-pvc created
Verifying the Persistent Volume Claim
You can verify your PVC’s deployment by using a get command.
$ microk8s.kubectl get persistentvolumeclaims
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
[...]
my-linstor-storage-pvc Pending linstor-storage 49s
You can also use a describe command to verify that the PVC is using the default linstor-storage storage class. Because the storage class uses the WaitForFirstConsumer volume binding mode, the PVC will remain in a “Pending” state until a pod that uses it is scheduled.
$ microk8s.kubectl describe persistentvolumeclaims my-linstor-storage-pvc
Name: my-linstor-storage-pvc
Namespace: linbit
StorageClass: linstor-storage
Status: Pending
Volume:
Labels: <none>
Annotations: <none>
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Used By: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal WaitForFirstConsumer 4s (x16 over 3m45s) persistentvolume-controller waiting for first consumer to be created before binding
Conclusion
Congratulations! You can now use your new PVC to create stateful workloads. Your workloads’ data, stored using LINSTOR-managed storage, will be replicated using DRBD across all three of your nodes, making it perfect for high-availability uses.
Next, you can experiment with creating a pod that will use your PVC. LINBIT’s Integrating LINBIT SDS with Kubernetes quick start guide has instructions for doing this in Kubernetes that you can adapt and use in MicroK8s.
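As a starting point, a minimal pod that mounts the PVC could look like the following sketch. It is adapted from general Kubernetes usage rather than taken from that guide, and the pod and file names here are only illustrative.
$ cat << EOF > linstor-test-pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: linstor-test-pod
spec:
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-linstor-storage-pvc
EOF
$ microk8s.kubectl apply -f linstor-test-pod.yaml
After the pod is scheduled, the PVC should leave its “Pending” state and bind to a LINSTOR-provisioned volume.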
1. From the Juju documentation: “Juju recognises when MicroK8s is installed and automatically sets up a cloud called microk8s. There is no need to manually add the cluster to Juju (verify this with juju clouds --local). A controller can then be created just like a normal cloud.”
2. You can read more about high-availability within MicroK8s here.