Introduction to the LINSTOR APIs
One of the most notable but easily overlooked features in LINSTOR® is its REST API. Typically, LINSTOR users interact with their LINSTOR clusters by using the LINSTOR command line client utility (also known as the linstor-client) to perform administrative tasks. These tasks include operations such as adding LINSTOR satellite nodes to a cluster (linstor node create ...), managing LINSTOR resources (linstor resource delete ...), and listing various objects within a LINSTOR cluster (linstor storage-pool list). LINSTOR users might know that the linstor-client is a Python utility that interacts with LINSTOR’s REST API by using a Python library named python-linstor.
The python-linstor library is essential for LINBIT® developers working on the linstor-client to expose new LINSTOR features by adding subcommands and options to the linstor-client. The python-linstor library and LINSTOR’s Python API documentation also give users familiar with Python a head start in creating custom applications that interface with LINSTOR clusters. Similar to the python-linstor Python library, LINSTOR also has the java-linstor Java library and the golinstor Go library, for easier integration with LINSTOR from those languages. Of course, you can skip the libraries altogether and use the LINSTOR REST API directly. For that, you can find the LINSTOR REST API documentation here.
After stumbling across this LINBIT blog post written by LINBIT developer and DRBD Reactor creator, Roland Kammerer, in 2019 announcing the LINSTOR Python API, I felt inspired to explore the topic with modern LINBIT tools and supported platforms, specifically the LINSTOR Operator for Kubernetes. So, in this blog post I will step through creating a simple containerized Python application that uses the python-linstor library to interface with the LINSTOR cluster, as deployed by the LINSTOR Operator, in a Kubernetes cluster.
Of course, if you intend to follow along at home, you’ll need a Kubernetes cluster with LINSTOR deployed into it. If you need help with that, check out the LINSTOR User Guide section for integrating LINSTOR with Kubernetes, or the Kubernetes Persistent Storage Using LINBIT SDS Quick Start how-to guide from LINBIT. Another good resource for spinning up Kubernetes quickly is minikube, and this LINBIT blog post can help you get started with it.
Creating and deploying a containerized python-linstor application
With that background and introduction out of the way, I can move on to the fun stuff: showing how you can create and deploy a containerized Python application to interface with LINSTOR in Kubernetes by using the python-linstor library.
Creating a container registry for Kubernetes to pull container images
Before you start building container images, you want a place to store them that isn’t directly on the Kubernetes hosts or in a public container registry, at least while you’re developing them.
Creating a local registry on your development PC is easy. With docker, or podman with podman-docker, installed, you can run a container registry on your localhost with a single command:
docker run -d -p 5000:5000 --name registry registry:2
💡 TIP: If you’re interested in configuring a Docker registry with trusted certificates, review the knowledge base (KB) article that covers that topic in the LINBIT KB.
Then, configuring the Kubernetes workers so they can pull images from the insecure registry is also a single command run on each Kubernetes worker:
sudo tee /etc/containers/registries.conf.d/5000-insecure.conf <<EOF
[[registry]]
location="localhost:5000" # change "localhost" to your registry IP address
insecure=true
EOF
Now, your Kubernetes workers should be able to pull container images from your local registry.
Creating a containerized application that uses python-linstor
A quick idea I had was to count the number of resource assignments on each node in a LINSTOR cluster. While simply counting resources isn’t extremely useful, it’s a simple task that only needs two lists from LINSTOR: a node list and a resource list. It also demonstrates just how easy it is to get started with the python-linstor library.
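The counting itself is just a filter over a flat resource list. Here is a standalone sketch of that logic, using hypothetical sample data in place of the objects a real python-linstor response would return:

```python
from collections import Counter

# Hypothetical (resource_name, node_name) pairs, standing in for the
# resource objects a real LINSTOR resource list would contain
resources = [
    ("pvc-1", "kube-1"),
    ("pvc-1", "kube-2"),
    ("pvc-2", "kube-2"),
]
nodes = ["kube-0", "kube-1", "kube-2"]

# Tally resource assignments per node, defaulting to 0 for empty nodes
counts = Counter(node_name for _, node_name in resources)
table_data = [(node, counts.get(node, 0)) for node in nodes]
print(table_data)  # → [('kube-0', 0), ('kube-1', 1), ('kube-2', 2)]
```

The full application below does the same thing, except the node and resource lists come from live python-linstor API calls rather than sample data.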
Create a project directory and cd into it. This directory contains the Python application and the Dockerfile used to build the container image:
mkdir linstor-python-k8s-fun
cd linstor-python-k8s-fun/
Inside of the directory, write the Python application to a file named linstor-res-count.py:
cat << 'EOF' > linstor-res-count.py
# Import the LINSTOR library, as well as some others used to prettify our outputs
import linstor
import urllib3
from tabulate import tabulate
import time
import threading
from http.server import SimpleHTTPRequestHandler, HTTPServer
import os

# Disable warnings for unverified certificates
urllib3.disable_warnings()

# LINSTOR controller's Kubernetes service (DNS name) as configured by LINSTOR Operator
LINSTOR_CTRL = "linstor://linstor-controller.linbit-sds.svc.cluster.local"

# Some file to store the outputs
OUTPUT_FILE = "/data/status.txt"

# Generate and write table of node name, address, status, and resource count every 5s
def generate_status():
    while True:
        try:
            with linstor.Linstor(LINSTOR_CTRL) as lin:
                node_list_resp = lin.node_list()[0]
                nodes = node_list_resp.nodes

                resource_list_resp = lin.resource_list()[0]
                resources = resource_list_resp.resources

                table_data = []
                for node in nodes:
                    res_count = sum(1 for r in resources if r.node_name == node.name)
                    name = node.name
                    addr = node.net_interfaces[0].address if node.net_interfaces else "N/A"
                    status = node.connection_status
                    table_data.append([name, addr, status, res_count])

                table = tabulate(
                    table_data,
                    headers=["Node Name", "Address", "Status", "Resource Count"],
                    tablefmt="pretty"
                )

                with open(OUTPUT_FILE, "w") as f:
                    f.write(table + "\n")
        except Exception as e:
            with open(OUTPUT_FILE, "w") as f:
                f.write(f"Error: {e}\n")
        time.sleep(5)

# Start a simple webserver on port 8123 serving from /data/
def start_http_server():
    os.chdir("/data")
    server = HTTPServer(("0.0.0.0", 8123), SimpleHTTPRequestHandler)
    server.serve_forever()

# Main loop updates /data/status.txt in the background and runs webserver in the foreground
if __name__ == "__main__":
    # Background generator thread
    updater = threading.Thread(target=generate_status, daemon=True)
    updater.start()

    # Foreground http server
    start_http_server()
EOF
The application code above should work for your LINSTOR cluster if you haven’t changed the Kubernetes namespace the LINSTOR Operator deployed the LINSTOR cluster into from its default namespace, linbit-sds.
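If you did deploy into a different namespace, only the DNS name in LINSTOR_CTRL needs to change, following the standard Kubernetes service DNS pattern. A small hypothetical helper sketches the pattern (the linstor-controller service name is the LINSTOR Operator’s default):

```python
def controller_url(namespace="linbit-sds", service="linstor-controller"):
    # Cluster-internal Kubernetes DNS: <service>.<namespace>.svc.cluster.local
    return f"linstor://{service}.{namespace}.svc.cluster.local"

print(controller_url())
# → linstor://linstor-controller.linbit-sds.svc.cluster.local
print(controller_url(namespace="my-sds"))
# → linstor://linstor-controller.my-sds.svc.cluster.local
```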
💡 TIP: For production LINSTOR clusters and API integration, configuring authentication between LINSTOR clients and the LINSTOR REST API is essential. Review the LINSTOR User Guide section for the REST API for information on securing the LINSTOR REST API.
Create the Dockerfile you will use to build the container image:
cat << 'EOF' > Dockerfile
FROM python:3.11

# Install dependencies
RUN pip install --no-cache-dir \
    python-linstor \
    tabulate \
    urllib3

# App directory
WORKDIR /app

# Copy application
COPY linstor-res-count.py /app/linstor-res-count.py

# Create output directory
RUN mkdir -p /data

# Expose the HTTP port
EXPOSE 8123

# Run the status generator + HTTP server
CMD ["python", "/app/linstor-res-count.py"]
EOF
Some important things to note in the Dockerfile:
- Uses the python:3.11 container as the base image because of compatibility with the python-linstor library.
- Uses Pip to install python-linstor, tabulate, and urllib3 during the build process.
- Sets EXPOSE and CMD in the Dockerfile, even though they’re not strictly needed here if you plan to define ports or command within the Kubernetes manifests.
Finally, build the container image and push it into the local registry, replacing localhost with the IP address of your registry:
docker build -t localhost:5000/linstor-res-count:latest .
docker push localhost:5000/linstor-res-count:latest
Deploying the containerized application into Kubernetes
Finally, you can deploy the containerized application into Kubernetes. Because there is no true state that needs to be maintained, you can use a Deployment and simple ClusterIP type Service to run the application in Kubernetes.
Create the Kubernetes manifest by entering the following command:
cat << 'EOF' > linstor-res-count.yaml
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: linstor-res-count
  labels:
    app: linstor-res-count
spec:
  replicas: 1
  selector:
    matchLabels:
      app: linstor-res-count
  template:
    metadata:
      labels:
        app: linstor-res-count
    spec:
      containers:
        - name: linstor-res-count
          image: localhost:5000/linstor-res-count:latest
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8123
---
apiVersion: v1
kind: Service
metadata:
  name: linstor-res-count
  labels:
    app: linstor-res-count
spec:
  selector:
    app: linstor-res-count
  ports:
    - name: http
      port: 8123
      targetPort: 8123
  type: ClusterIP
EOF
You’ll need to update localhost in the Deployment’s image field to the IP address of your registry. To keep things simple, there’s no readinessProbe or livenessProbe on the deployment, and the configuration does not define the command the container should run. This means the CMD from the Dockerfile runs when the container starts, and the pod immediately reports as “Ready”.
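If you later do want the pod to report “Ready” only once the application is actually serving output, a readiness probe against the file the application serves is one option. A minimal sketch you could add under the container spec (the timing values here are illustrative, not tuned):

```yaml
readinessProbe:
  httpGet:
    path: /status.txt
    port: 8123
  initialDelaySeconds: 5
  periodSeconds: 10
```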
Apply the YAML manifest to Kubernetes, and watch for the Deployment to become “Ready”:
kubectl apply -f linstor-res-count.yaml
deployment.apps/linstor-res-count created
service/linstor-res-count created
kubectl get deployments -w
NAME READY UP-TO-DATE AVAILABLE AGE
linstor-res-count 0/1 1 0 0s
linstor-res-count 1/1 1 1 10s
Once the deployment is ready, you should be able to use the following commands to verify that your application is producing the status.txt output:
CLUSTER_IP=$(kubectl get svc linstor-res-count -o jsonpath='{.spec.clusterIP}')
curl "http://${CLUSTER_IP}:8123/status.txt"
+-----------+---------------+--------+----------------+
| Node Name | Address | Status | Resource Count |
+-----------+---------------+--------+----------------+
| kube-0 | 172.16.145.73 | ONLINE | 0 |
| kube-1 | 172.16.126.68 | ONLINE | 2 |
| kube-2 | 172.16.79.134 | ONLINE | 3 |
+-----------+---------------+--------+----------------+
Or, you can use the service’s DNS name from within pods running in the Kubernetes cluster:
root@kube-0:~# kubectl exec -it client-pod -- \
curl http://linstor-res-count.default.svc.cluster.local:8123/status.txt
+-----------+---------------+--------+----------------+
| Node Name | Address | Status | Resource Count |
+-----------+---------------+--------+----------------+
| kube-0 | 172.16.145.73 | ONLINE | 0 |
| kube-1 | 172.16.126.68 | ONLINE | 2 |
| kube-2 | 172.16.79.134 | ONLINE | 3 |
+-----------+---------------+--------+----------------+
You can adapt this workflow to create powerful containerized applications that enhance your Kubernetes-integrated LINSTOR deployments.
Conclusion
LINSTOR APIs are a core part of the LINSTOR ecosystem, not an afterthought. The same REST interface and libraries that power the LINBIT linstor-client are available to LINBIT power users, making developing and integrating custom tools into LINSTOR-powered systems much easier. The LINBIT GUI for LINSTOR and the XOSTOR driver for XCP-ng are two examples of using the LINSTOR API to build something powerful and useful. Whether you prefer to use Python, Java, Go, or more direct REST calls, LINSTOR exposes the full state and capabilities of the LINSTOR cluster to its power users.
If this blog post has left you inspired to create something useful (or fun!) using the LINSTOR APIs, we’d love to hear about it in the LINBIT community forums!