Create a Highly Available iSCSI Target Using a LINSTOR Gateway Cluster

This tutorial will teach you how to set up a highly available iSCSI target by using LINSTOR Gateway.

It is an ideal starting point for getting off the ground as a new LINSTOR Gateway user. The basic principles from this tutorial also apply to using LINSTOR Gateway to set up NVMe-oF or NFS exports.

This is one way to achieve high availability for infrastructure as a service (IaaS) datastores by using open source LINBIT® software, even on IaaS platforms that do not directly support highly available storage through a LINBIT integration driver, such as VMware vSphere.

Your Working Environment

Preparation is one of the most critical steps of any setup, so first ensure that your environment is suitable for a LINSTOR Gateway deployment.

You will need three Linux-based machines, either virtual or physical. Each machine should have one disk for the operating system and one for your highly available data. This basic setup keeps the tutorial simple. A more advanced deployment in a production environment might involve multiple disks in a RAID configuration backing your highly available data.

LINBIT officially supports running LINSTOR Gateway on a few major Linux distributions, including the most recent versions of Red Hat Enterprise Linux (RHEL), AlmaLinux, and Ubuntu LTS. You can find a full list of supported distributions on our website.

For this tutorial, you will be working with three virtual machines (VMs) with the following specifications:

  • 2 CPU cores
  • 2 GiB of RAM
  • A 10 GiB disk for the operating system
  • A 20 GiB disk for the highly available data store
  • An Ubuntu 24.04 operating system

Adding the LINBIT Repository

LINBIT, the company behind DRBD® and LINSTOR®, publishes unofficial packages for Ubuntu users through a PPA repository.

To add the repository to your system, enter the following commands as the root user:

# add-apt-repository -y ppa:linbit/linbit-drbd9-stack
# apt-get update

📝 NOTE: LINBIT does not maintain these packages to the same rigorous standards as their official repositories.

For production deployments, using the official LINBIT repositories for downloading prebuilt LINBIT software packages is recommended. You can get access to those – and many more benefits such as 24×7 enterprise support – by becoming a LINBIT customer.

The first things you will have to install are some dependencies that you will need later. Enter the following command on all nodes:

# apt -y install resource-agents-extra targetcli-fb

❗ IMPORTANT: If your systems are running Ubuntu 22.04, enter the following command rather than the one shown above:

  # apt -y install resource-agents targetcli-fb

After installing these packages, you might get a warning message about a kernel update and rebooting your nodes. If so, reboot your nodes when you are able to safely do so, and then continue with the instructions.

Installing the targetcli-fb package is shown in this tutorial so that you can use LIO as the iSCSI back end implementation with LINSTOR Gateway. It is possible to use the SCST iSCSI implementation in Linux rather than LIO. Setting up SCST involves more steps but it might be necessary for LINSTOR Gateway to work with some iSCSI initiators, such as VMware’s ESXi initiator.
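
If you want a quick check that the LIO management tooling installed correctly, you can ask targetcli for its version on any node. If your targetcli build does not accept commands as arguments, start it interactively and enter version instead.

# targetcli version

Output will show the installed targetcli-fb version.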

Installing DRBD

The Distributed Replicated Block Device – or DRBD – is the most fundamental requirement for every LINSTOR Gateway cluster. It is the bottom-most layer in the software stack you will build.

You do not need to understand all its intricacies to follow this tutorial. For now, just know that DRBD essentially mirrors your data from one node to the others.

To learn more about DRBD and how it works, refer to the DRBD9 User’s Guide.

Install the kernel module and the user space utilities on all nodes:

# apt -y install drbd-dkms drbd-utils

Verify that the kernel module was installed correctly:

# modprobe drbd
# cat /proc/drbd
version: 9.2.10 (api:2/proto:86-122)
GIT-hash: b92a320cb72a0b85144e742da5930f2d3b6ce30c build by root@reactornfs-2, 2024-08-06 21:05:32
Transports (api:21):
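
You can also confirm that the DRBD user space utilities are installed and see which versions they report:

# drbdadm --version

Output will show version information for drbd-utils, including the DRBD API version it supports.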

Setting up a LINSTOR Cluster

As the naming might suggest, LINSTOR is another crucial prerequisite for a LINSTOR Gateway deployment. As with DRBD, the details on its inner workings are not important here. For the purposes of this tutorial, think of LINSTOR as an orchestrator for DRBD resources.

Satellite

In LINSTOR’s language, a satellite is a node which can host and replicate storage. In your case, your three machines should all become satellites so that LINSTOR can manage their storage.

Enter the following command on all nodes to install LINSTOR’s satellite component:

# apt -y install linstor-satellite

Verify that the service was correctly started and enabled:

# systemctl status linstor-satellite

Output will be similar to the following:

● linstor-satellite.service - LINSTOR Satellite Service
     Loaded: loaded (/usr/lib/systemd/system/linstor-satellite.service; enabled; preset: enabled)
     Active: active (running) since Tue 2024-08-06 21:13:54 UTC; 16h ago
   Main PID: 8385 (java)
      Tasks: 39 (limit: 1112)
     Memory: 151.7M (peak: 153.4M)
        CPU: 2min 28.056s
     CGroup: /system.slice/linstor-satellite.service
[...]

For LINSTOR Gateway to work, you need to adapt some of the LINSTOR satellite service’s configuration options.

First, create a directory for LINSTOR configuration files:

# mkdir -p /etc/linstor

Then, create and edit the file /etc/linstor/linstor_satellite.toml on all nodes. Paste the following content into this file:

[files]
allowExtFiles = [
    "/etc/systemd/system",
    "/etc/systemd/system/linstor-satellite.service.d",
    "/etc/drbd-reactor.d"
]

Finally, restart the LINSTOR satellite service for the configuration changes to take effect:

# systemctl restart linstor-satellite

Installing the LINSTOR Controller Service and Client

In a LINSTOR cluster, there is always exactly one Controller. This is the control center for the whole cluster, where resources are created and coordinated.

📝 NOTE: Perhaps counterintuitively, a node can be a satellite and a controller at the same time. In LINSTOR-speak, such a node is described as a combined node.

Choose any one of your machines to host the LINSTOR controller service. Enter the following commands only on the controller node to install, enable, and start the LINSTOR controller service:

# apt -y install linstor-controller
# systemctl enable --now linstor-controller

You also want to be able to communicate with the LINSTOR controller service so that you can enter commands to set up your cluster. You can do this through the LINSTOR client, so install that too:

# apt -y install linstor-client

Your LINSTOR controller service is now up and running, and with the LINSTOR client installed, you can interact with the controller service to configure LINSTOR. Try it out by using a LINSTOR client command to list the nodes in your LINSTOR cluster:

# linstor node list

Output will show a table listing the nodes in your LINSTOR cluster.

╭─────────────────────────────────────╮
┊ Node ┊ NodeType ┊ Addresses ┊ State ┊
╞═════════════════════════════════════╡
╰─────────────────────────────────────╯

It works! Unfortunately, you have not yet registered any nodes, so the table is empty. You will change that soon.
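
By default, the LINSTOR client connects to a controller on localhost, which is why no extra configuration is needed when you enter client commands on the controller node itself. If you ever need to use the client from a different machine, you can point it at the controller explicitly with the --controllers option, substituting your controller node’s address for the one shown here:

# linstor --controllers 192.168.122.50 node list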

Building the Cluster

After finishing the required installation steps, you can now start to build your LINSTOR cluster. In the next steps, you will add nodes and a storage pool to the cluster.

Adding the Nodes to LINSTOR

If you want LINSTOR to manage your storage, you first have to tell it where the storage is.

You will now turn your loose set of machines into an actual cluster. The first step is to make LINSTOR aware of your satellite nodes, starting with the controller node itself, which will also act as a satellite node in your cluster. Enter this command on the controller node, substituting your node’s hostname for alfa and its IP address for 192.168.122.50:

# linstor node create alfa 192.168.122.50

❗ IMPORTANT: LINSTOR expects the node name you specify here to match the hostname of that machine. Check the hostname of the machine with uname -n and verify that it matches what you supplied to linstor node create.
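
On the first node used in this tutorial, that check looks like this:

# uname -n
alfa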

Output from the linstor node create command will show various success messages and information related to creating the LINSTOR node.

SUCCESS:
Description:
    New node 'alfa' registered.
Details:
    Node 'alfa' UUID is: 98a19544-55ef-41db-aafb-5f9808469742
SUCCESS:
Description:
    Node 'alfa' authenticated
Details:
[...]

Repeat this command on the controller node, substituting the hostname and IP address of each of your other VMs (an example follows the table below). When you have added all nodes, enter another linstor node list command. Your node list should be similar to this:

╭────────────────────────────────────────────────────────────╮
┊ Node    ┊ NodeType  ┊ Addresses                   ┊ State  ┊
╞════════════════════════════════════════════════════════════╡
┊ alfa    ┊ SATELLITE ┊ 192.168.122.50:3366 (PLAIN) ┊ Online ┊
┊ bravo   ┊ SATELLITE ┊ 192.168.122.51:3366 (PLAIN) ┊ Online ┊
┊ charlie ┊ SATELLITE ┊ 192.168.122.52:3366 (PLAIN) ┊ Online ┊
╰────────────────────────────────────────────────────────────╯
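
For reference, with the example hostnames and addresses shown in the table above, the commands that register the two remaining nodes are:

# linstor node create bravo 192.168.122.51
# linstor node create charlie 192.168.122.52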

Creating the Storage Pools

Now LINSTOR knows which machines belong to its cluster. The next step is to tell LINSTOR what storage it is supposed to manage from these nodes.

To do that, you first have to create that storage. Remember how you attached a separate 20 GiB disk to the machines for LINSTOR to use? You will now create an LVM volume group from this disk.

📝 NOTE: LINSTOR also supports using ZFS logical volumes if that might be a better fit for your use case.
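
This tutorial continues with LVM. As a rough sketch only, the ZFS equivalents of the LVM commands used in the next steps, assuming a zpool named storage built on each node's data disk, would look something like this:

# zpool create storage /dev/sdX
# linstor storage-pool create zfs alfa gateway-storage storage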

Enter the following command on every machine in the cluster, substituting the actual path of your storage device for /dev/sdX:

# vgcreate storage /dev/sdX

This command will implicitly create an LVM physical volume on the device, if one does not exist already. You now have a volume group called storage on all of your nodes. The only thing left to do is to tell LINSTOR about it.

Enter the following command on the controller node:

# linstor storage-pool create lvm alfa gateway-storage storage

Output from the command will show various success messages and information related to creating the storage pool.

SUCCESS:
    Successfully set property key(s): StorDriver/StorPoolName
SUCCESS:
Description:
    New storage pool 'gateway-storage' on node 'alfa' registered.
Details:
    Storage pool 'gateway-storage' on node 'alfa' UUID is: e48ed53e-61aa-48e9-ba90-b9cc2ae21bc9
SUCCESS:
    (alfa) Changes applied to storage pool 'gateway-storage'
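
Repeat this command on the controller node for each of the other satellite nodes. With the example node names used in this tutorial, that would be:

# linstor storage-pool create lvm bravo gateway-storage storage
# linstor storage-pool create lvm charlie gateway-storage storage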

You have now created a storage pool called gateway-storage from your LVM volume group on every node in your LINSTOR cluster. You can list the storage pools in your cluster by entering the following command:

# linstor storage-pool list

Output from the command will show a table listing your storage pools. It will be similar to this:

╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ StoragePool          ┊ Node    ┊ Driver   ┊ PoolName ┊ FreeCapacity ┊ TotalCapacity ┊ CanSnapshots ┊ State ┊ SharedName ┊
╞═════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ DfltDisklessStorPool ┊ alfa    ┊ DISKLESS ┊          ┊              ┊               ┊ False        ┊ Ok    ┊            ┊
┊ DfltDisklessStorPool ┊ bravo   ┊ DISKLESS ┊          ┊              ┊               ┊ False        ┊ Ok    ┊            ┊
┊ DfltDisklessStorPool ┊ charlie ┊ DISKLESS ┊          ┊              ┊               ┊ False        ┊ Ok    ┊            ┊
┊ gateway-storage      ┊ alfa    ┊ LVM      ┊ storage  ┊    20.00 GiB ┊     20.00 GiB ┊ False        ┊ Ok    ┊            ┊
┊ gateway-storage      ┊ bravo   ┊ LVM      ┊ storage  ┊    20.00 GiB ┊     20.00 GiB ┊ False        ┊ Ok    ┊            ┊
┊ gateway-storage      ┊ charlie ┊ LVM      ┊ storage  ┊    20.00 GiB ┊     20.00 GiB ┊ False        ┊ Ok    ┊            ┊
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

Your LINSTOR cluster is now fully set up.

Installing DRBD Reactor

DRBD Reactor is another important component of a LINSTOR Gateway cluster. While LINSTOR manages the storage itself, DRBD Reactor coordinates highly available services that use this storage.

To learn more about DRBD Reactor, refer to its chapter in the DRBD9 User’s Guide.

Install DRBD Reactor in your cluster by entering the following command on all nodes:

# apt -y install drbd-reactor

As always, you want to verify that the service is actually started and enabled:

# systemctl status drbd-reactor

Output from the command will show that the service is enabled and running.

● drbd-reactor.service - DRBD-Reactor Service
     Loaded: loaded (/usr/lib/systemd/system/drbd-reactor.service; enabled; preset: enabled)
     Active: active (running) since Wed 2024-08-07 15:12:24 UTC; 34s ago
       Docs: man:drbd-reactor
             man:drbd-reactorctl
             man:drbd-reactor.toml
   Main PID: 10976 (drbd-reactor)
      Tasks: 5 (limit: 1112)
     Memory: 1000.0K (peak: 1.0M)
        CPU: 3ms
     CGroup: /system.slice/drbd-reactor.service
             ├─10976 /usr/bin/drbd-reactor
             └─10983 drbdsetup events2 --full --poll

To ensure smooth operation with LINSTOR Gateway, you also want to configure DRBD Reactor so that it reloads itself when its configuration changes. Do this by entering the following commands on all nodes:

# cp /usr/share/doc/drbd-reactor/examples/drbd-reactor-reload.{path,service} /etc/systemd/system/
# systemctl enable --now drbd-reactor-reload.path
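
You can verify that the reload path unit is active and watching for configuration changes:

# systemctl status drbd-reactor-reload.path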

Installing LINSTOR Gateway

Now you can install the final component in your high-availability cluster: LINSTOR Gateway.

LINSTOR Gateway manages LINSTOR resources and DRBD Reactor services, ensuring highly available access to the underlying storage.

To install LINSTOR Gateway, go to its “Releases” page on GitHub, download the latest build on the controller node, and add executable permissions to it:

# wget -O /usr/sbin/linstor-gateway https://github.com/LINBIT/linstor-gateway/releases/latest/download/linstor-gateway-linux-amd64
# chmod +x /usr/sbin/linstor-gateway

📝 NOTE: As with DRBD and LINSTOR, LINSTOR Gateway is also available from LINBIT’s repositories for customers with a support subscription. Users in production and enterprise environments should install LINBIT software using prebuilt packages from the official repositories.

LINSTOR Gateway has a server component which needs to be running in the background. You will use the following systemd service to accomplish that task:

[Unit]
Description=LINSTOR Gateway
After=network.target

[Service]
ExecStart=/usr/sbin/linstor-gateway server --addr ":8080"

[Install]
WantedBy=multi-user.target

Create the file /etc/systemd/system/linstor-gateway.service on the controller node and paste the above content into it to create the service.

Then, you need to start and enable LINSTOR Gateway:

# systemctl daemon-reload
# systemctl enable --now linstor-gateway
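
In this tutorial, the LINSTOR Gateway server runs on the same node as the LINSTOR controller, so it can reach the controller on localhost without further configuration. If you run the server on a different node, you need to tell it where the controller is. One way to do that, assuming your version honors the LS_CONTROLLERS environment variable used by LINSTOR client tooling (verify this against the LINSTOR Gateway documentation), is to add a line such as the following to the [Service] section of the unit file, substituting your controller node’s address:

Environment=LS_CONTROLLERS=http://192.168.122.50:3370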

Verify that all the dependencies are correctly in place by using LINSTOR Gateway’s built-in health check. On the controller node, enter the following command:

# linstor-gateway check-health --iscsi-backends lio

❗ IMPORTANT: Because this tutorial uses LIO as an iSCSI back end and LINSTOR Gateway checks for SCST as a back end by default, you need to add the --iscsi-backends lio command argument.

Output from the command will be similar to the following:

[✓] LINSTOR
[✓] drbd-reactor
[✓] Resource Agents
[✓] iSCSI
[!] NVMe-oF
    ✗ The nvmetcli tool is not available
      exec: "nvmetcli": executable file not found in $PATH
      Please install the nvmetcli package
    ✗ Kernel module nvmet is not loaded
      Execute modprobe nvmet or install package nvmetcli
[!] NFS
    ✗ Service nfs-server.service is in the wrong state (not loaded).
      This systemd service conflicts with LINSTOR Gateway.
      It needs to be loaded, but not started.
      Make sure that:
      • the nfs-server package is installed
      • the nfs-server.service systemd unit is stopped and disabled
      Execute systemctl disable --now nfs-server.service to disable and stop the service.

FATA[0000] Health check failed: found 2 issues

Two items in the health check failed, but they are related to NVMe-oF and NFS. Because you are only using iSCSI in this tutorial, you can safely ignore these errors. Everything else seems to be set up correctly.

You can also verify that the LINSTOR Gateway server is now running by entering the following command:

# linstor-gateway iscsi list

Output will list your LINSTOR Gateway-created iSCSI targets.

+-----+------------+---------------+-----+---------------+
| IQN | Service IP | Service state | LUN | LINSTOR state |
+-----+------------+---------------+-----+---------------+
+-----+------------+---------------+-----+---------------+

Success! You do not have any iSCSI targets yet, but you will create one soon.

Creating an iSCSI Target

Now that all the prerequisites are set up, you can get to the exciting part. You will now create a highly available iSCSI target by using LINSTOR Gateway.

Thanks to the powerful technology stack you just built, this is now almost trivial: you can do it with a single command on the controller node.

You will have to choose your own iSCSI Qualified Name (IQN) and a service IP. The service IP should not already be in use by another machine. The service IP address will be dynamically assigned (by a DRBD Reactor-controlled OCF resource agent) to whichever node is currently hosting the iSCSI target. This way, an iSCSI initiator can reach the target at a consistent address, regardless of which node is hosting the service.

# linstor-gateway iscsi create iqn.2001-09.com.linbit:my-target 192.168.122.222/24 10G

After entering the iscsi create command, LINSTOR Gateway will do some work in the background. After a few seconds, output will show that LINSTOR Gateway successfully created your iSCSI target.

Created iSCSI target 'iqn.2001-09.com.linbit:my-target'

In this example, you created a 10 GiB volume.

And that’s it!

List your iSCSI targets again:

# linstor-gateway iscsi list

Output from the command will now show the target in the iSCSI target list and on which node the service is running:

+----------------------------------+--------------------+----------------+-----+---------------+
|               IQN                |     Service IP     | Service state  | LUN | LINSTOR state |
+----------------------------------+--------------------+----------------+-----+---------------+
| iqn.2001-09.com.linbit:my-target | 192.168.122.222/24 | Started (alfa) |   1 | OK            |
+----------------------------------+--------------------+----------------+-----+---------------+

📝 NOTE: The LINSTOR state might show as Degraded for a while after creating the target. This is because DRBD is still synchronizing the data between the nodes. The target stays accessible during this process, and the LINSTOR state will eventually change to OK on its own.

You can check the synchronization progress by entering a linstor resource list command.
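
For example, on the controller node:

# linstor resource list

You can also watch the synchronization from DRBD’s perspective on any node by entering a drbdadm status command.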

You can now connect to the iSCSI target by specifying its service IP. Consuming the exported storage with an iSCSI initiator is outside the scope of this guide, but this topic is well-documented elsewhere. For example, you can refer to Red Hat documentation or VMware documentation.
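
As a minimal sketch only, because initiator setup varies by platform, discovering and logging in to the target from a Linux host with the open-iscsi tools might look like this:

# apt -y install open-iscsi
# iscsiadm -m discovery -t sendtargets -p 192.168.122.222
# iscsiadm -m node -T iqn.2001-09.com.linbit:my-target -p 192.168.122.222 --login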

Conclusion and Next Steps

The primary benefit of a highly available iSCSI target is that if the node hosting the iSCSI target fails, the service automatically migrates to another cluster node and the target remains accessible.

After using LINSTOR Gateway to configure an iSCSI target, you might also be interested in configuring NVMe-oF targets or NFS exports. This guide’s general principles and steps also apply to both of those use cases.

For further reading, refer to the LINSTOR Gateway chapter of the LINSTOR User’s Guide and the open source project’s GitHub repository.


Changelog

2023-03-23:

  • Post originally published

2024-08-14:

  • Updated to use Ubuntu 24.04
  • Updated list of LINBIT supported OS versions
  • Updated instructions and explanations around LIO and SCST iSCSI back ends
  • Updated instructions around linstor-gateway check-health command
  • Updated output of linstor-gateway iscsi list: service state now shows on which node the iSCSI target is running
  • Made non-technical language updates and copy edits

Christoph Böhmwalder

Christoph has been working as a software engineer for LINBIT since 2019. While initially a part of the DRBD team, his focus has also shifted towards higher-level components of LINBIT's software stack. Today, his primary roles consist of coordinating DRBD development within the Linux kernel, as well as developing and maintaining the LINSTOR Gateway and LINBIT VSAN products.
