Create a Highly Available iSCSI Target Using LINSTOR Gateway

This tutorial will teach you how to set up a highly available iSCSI target by using LINSTOR Gateway.

It is an ideal starting point to get you off the ground as a new user. The basic principles from this tutorial also apply to NVMe-oF and NFS exports.

Your Working Environment

One of the most critical steps of any setup is preparation. So let’s first ensure that your environment is suitable for a LINSTOR Gateway deployment.

You will need three Linux-based machines, either virtual or physical. They should have one disk for the operating system and one for your highly available data.

Currently, LINSTOR Gateway officially supports these Linux distributions:

  • Ubuntu 20.04 (Focal Fossa)
  • Ubuntu 22.04 (Jammy Jellyfish)
  • Red Hat Enterprise Linux 8.0 (and its derivatives)
  • Red Hat Enterprise Linux 9.0 (and its derivatives)

For this tutorial, you will be working with three virtual machines with the following specifications:

  • 2 CPU cores
  • 2 GiB of RAM
  • A 10 GiB disk for the operating system
  • A 20 GiB disk for the highly available data store
  • An Ubuntu 22.04 operating system

Adding the LINBIT Repository

LINBIT®, the company behind DRBD® and LINSTOR®, publishes unofficial packages for Ubuntu users through a personal package archive (PPA).

To add the repository to your system, execute the following commands as root:

# add-apt-repository -y ppa:linbit/linbit-drbd9-stack
# apt-get update

📝 Note: LINBIT does not maintain these packages to the same rigorous standards as their official repositories.

For production applications, we recommend using the official LINBIT repositories. You can get access to those – and many more benefits such as 24-hour enterprise support – by becoming a LINBIT customer.

First, you will have to install some dependencies. You will need these later; for now, just run the following command on all nodes:

# apt -y install resource-agents targetcli-fb

💡 Tip: After installing these packages, you may get a warning message about a kernel update and rebooting your nodes. If so, reboot your nodes when you are able to safely do so, and then continue with the instructions.

Installing DRBD

The Distributed Replicated Block Device – or DRBD – is the most fundamental requirement for every LINSTOR Gateway cluster. It is the bottom-most layer in the software stack you will build.

You do not need to understand all its intricacies to follow this tutorial. For now, just know that DRBD essentially mirrors your data from one node to the others.

To learn more about DRBD and how it works, refer to The DRBD9 User’s Guide.

Install the kernel module and the user space utilities on all nodes:

# apt -y install drbd-dkms drbd-utils

Verify that the kernel module was installed correctly:

# modprobe drbd
# cat /proc/drbd
version: 9.2.2 (api:2/proto:86-121)
GIT-hash: 8435da3ec2a0a70dee0fedf354276d6f1c6ba708 build by localhost, 2023-03-06 14:31:31
Transports (api:18):

Setting up a LINSTOR Cluster

As the naming might suggest, LINSTOR is another crucial prerequisite for a LINSTOR Gateway deployment. As with DRBD, the details on its inner workings are not important here. For the purposes of this tutorial, think of LINSTOR as an orchestrator for DRBD resources.

Satellite

In LINSTOR’s language, a Satellite is a node that can host and replicate storage. In your case, all three machines should become satellites so that LINSTOR can manage their storage.

Run the following on all nodes to install LINSTOR’s Satellite component:

# apt -y install linstor-satellite

Verify that the service was correctly started and enabled:

# systemctl status linstor-satellite
● linstor-satellite.service - LINSTOR Satellite Service
   Loaded: loaded (/usr/lib/systemd/system/linstor-satellite.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2023-03-06 12:53:31 UTC; 2 days ago
 Main PID: 1284 (java)
    Tasks: 36 (limit: 12139)
   Memory: 317.2M
   CGroup: /system.slice/linstor-satellite.service
           ├─1284 java -Xms32M -XX:MinHeapFreeRatio=40 -XX:MaxHeapFreeRatio=70 -classpath /usr/share/linstor-server/lib/conf:/usr/share/linstor-server/lib/* com.linbit.linstor.core.Satellite --logs=/var/log/linstor-satellite --config-directory=/etc/linstor
           └─1583 drbdsetup events2 all

In order for LINSTOR Gateway to work, you need to adapt some of the Satellite’s configuration options.

First, create a directory for LINSTOR configuration files:

# mkdir -p /etc/linstor

Then, create and edit the file /etc/linstor/linstor_satellite.toml on all nodes. The following setting allows the satellite to manage external files in the listed paths, which LINSTOR Gateway needs in order to deploy DRBD Reactor and systemd configuration. Paste this content into the file:

[files]
allowExtFiles = [
    "/etc/systemd/system",
    "/etc/systemd/system/linstor-satellite.service.d",
    "/etc/drbd-reactor.d"
]

Finally, restart the LINSTOR Satellite for the configuration change to take effect:

# systemctl restart linstor-satellite

Controller

In a LINSTOR cluster, there is always exactly one Controller. This is the control center for the whole cluster, where resources are created and coordinated.

📝 Note: Perhaps counter-intuitively, a node can be a Satellite and a Controller at the same time.

Choose any one of your machines to host the LINSTOR controller. Execute the following steps only on the controller node to install, enable, and start the LINSTOR Controller component:

# apt -y install linstor-controller
# systemctl enable --now linstor-controller

You also want to be able to communicate with the LINSTOR Controller so that you can set up your cluster. This can be done through the LINSTOR Client, so install that too:

# apt -y install linstor-client

Your LINSTOR Controller should now be up and running. Try it out:

# linstor node list
╭─────────────────────────────────────╮
┊ Node ┊ NodeType ┊ Addresses ┊ State ┊
╞═════════════════════════════════════╡
╰─────────────────────────────────────╯

It works! The list is still empty because you have not registered any nodes yet, but you will change that soon.

Building the Cluster

After finishing the required installation steps, you can now start to build your LINSTOR cluster. In the next steps, you will add nodes and a storage pool to the cluster.

Adding the Nodes to LINSTOR

If you want LINSTOR to manage your storage, you first have to tell it where the storage is.

You will now turn your loose set of machines into an actual cluster. The first step is to make LINSTOR aware of your satellite nodes, starting with the controller node itself. Run this command on the controller node, substituting your node’s hostname for alfa and its IP address for 192.168.122.50:

# linstor node create alfa 192.168.122.50
SUCCESS:
Description:
    New node 'alfa' registered.
Details:
    Node 'alfa' UUID is: 98a19544-55ef-41db-aafb-5f9808469742
SUCCESS:
Description:
    Node 'alfa' authenticated
Details:
    Supported storage providers: [diskless, lvm, lvm_thin, file, file_thin, remote_spdk, openflex_target]
    Supported resource layers  : [drbd, luks, writecache, cache, bcache, storage]
    Unsupported storage providers:
        ZFS: 'cat /sys/module/zfs/version' returned with exit code 1
        ZFS_THIN: 'cat /sys/module/zfs/version' returned with exit code 1
        SPDK: IO exception occured when running 'rpc.py spdk_get_version': Cannot run program "rpc.py": error=2, No such file or directory
        EXOS: IO exception occured when running 'lsscsi --version': Cannot run program "lsscsi": error=2, No such file or directory
              '/bin/bash -c cat /sys/class/sas_phy/*/sas_address' returned with exit code 1
              '/bin/bash -c cat /sys/class/sas_device/end_device-*/sas_address' returned with exit code 1

    Unsupported resource layers:
        NVME: 'modprobe nvmet_rdma' returned with exit code 1
              'modprobe nvme_rdma' returned with exit code 1
        OPENFLEX: 'modprobe nvmet_rdma' returned with exit code 1
                  'modprobe nvme_rdma' returned with exit code 1

📝 Important: LINSTOR expects the node name you specify here to match the hostname of that machine. Check the hostname of the machine with uname -n and verify that it matches what you supplied to linstor node create.
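
For example, on the first node used in this tutorial you would expect the following output (the hostname alfa is just the example name from above):

# uname -n
alfa

If the names do not match, you can adjust the hostname with hostnamectl set-hostname before creating the node.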

Repeat this command on the controller node for each remaining machine, substituting its hostname and IP address. With the example names and addresses used in this tutorial, that would be:
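
# linstor node create bravo 192.168.122.51
# linstor node create charlie 192.168.122.52

In the end, your node list should look something like this: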

# linstor node list
╭────────────────────────────────────────────────────────────╮
┊ Node    ┊ NodeType  ┊ Addresses                   ┊ State  ┊
╞════════════════════════════════════════════════════════════╡
┊ alfa    ┊ SATELLITE ┊ 192.168.122.50:3366 (PLAIN) ┊ Online ┊
┊ bravo   ┊ SATELLITE ┊ 192.168.122.51:3366 (PLAIN) ┊ Online ┊
┊ charlie ┊ SATELLITE ┊ 192.168.122.52:3366 (PLAIN) ┊ Online ┊
╰────────────────────────────────────────────────────────────╯

Creating the Storage Pools

Now LINSTOR knows which machines belong to its cluster. The next step is to tell LINSTOR what storage it is supposed to manage from these nodes.

To do that, you first have to create that storage. Remember how you attached a separate 20 GiB disk to the machines for LINSTOR to use? You will now create an LVM volume group from this disk.
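
If you are not sure which device name the data disk received, you can list the block devices first; in the setup from this tutorial, the data disk is the 20 GiB device that has no partitions and no mount point:

# lsblk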

Run this command on every machine in the cluster, substituting the actual path of your storage device for /dev/sdX:

# vgcreate storage /dev/sdX

You now have a volume group called storage on all of your nodes. The only thing left to do is to tell LINSTOR about it.

Run this on the controller node:

# linstor storage-pool create lvm alfa gateway-storage storage
SUCCESS:
    Successfully set property key(s): StorDriver/StorPoolName
SUCCESS:
Description:
    New storage pool 'gateway-storage' on node 'alfa' registered.
Details:
    Storage pool 'gateway-storage' on node 'alfa' UUID is: e48ed53e-61aa-48e9-ba90-b9cc2ae21bc9
SUCCESS:
    (alfa) Changes applied to storage pool 'gateway-storage'

You have now created a storage pool called gateway-storage from your LVM volume group!

Repeat this for every node, substituting the node name. With the example nodes from this tutorial, that would be:
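
# linstor storage-pool create lvm bravo gateway-storage storage
# linstor storage-pool create lvm charlie gateway-storage storage

Afterward, the storage pool list should look something like this: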

# linstor storage-pool list
╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ StoragePool          ┊ Node    ┊ Driver   ┊ PoolName ┊ FreeCapacity ┊ TotalCapacity ┊ CanSnapshots ┊ State ┊ SharedName ┊
╞═════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ DfltDisklessStorPool ┊ alfa    ┊ DISKLESS ┊          ┊              ┊               ┊ False        ┊ Ok    ┊            ┊
┊ DfltDisklessStorPool ┊ bravo   ┊ DISKLESS ┊          ┊              ┊               ┊ False        ┊ Ok    ┊            ┊
┊ DfltDisklessStorPool ┊ charlie ┊ DISKLESS ┊          ┊              ┊               ┊ False        ┊ Ok    ┊            ┊
┊ gateway-storage      ┊ alfa    ┊ LVM      ┊ storage  ┊    20.00 GiB ┊     20.00 GiB ┊ False        ┊ Ok    ┊            ┊
┊ gateway-storage      ┊ bravo   ┊ LVM      ┊ storage  ┊    20.00 GiB ┊     20.00 GiB ┊ False        ┊ Ok    ┊            ┊
┊ gateway-storage      ┊ charlie ┊ LVM      ┊ storage  ┊    20.00 GiB ┊     20.00 GiB ┊ False        ┊ Ok    ┊            ┊
╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

Your LINSTOR cluster is now fully set up!

Installing DRBD Reactor

DRBD Reactor is another important component of a LINSTOR Gateway cluster. While LINSTOR manages the storage itself, DRBD Reactor coordinates highly available services that use this storage.

To learn more about DRBD Reactor, refer to its chapter in the DRBD User’s Guide.

Install it on all nodes by running the following command:

# apt -y install drbd-reactor

As always, you want to verify that the service is actually started and enabled:

# systemctl status drbd-reactor
● drbd-reactor.service - DRBD-Reactor Service
   Loaded: loaded (/usr/lib/systemd/system/drbd-reactor.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2023-03-08 19:20:14 UTC; 12s ago
     Docs: man:drbd-reactor
           man:drbd-reactorctl
           man:drbd-reactor.toml
 Main PID: 37806 (drbd-reactor)
    Tasks: 9 (limit: 12139)
   Memory: 1.4M
   CGroup: /system.slice/drbd-reactor.service
           ├─37806 /usr/sbin/drbd-reactor
           └─37810 drbdsetup events2 --full --poll

To ensure smooth operation with LINSTOR Gateway, you also want to configure DRBD Reactor so that it reloads itself when its configuration changes. Do this by running the following commands on all nodes:

# cp /usr/share/doc/drbd-reactor/examples/drbd-reactor-reload.{path,service} /etc/systemd/system/
# systemctl enable --now drbd-reactor-reload.path

Installing LINSTOR Gateway

Now you can install the final component in your high-availability cluster: LINSTOR Gateway.

LINSTOR Gateway manages LINSTOR resources as well as DRBD Reactor services, ensuring highly available access to the underlying storage.

To install LINSTOR Gateway, download the latest release binary from the project’s “Releases” page on GitHub onto the controller node and make it executable:

# wget -O /usr/sbin/linstor-gateway https://github.com/LINBIT/linstor-gateway/releases/latest/download/linstor-gateway-linux-amd64
# chmod +x /usr/sbin/linstor-gateway

📝 Note: As with DRBD and LINSTOR, this tool is also available from LINBIT’s repositories for customers with a support subscription. We recommend using these packages for production environments.

LINSTOR Gateway has a server component which needs to be running in the background. To run it as a systemd service, create the file /etc/systemd/system/linstor-gateway.service on the controller node and paste the following content into it:

[Unit]
Description=LINSTOR Gateway
After=network.target

[Service]
ExecStart=/usr/sbin/linstor-gateway server --addr ":8080"

[Install]
WantedBy=multi-user.target

Then reload systemd, and start and enable LINSTOR Gateway:

# systemctl daemon-reload
# systemctl enable --now linstor-gateway
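
As with the other services, confirm that the LINSTOR Gateway server came up correctly:

# systemctl status linstor-gateway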

Verify that all the dependencies are correctly set up by using LINSTOR Gateway’s built-in health check. On the controller node, execute the following command:

# linstor-gateway check-health
[✓] LINSTOR
[✓] drbd-reactor
[✓] Resource Agents
[✓] iSCSI
[!] NVMe-oF
    ✗ The nvmetcli tool is not available
      exec: "nvmetcli": executable file not found in $PATH
      Please install the nvmetcli package
    ✗ Kernel module nvmet is not loaded
      Execute modprobe nvmet or install package nvmetcli
[!] NFS
    ✗ Service nfs-server.service is in the wrong state (not loaded).
      This systemd service conflicts with LINSTOR Gateway.
      It needs to be loaded, but not started.
      Make sure that:
      • the nfs-server package is installed
      • the nfs-server.service systemd unit is stopped and disabled
      Execute systemctl disable --now nfs-server.service to disable and stop the service.

FATA[0000] Health check failed: found 2 issues

Two items in the health check failed, but they are related to NVMe-oF and NFS. Since you are only using iSCSI in this tutorial, you can safely ignore these errors. Everything else seems to be set up correctly!
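
If you do set up NFS later and the nfs-server.service unit ends up started or enabled, the fix is the one quoted directly in the health check output:

# systemctl disable --now nfs-server.service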

You can also verify that the LINSTOR Gateway server is now running:

# linstor-gateway iscsi list
+-----+------------+---------------+-----+---------------+
| IQN | Service IP | Service state | LUN | LINSTOR state |
+-----+------------+---------------+-----+---------------+
+-----+------------+---------------+-----+---------------+

Success! You do not have any iSCSI targets yet, but you will create one soon.

Creating an iSCSI Target

Now that all the prerequisites are set up, you can get to the exciting part. You will now create a highly available iSCSI target using LINSTOR Gateway.

Thanks to the powerful technology stack you just built, this is now almost trivial: it takes a single command on the controller node.

You will have to choose your own iSCSI Qualified Name (IQN) as well as a service IP. The service IP should not already be in use by another machine. It will be dynamically assigned to whichever node is currently hosting the iSCSI target so that iSCSI initiators can reach the target under a consistent address.
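
Before creating the target, you can run a quick sanity check that the service IP is not already in use (a simple, non-authoritative check: if nothing answers, the address is likely free):

# ping -c 3 192.168.122.222

Once you have settled on an IQN and a free service IP, create the target: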

# linstor-gateway iscsi create iqn.2001-09.com.linbit:my-target 192.168.122.222/24 10G
Created iSCSI target 'iqn.2001-09.com.linbit:my-target'

In this example, you are creating a 10 GiB volume, exported under the IQN you chose and reachable at the service IP 192.168.122.222.

And that’s it!

Indeed, you can now see the target in the iSCSI target list:

# linstor-gateway iscsi list
+----------------------------------+--------------------+---------------+-----+---------------+
|               IQN                |     Service IP     | Service state | LUN | LINSTOR state |
+----------------------------------+--------------------+---------------+-----+---------------+
| iqn.2001-09.com.linbit:my-target | 192.168.122.222/24 | Started       |   1 | OK            |
+----------------------------------+--------------------+---------------+-----+---------------+

📝 Note: The LINSTOR state may show as Degraded for a while after creating the target. This is because DRBD is still synchronizing the data between the nodes. The target stays accessible during this process, and the LINSTOR state will change to OK on its own after a while.

You can check the synchronization progress by running linstor resource list.
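
For example, you can use the standard watch utility to rerun the command every two seconds and follow the synchronization as it happens:

# watch linstor resource list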

You can now connect to the iSCSI target by specifying its service IP. Consuming the exported storage with an iSCSI initiator is outside the scope of this guide, but this topic is well-documented elsewhere. For example, you can refer to Red Hat’s Product Documentation or the VMware docs.
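
That said, if you want a quick smoke test from a Linux client with the open-iscsi tools installed, discovering the target looks like this (a minimal sketch, using the service IP from the example above):

# iscsiadm -m discovery -t sendtargets -p 192.168.122.222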

If the node that is hosting the iSCSI target fails, the service automatically migrates to one of the other nodes.
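
You can watch this happen: the drbd-reactorctl utility, which ships with the drbd-reactor package you installed earlier, shows which highly available services a node is currently running. Run it on each node before and after a failover to see where the target lives:

# drbd-reactorctl status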

Conclusion and Next Steps

After using LINSTOR Gateway to configure an iSCSI target, you might also be interested in configuring NVMe-oF targets or NFS exports. This guide’s general principles and steps also apply to both of those use cases.

For further reading, refer to the LINSTOR Gateway chapter of the LINSTOR User’s Guide as well as the project’s GitHub repository.

Christoph Böhmwalder

Christoph has been working as a software engineer for LINBIT since 2019. While initially a part of the DRBD team, his focus has also shifted towards higher-level components of LINBIT's software stack. Today, his primary roles consist of coordinating DRBD development within the Linux kernel, as well as developing and maintaining the LINSTOR Gateway and LINBIT VSAN products.
