Multi-AZ Replication
Are you looking to leverage multiple availability zones in Amazon Web Services (AWS) or Microsoft Azure? LINSTOR® can easily manage your block storage and automatically replicate your data between availability zones (AZs). This post explores multi-AZ replication using automatic placement rules in LINSTOR.
This blog post illustrates the advantages of using LINSTOR’s lesser-known auxiliary properties. With just a few simple configuration changes, we can ensure that newly created resources always have their data replicas deployed across multiple AZs.
LINSTOR Cluster Configuration
The LINSTOR cluster used in this blog will have the following characteristics:
- Five LINSTOR nodes (total) deployed within three AZs.
- LINSTOR node pairs (four total) deployed across two primary AZs: AZ-1 and AZ-2.
- The four nodes in AZ-1 and AZ-2 have all been configured with the sp0 storage pool for replica storage.
- A single LINSTOR controller node in AZ-3. This node does not contain any replica storage and will also act as a tiebreaker node for quorum purposes.
LINSTOR cluster node list:
[root@linstor-ctrl-0 ~]# linstor node list
╭────────────────────────────────────────────────────────────────────╮
┊ Node ┊ NodeType ┊ Addresses ┊ State ┊
╞════════════════════════════════════════════════════════════════════╡
┊ linstor-ctrl-0 ┊ COMBINED ┊ 192.168.222.250:3376 (PLAIN) ┊ Online ┊
┊ linstor-sat-0 ┊ SATELLITE ┊ 192.168.222.10:3366 (PLAIN) ┊ Online ┊
┊ linstor-sat-1 ┊ SATELLITE ┊ 192.168.222.11:3366 (PLAIN) ┊ Online ┊
┊ linstor-sat-2 ┊ SATELLITE ┊ 192.168.222.12:3366 (PLAIN) ┊ Online ┊
┊ linstor-sat-3 ┊ SATELLITE ┊ 192.168.222.13:3366 (PLAIN) ┊ Online ┊
╰────────────────────────────────────────────────────────────────────╯
Note that the LINSTOR controller node is a COMBINED type, functioning as both a controller node and a satellite node.
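If you were building a similar cluster, a COMBINED node can be registered by passing a node type to the node create command. The following is a minimal sketch, assuming the node names and addresses shown above:
# register the controller as a COMBINED node (controller + satellite)
linstor node create linstor-ctrl-0 192.168.222.250 --node-type Combined
# register a diskful satellite (Satellite is the default node type)
linstor node create linstor-sat-0 192.168.222.10 --node-type Satellite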
LINSTOR cluster storage pool listing:
[root@linstor-ctrl-0 ~]# linstor storage-pool list
╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ StoragePool ┊ Node ┊ Driver ┊ PoolName ┊ FreeCapacity ┊ TotalCapacity ┊ CanSnapshots ┊ State ┊ SharedName ┊
╞════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ DfltDisklessStorPool ┊ linstor-ctrl-0 ┊ DISKLESS ┊ ┊ ┊ ┊ False ┊ Ok ┊ ┊
┊ DfltDisklessStorPool ┊ linstor-sat-0 ┊ DISKLESS ┊ ┊ ┊ ┊ False ┊ Ok ┊ ┊
┊ DfltDisklessStorPool ┊ linstor-sat-1 ┊ DISKLESS ┊ ┊ ┊ ┊ False ┊ Ok ┊ ┊
┊ DfltDisklessStorPool ┊ linstor-sat-2 ┊ DISKLESS ┊ ┊ ┊ ┊ False ┊ Ok ┊ ┊
┊ DfltDisklessStorPool ┊ linstor-sat-3 ┊ DISKLESS ┊ ┊ ┊ ┊ False ┊ Ok ┊ ┊
┊ sp0 ┊ linstor-sat-0 ┊ LVM ┊ drbdpool ┊ 3.74 GiB ┊ 8.00 GiB ┊ False ┊ Ok ┊ ┊
┊ sp0 ┊ linstor-sat-1 ┊ LVM ┊ drbdpool ┊ 3.74 GiB ┊ 8.00 GiB ┊ False ┊ Ok ┊ ┊
┊ sp0 ┊ linstor-sat-2 ┊ LVM ┊ drbdpool ┊ 3.74 GiB ┊ 8.00 GiB ┊ False ┊ Ok ┊ ┊
┊ sp0 ┊ linstor-sat-3 ┊ LVM ┊ drbdpool ┊ 3.74 GiB ┊ 8.00 GiB ┊ False ┊ Ok ┊ ┊
╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
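For reference, an LVM-backed storage pool such as sp0 can be created with commands similar to the following sketch, which assumes an existing LVM volume group named drbdpool (the PoolName shown above) on each diskful satellite:
# create the sp0 storage pool on each diskful satellite node
linstor storage-pool create lvm linstor-sat-0 sp0 drbdpool
linstor storage-pool create lvm linstor-sat-1 sp0 drbdpool
linstor storage-pool create lvm linstor-sat-2 sp0 drbdpool
linstor storage-pool create lvm linstor-sat-3 sp0 drbdpool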
Spawning Resources Without Auxiliary Properties
In LINSTOR clusters with three or more satellite nodes, issuing a spawn-resources command by default creates a new resource with two diskful replicas and one diskless tiebreaker. However, nothing guarantees or specifies where the replicas or tiebreaker are placed.
With multiple LINSTOR nodes residing in both AZ-1 and AZ-2, issuing a spawn-resources command after the initial cluster deployment results in randomly placed replicas. The replicas might end up on two nodes within the same AZ; other times, they might be spread across separate AZs.
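For example, spawning a resource from a resource group that only specifies a placement count, without any placement constraints, might look like the following sketch. The resource group and resource names here are hypothetical:
# hypothetical resource group with only a placement count --
# replica placement relative to AZs is left to chance
linstor resource-group create --storage-pool sp0 --place-count 2 rg_unaware
linstor volume-group create rg_unaware
linstor resource-group spawn-resources rg_unaware test_000 256M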
The number of nodes is what exposes the problem in this case. For example, if we had a three-node LINSTOR cluster deployed across three AZs, the replicas would always land in different AZs simply because there is only one node per AZ.
Defining Auxiliary Properties for Availability Zones
The first step in configuring our cluster for improved replica placement is assigning an AZ to each node by using auxiliary properties:
# set availability_zone auxiliary properties for each node
linstor node set-property --aux linstor-sat-0 availability_zone AZ-1
linstor node set-property --aux linstor-sat-1 availability_zone AZ-1
linstor node set-property --aux linstor-sat-2 availability_zone AZ-2
linstor node set-property --aux linstor-sat-3 availability_zone AZ-2
# quorum node
linstor node set-property --aux linstor-ctrl-0 availability_zone AZ-3
New auxiliary properties listed for each node:
[root@linstor-ctrl-0 ~]# linstor node list --show-aux-props
╭─────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ Node ┊ NodeType ┊ Addresses ┊ AuxProps ┊ State ┊
╞═════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ linstor-ctrl-0 ┊ COMBINED ┊ 192.168.222.250:3366 (PLAIN) ┊ Aux/availability_zone=AZ-3 ┊ Online ┊
┊ linstor-sat-0 ┊ SATELLITE ┊ 192.168.222.10:3366 (PLAIN) ┊ Aux/availability_zone=AZ-1 ┊ Online ┊
┊ linstor-sat-1 ┊ SATELLITE ┊ 192.168.222.11:3366 (PLAIN) ┊ Aux/availability_zone=AZ-1 ┊ Online ┊
┊ linstor-sat-2 ┊ SATELLITE ┊ 192.168.222.12:3366 (PLAIN) ┊ Aux/availability_zone=AZ-2 ┊ Online ┊
┊ linstor-sat-3 ┊ SATELLITE ┊ 192.168.222.13:3366 (PLAIN) ┊ Aux/availability_zone=AZ-2 ┊ Online ┊
╰─────────────────────────────────────────────────────────────────────────────────────────────────╯
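You can also inspect an individual node’s properties. Assuming your LINSTOR client version supports it, a command such as the following should list the new auxiliary property alongside the node’s other properties:
# show all properties set on a single node, including Aux/availability_zone
linstor node list-properties linstor-sat-0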
Creating the Resource Group
Now that each LINSTOR node’s auxiliary properties are properly configured, we can leverage the new properties in our resource group definitions.
Create a new resource group named rg0 on top of the sp0 storage pool:
# rg0 resource group defined with two replicas across different AZs
linstor resource-group create --storage-pool sp0 --place-count 2 rg0 --replicas-on-different availability_zone
# create the corresponding volume group
linstor volume-group create rg0
Take note of the --replicas-on-different parameter above, which specifies that replicas must be placed in different AZs. If you’re working with an already configured LINSTOR cluster, existing resource groups can also be updated with new parameters, as sketched below.
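For example, if rg0 had originally been created without the placement constraint, it could likely be updated in place rather than recreated. A minimal sketch, assuming your LINSTOR client version supports modifying this option:
# add the AZ placement constraint to an existing resource group
linstor resource-group modify rg0 --replicas-on-different availability_zone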
The new configuration for rg0:
[root@linstor-ctrl-0 ~]# linstor resource-group list --resource-groups rg0
╭───────────────────────────────────────────────────────────────────────────────────────╮
┊ ResourceGroup ┊ SelectFilter ┊ VlmNrs ┊ Description ┊
╞═══════════════════════════════════════════════════════════════════════════════════════╡
┊ rg0 ┊ PlaceCount: 2 ┊ 0 ┊ ┊
┊ ┊ StoragePool(s): sp0 ┊ ┊ ┊
┊ ┊ ReplicasOnDifferent: ['Aux/availability_zone'] ┊ ┊ ┊
╰───────────────────────────────────────────────────────────────────────────────────────╯
Spawning Resources
Now that our configuration is finalized, we can create new resources in rg0 by using spawn-resources:
# spawn various resources intended for use with MySQL databases
linstor resource-group spawn-resources rg0 mysql_001 256M
linstor resource-group spawn-resources rg0 mysql_002 256M
After creating these new resources, let’s see how they’re distributed throughout our LINSTOR cluster:
[root@linstor-ctrl-0 ~]# linstor volume list
╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
┊ Node ┊ Resource ┊ StoragePool ┊ VolNr ┊ MinorNr ┊ DeviceName ┊ Allocated ┊ InUse ┊ State ┊
╞═══════════════════════════════════════════════════════════════════════════════════════════════════════════════════════╡
┊ linstor-ctrl-0 ┊ mysql_001 ┊ DfltDisklessStorPool ┊ 0 ┊ 1000 ┊ /dev/drbd1000 ┊ ┊ Unused ┊ TieBreaker ┊
┊ linstor-sat-0 ┊ mysql_001 ┊ sp0 ┊ 0 ┊ 1000 ┊ /dev/drbd1000 ┊ 260 MiB ┊ Unused ┊ UpToDate ┊
┊ linstor-sat-2 ┊ mysql_001 ┊ sp0 ┊ 0 ┊ 1000 ┊ /dev/drbd1000 ┊ 260 MiB ┊ Unused ┊ UpToDate ┊
┊ linstor-ctrl-0 ┊ mysql_002 ┊ DfltDisklessStorPool ┊ 0 ┊ 1001 ┊ /dev/drbd1001 ┊ ┊ Unused ┊ TieBreaker ┊
┊ linstor-sat-1 ┊ mysql_002 ┊ sp0 ┊ 0 ┊ 1001 ┊ /dev/drbd1001 ┊ 260 MiB ┊ Unused ┊ UpToDate ┊
┊ linstor-sat-3 ┊ mysql_002 ┊ sp0 ┊ 0 ┊ 1001 ┊ /dev/drbd1001 ┊ 260 MiB ┊ Unused ┊ UpToDate ┊
╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯
Looking through the above list of newly created DRBD® volumes in the cluster, we can see that the mysql_001 resource has been created with replicas on linstor-sat-0 (AZ-1) and linstor-sat-2 (AZ-2). Likewise, the mysql_002 resource has been created with replicas on linstor-sat-1 (AZ-1) and linstor-sat-3 (AZ-2). Both resources were created with their respective replicas placed in different AZs (AZ-1 and AZ-2), as specified in the resource group.
With the new configuration changes, LINSTOR resources are now effortlessly spawned with their data replicas residing in different AZs.
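As a usage example, a spawned resource’s DRBD device can be formatted and mounted like any other block device on whichever node will run the workload. A minimal sketch, assuming the mysql_001 device path shown above and a hypothetical mount point:
# on the node that will use mysql_001 (for example, linstor-sat-0):
# format the DRBD device and mount it at a hypothetical mount point
mkfs.ext4 /dev/drbd1000
mkdir -p /var/lib/mysql
mount /dev/drbd1000 /var/lib/mysql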
Conclusion
Replicating between multiple regions is also possible by extending LINSTOR with our closed-source DRBD Proxy solution. Due to the higher latency between regions, DRBD Proxy becomes a necessary add-on, enabling asynchronous replication between regions without impacting performance. LINSTOR, DRBD, and DRBD Proxy are all components of LINBIT®’s software-defined storage (SDS) solution.
To learn more about how LINSTOR can simplify your block storage management and data replication requirements in a multi-AZ or multi-region deployment, reach out to us and request a support evaluation today.