DRBD Client (diskless mode) Performance

DRBD 9 introduces an interesting new feature: DRBD client nodes. With this feature you can have a DRBD node that holds no local replica of its own. Essentially, the client node simply reads from and writes to the storage of its peer systems over the DRBD connections.
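For reference, a diskless client can be described directly in a DRBD 9 resource configuration by giving that host "disk none;". The sketch below is only a rough illustration of what such a resource file might look like; the hostnames, addresses, and backing devices are placeholders rather than the actual lab setup, and a tool such as DRBDmanage can generate equivalent configuration when you assign a resource to a node in client mode.

resource r0 {
    device /dev/drbd100;

    on storage-a {
        node-id   0;
        disk      /dev/ram0;          # placeholder backing device
        meta-disk internal;
        address   192.168.10.1:7789;  # placeholder address
    }
    on storage-b {
        node-id   1;
        disk      /dev/ram0;
        meta-disk internal;
        address   192.168.10.2:7789;
    }
    on diskless-client {
        node-id   2;
        disk      none;               # no local replica: this node acts as a DRBD client
        address   192.168.10.3:7789;
    }

    connection-mesh {
        hosts storage-a storage-b diskless-client;
    }
}

With a file like this in place on all three nodes, running "drbdadm up r0" on the client should bring up /dev/drbd100 there without any local backing disk.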

As DRBD’s communication is designed to operate with minimal overhead, I thought it might be interesting to see how it compares to another popular solution for network-attached block storage: iSCSI. In both cases the backend storage is a highly available DRBD cluster with two synchronous replicas of the data.
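For the iSCSI side of the comparison, the client simply logs into a target exported from the cluster; the resulting disk is presumably what appears as /dev/sdb in the fio commands further down. A typical open-iscsi session looks roughly like the following sketch, where the portal address and target IQN are placeholders rather than the ones used in this test:

# discover targets exported by the cluster (placeholder portal address)
iscsiadm -m discovery -t sendtargets -p 192.168.10.254

# log in to the discovered target (placeholder IQN)
iscsiadm -m node -T iqn.2016-06.com.example:drbd-r0 -p 192.168.10.254 --login

# the new SCSI disk (e.g. /dev/sdb) can then be handed to dd or fio
lsblk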

For this test we utilized KVM virtual machines. A 1GiB RAM disk was used as the backing storage for the DRBD cluster. Replication and client connections all ran over the same Emulex OneConnect 10Gb interface. The performance of the DRBD storage itself was measured at 98.2 MiB/s of throughput and 25,160 IOPS. Not too bad for a purely random workload.[1. fio --readwrite=randread --runtime=30s --blocksize=4k --ioengine=libaio --numjobs=16 --iodepth=16 --direct=1 --filename=/dev/drbd100 --name=test0 --group_reporting]

Please note that no additional tuning was done to the network or storage stack, nor to iSCSI or DRBD themselves; everything was left at the defaults of CentOS Linux 7.2.1511. The DRBD version used was 9.0.3-1, with DRBDmanage 0.96.1.

Only three short benchmarks were performed. The first was a simple sequential write throughput test via dd.[1. dd if=/dev/zero of=/dev/drbd100 bs=1M count=1024 oflag=direct]

[Chart: Sequential Write Throughput]

When writing 1MiB blocks sequentially, we see quite an improvement with the DRBD client; it is pretty much on par with what the backend storage cluster itself can handle.

The next test was random read IO. For these tests we leveraged the Flexible I/O tester, fio.[1. fio --readwrite=randread --runtime=30s --blocksize=4k --ioengine=libaio --numjobs=16 --iodepth=16 --direct=1 --filename=/dev/sdb --name=test0 --group_reporting]

[Chart: Random Read Throughput]

[Chart: Random Read IOPS]

Again, we see substantial performance improvements when using a DRBD diskless client.

Next we performed the same test, but with a random write workload.[1. fio --readwrite=randwrite --runtime=30s --blocksize=4k --ioengine=libaio --numjobs=16 --iodepth=16 --direct=1 --filename=/dev/sdb --name=test0 --group_reporting]

[Chart: Random Write Throughput]

[Chart: Random Write IOPS]

Again, the reduced overhead of DRBD's native communication is very much apparent.

While the DRBD transport performs better in these tests, there are still some use cases where one might want to choose iSCSI; even then, it is worth at least considering a diskless DRBD client instead.
