This blog post will introduce a new feature and examine the performance of DRBD® Diskless. DRBD 9 introduces an interesting new feature: DRBD client nodes. With this feature, you can have a DRBD node that holds no local replica of the data; essentially, the client node simply reads from and writes to the storage of its peer systems over the DRBD connections.
As DRBD’s communication is designed to operate with minimal overhead, we thought it might be interesting to see how it compares to another popular solution for network-attached block storage: iSCSI. In both setups, the back-end storage is a highly available DRBD cluster with two synchronous replicas of the data.
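To make the topology concrete, here is a hand-written sketch of what such a three-node resource could look like. The hostnames, addresses, and backing devices are illustrative only; in our test the configuration was generated by DRBDmanage rather than written by hand.

resource r0 {
    device /dev/drbd100;

    on storage-a {
        disk      /dev/ram0;
        meta-disk internal;
        address   192.168.10.1:7000;
        node-id   0;
    }
    on storage-b {
        disk      /dev/ram0;
        meta-disk internal;
        address   192.168.10.2:7000;
        node-id   1;
    }
    on client-node {
        disk      none;    # diskless client: no local replica, all I/O goes to the peers
        address   192.168.10.3:7000;
        node-id   2;
    }

    connection-mesh {
        hosts storage-a storage-b client-node;
    }
}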
For this test, we utilized KVM virtual machines, with a 1GiB RAM disk as the backing storage for the DRBD cluster. Replication and client connections all ran over the same Emulex OneConnect 10Gb interface. We measured the DRBD (Diskless) storage at 98.2MiB/s and 25,160 IOPS; not too bad for a purely random workload. That figure came from the following fio job run against the DRBD device:
fio --readwrite=randread --runtime=30s --blocksize=4k --ioengine=libaio \
--numjobs=16 --iodepth=16 --direct=1 --filename=/dev/drbd100 \
--name=test0 --group_reporting
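As an aside, a 1GiB RAM disk like the one backing the storage nodes can be created with the brd kernel module; this is one possible approach, not necessarily the exact method used in our setup:

modprobe brd rd_nr=1 rd_size=1048576    # rd_size is in KiB, so this yields a 1GiB /dev/ram0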
Please note that no additional tuning was done to the network or storage stack, nor to iSCSI or DRBD; everything remained at the defaults of CentOS Linux 7.2.1511. The DRBD version used was 9.0.3-1, managed with DRBDmanage 0.96.1.
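With DRBDmanage, a diskless client is simply a resource assignment without local storage. Something along these lines sets one up; the resource and node names are placeholders, not the ones from our test cluster:

drbdmanage assign-resource r0 client-node --client    # assign r0 to client-node without a local replica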
We performed only three short benchmarks. The first was a simple throughput test via dd:
dd if=/dev/zero of=/dev/drbd100 bs=1M count=1024 oflag=direct
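The iSCSI side was measured the same way. Assuming the iSCSI LUN shows up as /dev/sdb on the initiator, as it does in the fio jobs below, the counterpart invocation would be:

dd if=/dev/zero of=/dev/sdb bs=1M count=1024 oflag=direct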
When writing 1MiB blocks sequentially, we see quite an improvement with DRBD; it is pretty much on par with what the back-end storage cluster can handle.
Random read I/O was the next test. For these tests, we leveraged the Flexible I/O tester (fio), running the job below against the iSCSI-attached device (/dev/sdb); the DRBD side used the same job against /dev/drbd100, as shown earlier:
fio --readwrite=randread --runtime=30s --blocksize=4k --ioengine=libaio \
--numjobs=16 --iodepth=16 --direct=1 --filename=/dev/sdb --name=test0 \
--group_reporting
[Chart: DRBD Diskless shows substantial performance improvements]
Next, we performed the same test, but with a random write workload:
fio --readwrite=randwrite --runtime=30s --blocksize=4k --ioengine=libaio \
--numjobs=16 --iodepth=16 --direct=1 --filename=/dev/sdb --name=test0 \
--group_reporting
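The DRBD side of this comparison used the same job parameters against the diskless device; assuming nothing else changes, that invocation looks like this:

fio --readwrite=randwrite --runtime=30s --blocksize=4k --ioengine=libaio \
--numjobs=16 --iodepth=16 --direct=1 --filename=/dev/drbd100 \
--name=test0 --group_reporting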
Again, the reduced overhead of DRBD's native communication is very much apparent.
While the DRBD transport performs better in these tests, there are still use cases where one might want to choose iSCSI; even then, it is worth at least considering a diskless DRBD client instead.