Diskless vs iSCSI

With DRBD 9 we introduced the concept of a client, or diskless, node. Such a node has no local storage of its own; it simply reads from and writes to the DRBD peers that do have storage, over DRBD's native transports. This is a useful feature and is often used to connect nova-compute nodes to Cinder storage nodes when using DRBD block storage.
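
For illustration, a DRBD 9 resource with three storage nodes and one diskless client could be configured roughly as in the sketch below. The hostnames, addresses, and volume paths are made up; the point is the "disk none;" statement on the client node, which is what makes it diskless.

    resource r0 {
        device    /dev/drbd0;
        meta-disk internal;

        on storage-a {
            disk    /dev/vg_ssd/r0;
            address 10.0.0.1:7000;
            node-id 0;
        }
        on storage-b {
            disk    /dev/vg_ssd/r0;
            address 10.0.0.2:7000;
            node-id 1;
        }
        on storage-c {
            disk    /dev/vg_ssd/r0;
            address 10.0.0.3:7000;
            node-id 2;
        }
        # The client node has no backing disk; all of its I/O goes over the network.
        on client {
            disk    none;
            address 10.0.0.4:7000;
            node-id 3;
        }

        connection-mesh {
            hosts storage-a storage-b storage-c client;
        }
    }

The storage nodes bring the resource up with the usual drbdadm create-md r0 and drbdadm up r0; the client only needs drbdadm up r0, since it has no local metadata to create.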

Our suspicion was that, thanks to its high-performance design, the DRBD transport would have less overhead than iSCSI, and should therefore yield better throughput.

For this test we used four Dell PowerEdge R710 servers. One was used solely as the DRBD client and iSCSI initiator. This system was noticeably more powerful than the others: it had two Intel(R) Xeon(R) E5520 CPUs @ 2.27GHz and 48 GiB of memory, whereas the remaining three storage nodes each had a single Intel(R) Xeon(R) E5504 CPU @ 2.00GHz and 36 GiB of memory. All systems were connected via ConnectX-4 InfiniBand cards through an InfiniBand switch. The storage used was two SATA SanDisk SSD Plus drives in a RAID 0 configuration.

All test systems were running Ubuntu 16.04 LTS (x86_64).

As our goal here was to measure throughput, we used an fio workload with 1MiB blocks, 4 threads, and an I/O depth of 16. That works out to 64MiB “in flight” for the duration of the test.
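
For reference, an fio invocation matching that description would look roughly like the one below. The target device, I/O direction, and runtime are placeholders; in each test we simply pointed it at the device in question.

    # 4 jobs x iodepth 16 x 1MiB blocks = 64MiB in flight
    fio --name=throughput --ioengine=libaio --direct=1 \
        --rw=write --bs=1M --numjobs=4 --iodepth=16 \
        --time_based --runtime=60 --group_reporting \
        --filename=/dev/drbd0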

First, we tested against the raw LVM volume that would serve as the backing disk for the DRBD device. This was used as a baseline to demonstrate the speed of the storage; no replication took place in this test. The raw LVM volume clocked in at 544MiB/s. Not too bad for two SATA disks.
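
As an aside, a RAID 0 plus LVM setup like the one described above takes only a few commands to build. Whether the striping was done with md, LVM, or the RAID controller isn't the interesting part here, so treat the md-based sketch below as just one plausible option (device names and sizes are hypothetical):

    # Stripe the two SSDs together and carve out an LV to back DRBD
    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb /dev/sdc
    pvcreate /dev/md0
    vgcreate vg_ssd /dev/md0
    lvcreate -L 200G -n r0 vg_ssd
    # The baseline fio run goes straight against /dev/vg_ssd/r0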

Then we tested the 3-node storage cluster itself, simply to see how much overhead was added by the 3-way DRBD replication. This test clocked in at a healthy 521MiB/s.

Next, we wanted to see how well this 3-node storage cluster performed when served out to a client via iSCSI. This test came in at 285MiB/s.
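
We haven't spelled out the iSCSI target implementation here; with the LIO target and targetcli, exporting the DRBD device would look roughly like the sketch below (the IQNs and portal address are made up). The client side uses open-iscsi.

    # On the storage node: export /dev/drbd0 as an iSCSI LUN
    targetcli /backstores/block create name=drbd0 dev=/dev/drbd0
    targetcli /iscsi create iqn.2017-01.com.example:storage.drbd0
    targetcli /iscsi/iqn.2017-01.com.example:storage.drbd0/tpg1/luns \
        create /backstores/block/drbd0
    targetcli /iscsi/iqn.2017-01.com.example:storage.drbd0/tpg1/acls \
        create iqn.2017-01.com.example:client

    # On the client: discover the target and log in
    iscsiadm -m discovery -t sendtargets -p 10.0.0.1
    iscsiadm -m node -T iqn.2017-01.com.example:storage.drbd0 -p 10.0.0.1 --login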

When exporting to DRBD
