RDMA Performance with Real Storage


As an update to the previous post, we now have the Tech Guide for RDMA performance with non-volatile storage available online. Just head over to the LINBIT Tech Guide area and read the HGST Ultrastar SN150 NVMe performance report! (Free registration required.)

RDMA Performance


As promised in the previous RDMA post, we gathered some performance data for the RDMA transport. Read and enjoy!

RDMA paths

As a follow-up to the previous post, here are some more technical details about DRBD 9 transports. RDMA can use multiple paths at the same time, giving both High Availability *and* bandwidth aggregation. The planned SCTP transport will provide multiple links for HA, too; plain TCP can do neither.
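For illustration, multiple paths per connection can be declared in a DRBD 9 resource file; this is only a sketch, with hostnames (alpha, bravo) and addresses made up for the example:

```
resource r0 {
  # ... volumes, options, etc. ...
  connection {
    # first network path between the two nodes
    path {
      host alpha address 192.168.41.1:7000;
      host bravo address 192.168.41.2:7000;
    }
    # second, independent path for HA / aggregation
    path {
      host alpha address 192.168.42.1:7000;
      host bravo address 192.168.42.2:7000;
    }
  }
}
```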

Performance numbers for DRBD

This post shows performance numbers for a pair of Dell PowerEdge R720 (0VWT90) servers, with 24 logical CPUs (E5-2640 0 @ 2.50GHz) and storage on a Fusion-io ioDrive2 (1.2TB). Typical values without DRBD (repeated a few times, nearly no variation):

# Reading:
# dd if=/dev/fioa of=/dev/zero bs=16M count=1k iflag=direct
1024+0 records in
1024+0 records out
17179869184 […]
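The same kind of measurement can be sketched on any machine, with a scratch file standing in for /dev/fioa and the sizes scaled down; the direct flags keep the page cache out of the numbers, which is the point of the benchmark:

```shell
# Create test data, then do the timed read pass. On filesystems without
# O_DIRECT support (e.g. tmpfs), drop the oflag=direct/iflag=direct flags.
dd if=/dev/zero of=testfile bs=16M count=4 oflag=direct   # write test data
dd if=testfile of=/dev/null bs=16M iflag=direct           # timed read pass
rm -f testfile
```

dd prints the elapsed time and throughput on stderr after each run.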

Pacemaker and dampen interval


http://lists.linux-ha.org/pipermail/linux-ha/2011-May/043010.html
http://www.gossamer-threads.com/lists/linuxha/dev/75722#75722

What is RDMA, and why should we care?


DRBD 9 has a new transport abstraction layer, and it is designed for speed; apart from SSOCKS and TCP, the next-generation link will be RDMA. So, what is RDMA, and how is it different from TCP?

Benchmarking DRBD


We often see people on #drbd or on the drbd-user mailing list trying to measure the performance of their setup. Here are a few best practices for doing so.
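As one common starting point (a sketch only; the device path and runtime are placeholders to adjust for your setup), an fio job file with direct I/O avoids the page cache inflating the results:

```ini
; hypothetical fio job: sequential 4k writes against a DRBD device
[seqwrite]
filename=/dev/drbd0   ; placeholder -- point this at your resource
rw=write
bs=4k
direct=1              ; bypass the page cache
ioengine=libaio
runtime=30
time_based=1
```

Run it a few times (e.g. `fio seqwrite.fio`); consistency across runs matters as much as the peak number.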