With the Pacemaker 1.1.15 release, the LINBIT cluster stack has been cutting-edge; now the distributions are catching up.
As an update to the previous post, we now have the Tech Guide for RDMA performance with non-volatile storage available online. Just head over to the LINBIT Tech Guide area and read the HGST Ultrastar SN150 NVMe performance report! (Free registration required.)
As promised in the previous RDMA post, we gathered some performance data for the RDMA transport. Read and enjoy!
As a follow-up to the previous post, here are some more technical details about DRBD 9 transports. RDMA can use multiple paths at the same time – as in High Availability *and* bandwidth aggregation. The planned SCTP transport will provide multiple links for HA, too, while TCP can do neither.
With the recent kernel upgrade to 2.6.32-279.11.1 there’s a new 32-bit incompatibility that leads to a kernel panic.
This post shows performance numbers for a pair of Dell PowerEdge R720 (0VWT90), with 24 logical CPUs (E5-2640 0 @ 2.50GHz) and storage on a Fusion-io ioDrive2 (1.2TB). Typical values without DRBD (repeated a few times, nearly no variation):

    # Reading:
    # dd if=/dev/fioa of=/dev/zero bs=16M count=1k iflag=direct
    1024+0 records in
    1024+0 records out
    17179869184 […]
DRBD 9 has a new transport abstraction layer, and it is designed for speed; apart from SSOCKS and TCP, the next-generation link will be RDMA. So, what is RDMA, and how is it different from TCP?
We often see people on #drbd or on drbd-user trying to measure the performance of their setup. Here are a few best practices for doing so.
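As a minimal sketch of the usual approach: measure sequential throughput with dd, using far more data than fits in RAM so caching cannot skew the result. The scratch file below is a stand-in so the commands run anywhere; on a real cluster you would point DEV at your DRBD device (e.g. /dev/drbd0 – hypothetical minor number) and add oflag=direct / iflag=direct to bypass the page cache, as the direct flags typically fail on plain files.

```shell
# Stand-in target: a scratch file. On real hardware, set DEV to the
# block device under test and use oflag=direct / iflag=direct.
DEV=$(mktemp)

# Sequential write throughput (16 MiB here for illustration; on real
# hardware use a total size well above the machine's RAM).
dd if=/dev/zero of="$DEV" bs=1M count=16

# Sequential read throughput.
dd if="$DEV" of=/dev/null bs=1M count=16

rm -f "$DEV"
```

Repeat each run a few times and compare the reported rates; large variation between runs usually means caching or background load is interfering with the measurement.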