DRBD’s read-balancing feature has been available since version 8.4.1. This feature enables DRBD® to balance read requests between the Primary and Secondary nodes. You configure read-balancing in the “disk” section of the configuration file(s).
On read-intensive workloads, balancing read I/O operations between the nodes can give you significant improvements. If your network gear allows it, you can effectively multiply read performance.
While writes occur on both nodes of the cluster, reads are served locally by default; that is, the default read-balancing policy is `prefer-local`. This might not be optimal if you have a big pipe to the other node and a heavily loaded local I/O subsystem.
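As a minimal sketch of what such a configuration might look like (the resource name `r0` and the chosen policy are placeholders, not a recommendation), the option goes into the disk section:

```
resource r0 {
    disk {
        # Any of the policies below can be used here,
        # e.g. prefer-local, round-robin, least-pending, 1M-striping
        read-balancing least-pending;
    }
}
```

After editing the configuration, the change can typically be applied with `drbdadm adjust r0`.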
Read-balancing with DRBD has several policy options to choose from:

- Read striping, e.g. `1M-striping`, chooses the node to read from by the block address. This is simple, static load-balancing: with `512K-striping`, for example, the first half of each MiByte would be read from one machine, and the second half from the other.
- The `round-robin` policy simply passes read requests to alternating nodes. This might go wrong if your application reads 4kiB, 1MiB, 4kiB, 1MiB, and so on, but such a pattern is fairly unlikely.
- The `least-pending` policy chooses the node with the smallest number of open requests.
- The `when-congested-remote` policy uses the remote node if the local I/O subsystem is congested, using the Linux kernel's `bdi_read_congested` to decide.
- The `prefer-remote` policy is implemented for completeness; as of this writing there is no viable use case.
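The `round-robin` pathology mentioned above can be sketched in a few lines. This is an illustrative simulation, not DRBD code; the node names and the strict alternation are assumptions for the example:

```python
from itertools import cycle

# Round-robin alternates nodes per request, ignoring request size.
nodes = cycle(["primary", "secondary"])

# The pathological pattern from the text: 4 KiB, 1 MiB, 4 KiB, 1 MiB, ...
requests = [4 * 1024, 1024 * 1024] * 4

bytes_per_node = {}
for size in requests:
    node = next(nodes)
    bytes_per_node[node] = bytes_per_node.get(node, 0) + size

print(bytes_per_node)
# One node ends up serving only the small 4 KiB reads while the other
# serves all the 1 MiB reads, so the load is not balanced at all.
```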
Please note that all this still happens below the file system layer, so even if the Secondary node is used for reading, this won't speed up a failover, as the pages read this way are not kept in any cache on the Secondary.
- Please note that the node is chosen by the start address of the request, and no request splitting is done. So if your file system and application work with a block size of 1MiByte, every striping setting lower than `1M-striping` would use the same node.
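The start-address point can be made concrete with a small sketch. The parity-based node choice and the node names here are illustrative assumptions, not DRBD's actual internal selection:

```python
MiB = 1024 * 1024

def striping_node(start_address: int, stripe_size: int) -> str:
    """Illustrative: pick a node from the request's start address
    under an N-byte striping policy (even stripe -> local)."""
    return "local" if (start_address // stripe_size) % 2 == 0 else "remote"

# 512K-striping: halves of each MiByte go to different nodes.
assert striping_node(0, 512 * 1024) == "local"
assert striping_node(768 * 1024, 512 * 1024) == "remote"

# But with 1 MiB-aligned requests, every start address sits on a MiB
# boundary, so 512K-striping always picks the same node...
starts = [n * MiB for n in range(8)]
assert all(striping_node(s, 512 * 1024) == "local" for s in starts)

# ...while 1M-striping still alternates between the two nodes.
assert {striping_node(s, MiB) for s in starts} == {"local", "remote"}
```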