DRBD allows for setting an explicit CPU mask for its kernel threads. This is particularly beneficial for applications which would otherwise compete with DRBD for CPU cycles.
The CPU mask is a number in whose binary representation the least
significant bit represents the first CPU, the second-least significant
bit the second, and so forth. A set bit in the bitmask implies that
the corresponding CPU may be used by DRBD, whereas a cleared bit means
it must not. Thus, for example, a CPU mask of 1 (00000001) means DRBD may use the first CPU only. A mask of 12 (00001100) implies DRBD may use the third and fourth CPU.
An example CPU mask configuration for a resource may look like this:

resource <resource> {
  options {
    cpu-mask 2;
    ...
  }
  ...
}
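After applying the configuration (for example with drbdadm adjust <resource>), you may want to verify which CPUs DRBD's kernel threads run on. The following one-liner is only a sketch; the exact thread names (such as drbd_w_<resource> or drbd0_worker) vary between DRBD versions, so the grep pattern is an assumption:

# List all threads with the CPU they last ran on (psr column), filtered to DRBD
ps -eLo pid,psr,comm | grep drbd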
Important: Of course, in order to minimize CPU competition between DRBD and the application using it, you need to configure your application to use only those CPUs which DRBD does not use.
Some applications may provide for this via an entry in a configuration file, just like DRBD itself. Others include an invocation of the taskset command in an application init script.
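As a sketch, assuming DRBD is confined to the first CPU (cpu-mask 1) on a four-CPU machine, an init script might pin the application to the remaining CPUs like this (the binary path is a placeholder):

# Restrict the application to CPUs 1-3, leaving CPU 0 to DRBD
taskset -c 1-3 /usr/local/bin/myapp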
When a block-based (as opposed to extent-based) filesystem is layered above DRBD, it may be beneficial to change the replication network’s maximum transmission unit (MTU) size to a value higher than the default of 1500 bytes. Colloquially, this is referred to as "enabling Jumbo frames".
Note: Block-based file systems include ext3, ReiserFS (version 3), and GFS. Extent-based file systems, in contrast, include XFS, Lustre, and OCFS2. Extent-based file systems are expected to benefit from enabling Jumbo frames only if they hold a small number of large files.
The MTU may be changed using the following commands:
ifconfig <interface> mtu <size>
or
ip link set <interface> mtu <size>
<interface> refers to the network interface used for DRBD replication. A typical value for <size> would be 9000 (bytes).
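For example, assuming the replication interface is eth1 (an assumption; substitute your actual interface), the following commands set and then read back an MTU of 9000 bytes:

ip link set eth1 mtu 9000
ip link show eth1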
When used in conjunction with high-performance hardware RAID controllers with write-back caching enabled, DRBD latency may benefit greatly from using the simple deadline I/O scheduler, rather than the CFQ scheduler. The latter is typically enabled by default in reasonably recent kernel configurations (post-2.6.18 for most distributions).
Modifications to the I/O scheduler configuration may be performed via the sysfs virtual file system, mounted at /sys. The scheduler configuration is in /sys/block/<device>, where <device> is the backing device DRBD uses.
Enabling the deadline scheduler works via the following command:
`echo deadline > /sys/block/<device>/queue/scheduler`
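To confirm the change, you can read back the same sysfs file; the currently active scheduler is shown in square brackets:

cat /sys/block/<device>/queue/scheduler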
You may then also set the following values, which may provide additional latency benefits:
echo 0 > /sys/block/<device>/queue/iosched/front_merges
echo 150 > /sys/block/<device>/queue/iosched/read_expire
echo 1500 > /sys/block/<device>/queue/iosched/write_expire
If these values effect a significant latency improvement, you may want to make them permanent so they are automatically set at system startup. Debian and Ubuntu systems provide this functionality via the sysfsutils package and the /etc/sysfs.conf configuration file.
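As a sketch, assuming the backing device is sda (an assumption; substitute your own device), the corresponding /etc/sysfs.conf entries might look like this. Paths are given relative to /sys:

# /etc/sysfs.conf -- applied at boot by the sysfsutils init script
block/sda/queue/scheduler = deadline
block/sda/queue/iosched/front_merges = 0
block/sda/queue/iosched/read_expire = 150
block/sda/queue/iosched/write_expire = 1500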
You may also make a global I/O scheduler selection by passing the elevator option via your kernel command line. To do so, edit your boot loader configuration (normally found in /boot/grub/menu.lst if you are using the GRUB bootloader) and add elevator=deadline to your list of kernel boot options.
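For illustration only, a menu.lst entry with the deadline scheduler selected globally might look like the following; the kernel version, device names, and paths are assumptions and will differ on your system:

# Illustrative GRUB (legacy) entry with elevator=deadline appended
title   Linux
root    (hd0,0)
kernel  /boot/vmlinuz-2.6.32 root=/dev/sda1 ro elevator=deadline
initrd  /boot/initrd.img-2.6.32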