Mounting RAID on Linux

Contents
  1. Software RAID in Linux
  2. What is RAID
  3. Creating a software RAID in Linux
  4. Step 1. Installing mdadm
  5. Step 2. Preparing the disks
  6. Step 3. Creating RAID 0
  7. Step 4. Testing RAID 0
  8. Step 5. RAID information
  9. Step 6. Saving the RAID array
  10. Step 7. Renaming the RAID array
  11. Step 8. Removing the RAID array
  12. Step 9. Creating a RAID 1 array
  13. Conclusions
  14. Mounting RAID on Linux
  15. Contents
  16. RAID levels
  17. Standard RAID levels
  18. Nested RAID levels
  19. RAID level comparison
  20. Implementation
  21. Which type of RAID do I have?
  22. Installation
  23. Prepare the devices
  24. Partition the devices
  25. GUID Partition Table
  26. Master Boot Record
  27. Build the array
  28. Update configuration file
  29. Assemble the array
  30. Format the RAID filesystem
  31. Calculating the stride and stripe width
  32. Mounting from a Live CD
  33. Installing Arch Linux on RAID
  34. Update configuration file
  35. Configure mkinitcpio
  36. Configure the boot loader
  37. Root device
  38. RAID0 layout
  39. RAID Maintenance
  40. Scrubbing
  41. General notes on scrubbing
  42. RAID1 and RAID10 notes on scrubbing
  43. Removing devices from an array
  44. Adding a new device to an array
  45. Increasing size of a RAID volume
  46. Change sync speed limits
  47. RAID5 performance
  48. Update RAID superblock
  49. Monitoring
  50. Watch mdstat
  51. Track IO with iotop
  52. Track IO with iostat
  53. Email notifications
  54. Troubleshooting
  55. Error: «kernel: ataX.00: revalidation failed»
  56. Start arrays read-only
  57. Recovering from a broken or missing drive in the raid
  58. Benchmarking

Software RAID in Linux

On heavily loaded production servers, hard drives and SSDs are not used individually but are combined into special arrays in which data is physically stored on several disks at once. This improves data safety when a disk fails and also increases write speed, because data can be written to several disks in parallel instead of just one, bypassing the speed limit of each individual disk. Such arrays are called RAID.

Dedicated hardware controllers are normally used to create RAID arrays. However, a RAID array can also be created without such a controller; such arrays are called software RAID. In this article we will look at how to create a software RAID in Linux.

What is RAID

RAID stands for Redundant Array of Inexpensive Disks. With this technology you can turn several physical hard drives into one virtual disk with increased capacity and data transfer speed. The capacity and performance of the resulting disk depend on the selected RAID mode. The following modes are available:

  • RAID 0 increases write speed. All disks of the array are used for writing data, so their speeds add up. For example, if you have three disks with a capacity of 512 GB and a write speed of 200 MB per second, combining them into RAID 0 gives you a virtual disk with a capacity of 1.5 TB and a maximum write speed of 600 MB per second.
  • RAID 1 improves data safety. During a write, the same data is written to all connected disks in parallel, so you end up with several copies of the same data. If one of the disks in the array fails, the system keeps working because the data is still present on another disk. For example, if you combine two 1 TB disks into RAID 1, you get one virtual disk with a capacity of 1 TB.
  • RAID 10 combines the two previous options. It requires at least four disks. Two RAID 1 arrays are created first, and a RAID 0 array is created on top of them to increase performance.

Of course, other modes exist as well, but these are the most popular. In this article we will look at how to create software RAID levels 0 and 1.

Creating a software RAID in Linux

Step 1. Installing mdadm

The mdadm utility is used to manage software RAID arrays in Linux. To install it on Ubuntu or Debian, run:

sudo apt install mdadm

To install the utility on CentOS/Fedora/RedHat, run:

sudo yum install mdadm

Step 2. Preparing the disks

The list of disks connected to the system can be viewed with the lsblk command.

In this article I will show how to combine three disks into a RAID, using /dev/sda, /dev/sdb and /dev/sdc as an example. First you need to decide whether to place the RAID directly on the disks or on partitions. Partitions are the better choice because they give more flexibility and safety. First, the operating system may overwrite the RAID superblock if it is placed directly on the disk. Second, if you dedicate the whole disk to the RAID, you may run into problems when replacing a disk: disks of the same nominal capacity usually differ slightly between manufacturers, so you would have to find a replacement with exactly the same real capacity. With a partition, you can simply create a partition of the required size.

First, create a partition table on each of the selected disks:

sudo parted /dev/sda mklabel msdos
sudo parted /dev/sdb mklabel msdos
sudo parted /dev/sdc mklabel msdos

If a partition table already exists on a disk, the program will warn you that creating a new one will erase all data on it. After creating the partition tables, create one partition on each disk. For example, let's create partitions of 460 gigabytes. The same parted command can be used for this:

sudo parted /dev/sda mkpart primary ext4 2048 460Gb
sudo parted /dev/sdb mkpart primary ext4 2048 460Gb
sudo parted /dev/sdc mkpart primary ext4 2048 460Gb

Now the disks are ready for placing a RAID on them.

Step 3. Creating RAID 0

To create a RAID array, run the mdadm command with the --create option and specify the array mode, the number of devices and the devices themselves. The syntax of the command is:

$ sudo mdadm --create /dev/array_name --level=mode --raid-devices=device_count device_list

sudo mdadm --create /dev/md0 --level=0 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1

After running this command you will see the raid device in lsblk. You can work with it just like with any regular partition in your system.

Step 4. Testing RAID 0

As an example, let's format the resulting device with the Ext4 file system, mount it and try writing files to it:

sudo mkfs -t ext4 /dev/md0
sudo mount /dev/md0 /mnt

Then the speed can be tested with dd:

sudo dd if=/dev/zero of=/mnt/file bs=1G count=5

As you can see, when writing 5 GB of data we get a speed of about 400 MB/s, which is already on par with an ordinary SSD.

Step 5. RAID information

Information about all RAID arrays created in the system can be found in the /proc/mdstat file.

This is how you can view the RAID status in Linux. More detailed information about the /dev/md0 array can be obtained with the mdadm utility itself:

sudo mdadm --detail /dev/md0

Among other things, this shows the state of the array. Detailed information about each device that belongs to the RAID can be viewed with the --examine option:

sudo mdadm --examine /dev/sda1 /dev/sdb1 /dev/sdc1

Step 6. Saving the RAID array

In principle, the RAID array already works and will continue to work after a reboot, because mdadm scans all disks, finds the array metadata and assembles the array. However, it is not guaranteed which name the program will assign to the resulting array or that all parameters will be restored correctly, so it is better to save the array configuration. Use the following command for this:

sudo mdadm --detail --scan --verbose | sudo tee -a /etc/mdadm/mdadm.conf

Then rebuild the initramfs so that it includes this array:

sudo update-initramfs -u

The resulting array can be used like a regular disk partition. For example, to mount it automatically at boot, add a line like this to /etc/fstab:

sudo vi /etc/fstab

/dev/md0 /mnt/ ext4 defaults 0 0

This completes the creation of a RAID array on Linux.

Step 7. Renaming the RAID array

If you skip the previous step and reboot the computer, you may end up with a RAID array named md127 instead of md0; the same kind of name can also be assigned to a second RAID array. To rename an array, it has to be reassembled. First stop the existing array:

sudo mdadm --stop /dev/md127

Then run the rename command. Its syntax is:

$ sudo mdadm --assemble --update=name --name=number /dev/md_number device_list

sudo mdadm --assemble --update=name --name=0 /dev/md0 /dev/sda1 /dev/sdb1 /dev/sdc1

After that, repeat the previous step to save the RAID device correctly.

Step 8. Removing the RAID array

If you no longer want your disks to be combined into a RAID, you can remove it. To do this, run:

sudo mdadm --remove /dev/md0

This removes the /dev/md0 array. To wipe the RAID metadata from the member partitions as well, zero their superblocks with mdadm --zero-superblock. Finally, delete or comment out the section for this RAID array in /etc/mdadm/mdadm.conf:

sudo vi /etc/mdadm/mdadm.conf

Step 9. Creating a RAID 1 array

Now you know how to create a RAID on Linux using RAID 0 as an example. Let's also look at RAID 1. The same command is used to create RAID 1 as for RAID 0, only a different array level is specified:

sudo mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1

Then you can make sure the RAID has been created by viewing information about it:

sudo mdadm --detail /dev/md0

The size of the device can be checked with lsblk.

As expected, the size has not increased, because copies of the data are written to all three disks. Now let's look at the speed.

The write speed is the same as that of a single disk. This is the price of data safety: if one of the disks fails, all the data will still be available.

Conclusions

In this article we looked at how to create a software RAID in Linux. As you can see, there is nothing particularly complicated about it. Software RAID may not be as fast as hardware RAID, but it fully solves the task of combining disks.

Mounting RAID on Linux

Redundant Array of Independent Disks (RAID) is a storage technology that combines multiple disk drive components (typically disk drives or partitions thereof) into a logical unit. Depending on the RAID implementation, this logical unit can be a file system or an additional transparent layer that can hold several partitions. Data is distributed across the drives in one of several ways called #RAID levels, depending on the level of redundancy and performance required. The RAID level chosen can thus prevent data loss in the event of a hard disk failure, increase performance or be a combination of both.

This article explains how to create/manage a software RAID array using mdadm.

Contents

RAID levels

Despite redundancy implied by most RAID levels, RAID does not guarantee that data is safe. A RAID will not protect data if there is a fire, the computer is stolen or multiple hard drives fail at once. Furthermore, installing a system with RAID is a complex process that may destroy data.

Standard RAID levels

There are many different levels of RAID; listed below are the most common.

RAID 0
Uses striping to combine disks. Even though it does not provide redundancy, it is still considered RAID. It does, however, provide a big speed benefit. If the speed increase is worth the possibility of data loss (for a swap partition, for example), choose this RAID level. On a server, RAID 1 and RAID 5 arrays are more appropriate. The size of a RAID 0 array block device is the size of the smallest component partition times the number of component partitions.

RAID 1
The most straightforward RAID level: straight mirroring. As with other RAID levels, it only makes sense if the partitions are on different physical disk drives. If one of those drives fails, the block device provided by the RAID array will continue to function as normal. The example will be using RAID 1 for everything except swap and temporary data. Please note that with a software implementation, the RAID 1 level is the only option for the boot partition, because bootloaders reading the boot partition do not understand RAID, but a RAID 1 component partition can be read as a normal partition. The size of a RAID 1 array block device is the size of the smallest component partition.

RAID 5
Requires 3 or more physical drives, and provides the redundancy of RAID 1 combined with the speed and size benefits of RAID 0. RAID 5 uses striping, like RAID 0, but also stores parity blocks distributed across each member disk. In the event of a failed disk, these parity blocks are used to reconstruct the data on a replacement disk. RAID 5 can withstand the loss of one member disk.

Nested RAID levels

RAID level comparison

RAID level | Data redundancy | Physical drive utilization | Read performance | Write performance | Min drives
0 | No | 100% | nX | nX | 2
1 | Yes | 50% | Up to nX if multiple processes are reading, otherwise 1X | 1X | 2
5 | Yes | 67%-94% | (n−1)X | (n−1)X | 3
6 | Yes | 50%-88% | (n−2)X | (n−2)X | 4
10,far2 | Yes | 50% | nX (best; on par with RAID 0 but redundant) | (n/2)X | 2
10,near2 | Yes | 50% | Up to nX if multiple processes are reading, otherwise 1X | (n/2)X | 2

* Where n stands for the number of dedicated disks.

Implementation

The RAID devices can be managed in different ways:

Software RAID
This is the easiest implementation as it does not rely on obscure proprietary firmware and software to be used. The array is managed by the operating system, for example by an abstraction layer such as mdadm.

Which type of RAID do I have?

Since software RAID is implemented by the user, the type of RAID is easily known to the user.

However, discerning between FakeRAID and true hardware RAID can be more difficult. As stated, manufacturers often incorrectly distinguish these two types of RAID, and false advertising is always possible. The best solution in this case is to run the lspci command and look through the output to find the RAID controller, then search for information about that controller. Hardware RAID controllers appear in this list, but FakeRAID implementations do not. Also, true hardware RAID controllers are often rather expensive, so if someone customized the system, it is very likely that choosing a hardware RAID setup made a very noticeable change in the computer's price.

Installation

Install mdadm. mdadm is used for administering pure software RAID using plain block devices: the underlying hardware does not provide any RAID logic, just a supply of disks. mdadm will work with any collection of block devices, even unusual ones. For example, one can make a RAID array from a collection of thumb drives.

Prepare the devices

If the device is being reused or re-purposed from an existing array, erase any old RAID configuration information:
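A typical way to do this is to zero the md superblock on the whole drive (the device name is a placeholder, adjust it to your setup):

mdadm --zero-superblock /dev/sdX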

or if a particular partition on a drive is to be deleted:
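For example, for a single partition (again, the name is a placeholder):

mdadm --zero-superblock /dev/sdX1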

Partition the devices

It is highly recommended to partition the disks to be used in the array. Since most RAID users are selecting disk drives larger than 2 TiB, GPT is required and recommended. See Partitioning for more information on partitioning and the available partitioning tools.

GUID Partition Table

  • After creating the partitions, their partition type GUIDs should be A19D880F-05FC-4D3B-A006-743F0F84911E (it can be assigned by selecting partition type Linux RAID in fdisk or FD00 in gdisk).
  • If a larger disk array is employed, consider assigning filesystem labels or partition labels to make it easier to identify an individual disk later.
  • Creating partitions that are of the same size on each of the devices is recommended.

Master Boot Record

For those creating partitions on HDDs with an MBR partition table, the partition type IDs available for use are:

  • 0xDA for non-FS data (Non-FS data in fdisk). This is the recommended mdadm partition type for RAID arrays on Arch Linux.
  • 0xFD for RAID autodetect arrays (Linux RAID autodetect in fdisk). This partition type should only be used if RAID autodetection is desirable (non-initramfs system, old mdadm metadata format).

Build the array

Use mdadm to build the array. See mdadm(8) for supported options. Several examples are given below.

The following example shows building a 2-device RAID1 array:
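A possible invocation (the array name and member partitions are placeholders) might look like this:

mdadm --create --verbose --level=1 --metadata=1.2 --raid-devices=2 /dev/md/MyRAID1Array /dev/sdb1 /dev/sdc1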

The following example shows building a RAID5 array with 4 active devices and 1 spare device:
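For example, with an assumed 256 KiB chunk size and placeholder device names:

mdadm --create --verbose --level=5 --metadata=1.2 --chunk=256 --raid-devices=4 /dev/md/MyRAID5Array /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 --spare-devices=1 /dev/sdf1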

The following example shows building a RAID10,far2 array with 2 devices:
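For example (placeholder device names; the far2 layout is selected with --layout=f2):

mdadm --create --verbose --level=10 --metadata=1.2 --chunk=512 --raid-devices=2 --layout=f2 /dev/md/MyRAID10Array /dev/sdb1 /dev/sdc1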

The array is created under the virtual device /dev/mdX, assembled and ready to use (in degraded mode). One can directly start using it while mdadm resyncs the array in the background. It can take a long time to restore parity. Check the progress with:
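cat /proc/mdstat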

Update configuration file

By default, most of mdadm.conf is commented out, and it contains just the following:
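DEVICE partitions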

This directive tells mdadm to examine the devices referenced by /proc/partitions and assemble as many arrays as possible. This is fine if you really do want to start all available arrays and are confident that no unexpected superblocks will be found (such as after installing a new storage device). A more precise approach is to explicitly add the arrays to /etc/mdadm.conf:
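mdadm --detail --scan >> /etc/mdadm.conf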

This results in something like the following:

This also causes mdadm to examine the devices referenced by /proc/partitions. However, only devices that have superblocks with a UUID of 27664… are assembled into active arrays.

See mdadm.conf(5) for more information.

Assemble the array

Once the configuration file has been updated the array can be assembled using mdadm:
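mdadm --assemble --scan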

Format the RAID filesystem

The array can now be formatted with a file system like any other partition, just keep in mind that:

  • Due to the large volume size not all filesystems are suited (see: Wikipedia:Comparison of file systems#Limits).
  • The filesystem should support growing and shrinking while online (see: Wikipedia:Comparison of file systems#Features).
  • One should calculate the correct stride and stripe-width for optimal performance.

Calculating the stride and stripe width

Two parameters are required to optimise the filesystem structure to fit optimally within the underlying RAID structure: the stride and stripe width. These are derived from the RAID chunk size, the filesystem block size, and the number of «data disks».

The chunk size is a property of the RAID array, decided at the time of its creation. mdadm's current default is 512 KiB. It can be found with mdadm:
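For example (assuming the array is /dev/md0):

mdadm --detail /dev/md0 | grep 'Chunk Size'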

The block size is a property of the filesystem, decided at its creation. The default for many filesystems, including ext4, is 4 KiB. See /etc/mke2fs.conf for details on ext4.

The number of «data disks» is the minimum number of devices in the array required to completely rebuild it without data loss. For example, this is N for a raid0 array of N devices and N-1 for raid5.

Once you have these three quantities, the stride and the stripe width can be calculated using the following formulas:
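stride = chunk size / block size
stripe width = number of data disks * stride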

Example 1. RAID0

Example formatting to ext4 with the correct stripe width and stride:

  • Hypothetical RAID0 array is composed of 2 physical disks.
  • Chunk size is 512 KiB.
  • Block size is 4 KiB.

stride = chunk size / block size. In this example, the math is 512/4 so the stride = 128.

stripe width = # of physical data disks * stride. In this example, the math is 2*128 so the stripe width = 256.
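A formatting command matching these numbers might look like this (a sketch; the target device /dev/md0 is assumed):

mkfs.ext4 -v -b 4096 -E stride=128,stripe-width=256 /dev/md0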

Example 2. RAID5

Example formatting to ext4 with the correct stripe width and stride:

  • Hypothetical RAID5 array is composed of 4 physical disks; 3 data disks and 1 parity disk.
  • Chunk size is 512 KiB.
  • Block size is 4 KiB.

stride = chunk size / block size. In this example, the math is 512/4 so the stride = 128.

stripe width = # of physical data disks * stride. In this example, the math is 3*128 so the stripe width = 384.
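Again as a sketch, assuming the array is /dev/md0:

mkfs.ext4 -v -b 4096 -E stride=128,stripe-width=384 /dev/md0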

For more on stride and stripe width, see: RAID Math.

Example 3. RAID10,far2

Example formatting to ext4 with the correct stripe width and stride:

  • Hypothetical RAID10 array is composed of 2 physical disks. Because of the properties of RAID10 in far2 layout, both count as data disks.
  • Chunk size is 512 KiB.
  • Block size is 4 KiB.

stride = chunk size / block size. In this example, the math is 512/4 so the stride = 128.

stripe width = # of physical data disks * stride. In this example, the math is 2*128 so the stripe width = 256.

Mounting from a Live CD

Users wanting to mount the RAID partition from a Live CD, use:
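A sketch, with the array device and member partitions as placeholders:

mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1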

If your RAID 1 array that is missing a disk was wrongly auto-detected as RAID 1 (as per mdadm --detail /dev/mdnumber) and reported as inactive (as per cat /proc/mdstat), stop the array first:
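For example, if it was auto-assembled as /dev/md127 (adjust the name):

mdadm --stop /dev/md127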

Installing Arch Linux on RAID

You should create the RAID array between the Partitioning and formatting steps of the Installation Procedure. Instead of directly formatting a partition to be your root file system, it will be created on a RAID array. Follow the section #Installation to create the RAID array. Then continue with the installation procedure until the pacstrap step is completed. When using UEFI boot, also read EFI system partition#ESP on software RAID1.

Update configuration file

After the base system is installed, the default configuration file mdadm.conf must be updated like so:
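Assuming the new system is mounted at /mnt, as in the installation guide:

mdadm --detail --scan >> /mnt/etc/mdadm.conf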

Always check the mdadm.conf configuration file using a text editor after running this command to ensure that its contents look reasonable.

Continue with the installation procedure until you reach the step Installation guide#Initramfs, then follow the next section.

Configure mkinitcpio

Add mdadm_udev to the HOOKS section of the mkinitcpio.conf to add support for mdadm into the initramfs image:
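For example (a sketch; your HOOKS line may contain additional hooks):

HOOKS=(base udev autodetect modconf block mdadm_udev filesystems fsck)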

If you use the mdadm_udev hook with a FakeRAID array, it is recommended to include mdmon in the BINARIES array:
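BINARIES=(mdmon)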

Configure the boot loader

Root device

Point the root parameter to the mapped device. E.g.:
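For instance, if the array was created as /dev/md/MyRAIDArray (the name is a placeholder):

root=/dev/md/MyRAIDArray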

If booting from a software raid partition fails using the kernel device node method above, an alternative way is to use one of the methods from Persistent block device naming, for example:
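For instance, using the filesystem's label (the label shown is a placeholder):

root=LABEL=rootarray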

RAID0 layout

Since version 5.3.4 of the Linux kernel, you need to explicitly tell the kernel which RAID0 layout should be used: RAID0_ORIG_LAYOUT (1) or RAID0_ALT_MULTIZONE_LAYOUT (2). [1] You can do this by providing the kernel parameter as follows:
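For example, to select the multizone layout:

raid0.default_layout=2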

The correct value depends upon the kernel version that was used to create the raid array: use 1 if created using kernel 3.14 or earlier, use 2 if using a more recent version of the kernel. One way to check this is to look at the creation time of the raid array:
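For example (assuming the array is /dev/md0):

mdadm --detail /dev/md0 | grep 'Creation Time'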

In the example this was taken from, the raid array was created on September 24, 2015. Linux kernel 3.14 was released on March 30, 2014, so this raid array was most likely created using the multizone layout (2).

RAID Maintenance

Scrubbing

It is good practice to regularly run data scrubbing to check for and fix errors. Depending on the size/configuration of the array, a scrub may take multiple hours to complete.

To initiate a data scrub:
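Assuming the array is /dev/md0:

echo check > /sys/block/md0/md/sync_action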

The check operation scans the drives for bad sectors and automatically repairs them. If it finds good sectors that contain bad data (that is, the data in a sector does not agree with what the data from another disk indicates it should be; for example, the parity block plus the other data blocks would suggest that this data block is incorrect), then no action is taken, but the event is logged (see below). This "do nothing" behaviour allows admins to inspect the data in the sector and the data that would be produced by rebuilding the sectors from redundant information, and pick the correct data to keep.

As with many tasks/items relating to mdadm, the status of the scrub can be queried by reading /proc/mdstat .

To stop a currently running data scrub safely:
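Again assuming /dev/md0:

echo idle > /sys/block/md0/md/sync_action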

When the scrub is complete, admins may check how many blocks (if any) have been flagged as bad:
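For /dev/md0:

cat /sys/block/md0/md/mismatch_cnt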

General notes on scrubbing

It is a good idea to set up a cron job as root to schedule a periodic scrub. See raid-check AUR, which can assist with this. To perform a periodic scrub using systemd timers instead of cron, see raid-check-systemd AUR, which contains the same script along with associated systemd timer unit files.

RAID1 and RAID10 notes on scrubbing

Due to the fact that RAID1 and RAID10 writes in the kernel are unbuffered, an array can have non-0 mismatch counts even when the array is healthy. These non-0 counts will only exist in transient data areas where they do not pose a problem. However, we cannot tell the difference between a non-0 count that is just in transient data or a non-0 count that signifies a real problem. This fact is a source of false positives for RAID1 and RAID10 arrays. It is however still recommended to scrub regularly in order to catch and correct any bad sectors that might be present in the devices.

Removing devices from an array

One can remove a device from the array after marking it as faulty:
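For example, marking /dev/sdc1 (a placeholder) as faulty in /dev/md0:

mdadm /dev/md0 --fail /dev/sdc1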

Now remove it from the array:
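Continuing the example:

mdadm /dev/md0 --remove /dev/sdc1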

Remove device permanently (for example, to use it individually from now on): Issue the two commands described above then:
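Using the same placeholder device:

mdadm --zero-superblock /dev/sdc1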

Stop using an array:

  1. Unmount the target array
  2. Stop the array with: mdadm --stop /dev/md0
  3. Repeat the three commands described at the beginning of this section on each device.
  4. Remove the corresponding line from /etc/mdadm.conf .

Adding a new device to an array

Adding new devices with mdadm can be done on a running system with the devices mounted. Partition the new device using the same layout as one of those already in the arrays as discussed above.

Assemble the RAID array if it is not already assembled:
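For example (member partitions are placeholders):

mdadm --assemble /dev/md0 /dev/sda1 /dev/sdb1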

Add the new device to the array:
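Assuming the new partition is /dev/sdc1:

mdadm /dev/md0 --add /dev/sdc1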

This should not take long for mdadm to do.

Depending on the type of RAID (for example, with RAID1), mdadm may add the device as a spare without syncing data to it. You can increase the number of disks the RAID uses by using --grow with the --raid-devices option. For example, to increase an array to four disks:
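Assuming the array is /dev/md0:

mdadm --grow /dev/md0 --raid-devices=4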

You can check the progress with:
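cat /proc/mdstat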

Check that the device has been added with the command:
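mdadm --detail /dev/md0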

Note that for RAID0 arrays this does not work: the above commands add the new disk as a "spare", but RAID0 does not have spares. If you want to add a device to a RAID0 array, you need to "grow" and "add" in the same command, as demonstrated below:
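A sketch, growing a two-disk RAID0 at /dev/md0 to three disks while adding /dev/sdc1:

mdadm --grow /dev/md0 --raid-devices=3 --add /dev/sdc1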

Increasing size of a RAID volume

If larger disks are installed in a RAID array or partition size has been increased, it may be desirable to increase the size of the RAID volume to fill the larger available space. This process may be begun by first following the above sections pertaining to replacing disks. Once the RAID volume has been rebuilt onto the larger disks it must be "grown" to fill the space.
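For an array at /dev/md0, a typical grow command (a sketch) is:

mdadm --grow /dev/md0 --size=max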

Next, partitions present on the RAID volume /dev/md0 may need to be resized. See Partitioning for details. Finally, the filesystem on the above mentioned partition will need to be resized. If partitioning was performed with gparted this will be done automatically. If other tools were used, unmount and then resize the filesystem manually.

Change sync speed limits

Syncing can take a while. If the machine is not needed for other tasks the speed limit can be increased.
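The current sync status and speed are shown in /proc/mdstat:

cat /proc/mdstat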

In the example this section was written from, the speed appeared to be limited to approximately 238 MB/s.

Check the current speed limit:
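cat /proc/sys/dev/raid/speed_limit_min
cat /proc/sys/dev/raid/speed_limit_max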

Set a new maximum speed of raid resyncing operations using sysctl:
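For example, to raise the cap to roughly 600 MB/s (the value is in KiB/s and is only an example):

sysctl -w dev.raid.speed_limit_max=600000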

Then check out the syncing speed and estimated finish time.

RAID5 performance

To improve RAID5 performance for fast storage (e.g. NVMe), increase /sys/block/mdx/md/group_thread_cnt to more threads. For example, to use 8 threads to operate on a RAID5 device:
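Assuming the device is /dev/md0:

echo 8 > /sys/block/md0/md/group_thread_cnt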

Update RAID superblock

To update the RAID superblock, you need to first unmount the array and then stop the array with the following command:
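Assuming the array is /dev/md0:

mdadm --stop /dev/md0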

Then you can update certain parameters by reassembling the array. For example, to update the homehost :
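A sketch (the hostname and member devices are placeholders):

mdadm --assemble --update=homehost --homehost=myhost /dev/md0 /dev/sda1 /dev/sdb1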

See the arguments of --update for details.

Monitoring

A simple one-liner that prints out the status of the RAID devices:
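One possibility (a sketch that prints each array name followed by its member status field from /proc/mdstat):

awk '/^md/ {printf "%s: ", $1}; /blocks/ {print $NF}' /proc/mdstat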

Watch mdstat
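For example:

watch -t 'cat /proc/mdstat'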

Or preferably using tmux
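A sketch that opens a small tmux pane running the same watch command:

tmux split-window -l 12 "watch -t 'cat /proc/mdstat'"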

Track IO with iotop

The iotop package displays the input/output stats for processes. Use this command to view the IO for raid threads.
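For example, show accumulated I/O of only the processes currently doing I/O, and look for the md/raid kernel threads (such as md0_raid5 or md0_resync) in the output:

iotop -ao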

Track IO with iostat

The iostat utility from sysstat package displays the input/output statistics for devices and partitions.
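For example, per-device statistics in megabytes, refreshed every second (the array name is a placeholder):

iostat -dmy 1 /dev/md0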

Email notifications

mdadm provides the systemd service mdmonitor.service which can be useful for monitoring the health of your raid arrays and notifying you via email if anything goes wrong.

This service is special in that it cannot be manually activated like a regular service; mdadm will take care of activating it via udev upon assembling your arrays on system startup, but it will only do so if an email address has been configured for its notifications (see below).

To enable this functionality, edit /etc/mdadm.conf and define the email address:
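MAILADDR user@domain.tld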

Then, to verify that everything is working as it should, run the following command:
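mdadm --monitor --scan --oneshot --test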

If the test is successful and the email is delivered, then you are done; the next time your arrays are reassembled, mdmonitor.service will begin monitoring them for errors.

Troubleshooting

If you get an error on reboot about "invalid raid superblock magic" and you have additional hard drives other than the ones you installed to, check that your hard drive order is correct. During installation, your RAID devices may be hdd, hde and hdf, but during boot they may be hda, hdb and hdc. Adjust your kernel line accordingly. This is what happened to me anyway.

Error: «kernel: ataX.00: revalidation failed»

If you suddenly start getting error messages like the one in the title of this section (for example after a reboot or after changing BIOS settings), it does not necessarily mean that a drive is broken. You will often find alarming links on the web which assume the worst; in a word, do not panic. Maybe you just changed APIC or ACPI settings in your BIOS or kernel parameters somehow. Change them back and you should be fine. Ordinarily, turning ACPI and/or APIC off should help.

Start arrays read-only

When an md array is started, the superblock will be written, and resync may begin. To start the array read-only, set the kernel module md_mod parameter start_ro. When this is set, new arrays get an 'auto-ro' mode, which disables all internal I/O (superblock updates, resync, recovery) and is automatically switched to 'rw' when the first write request arrives.

To set the parameter at boot, add md_mod.start_ro=1 to your kernel line.

Or set it at module load time from a file in /etc/modprobe.d/, or directly via /sys/:
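For example, directly via sysfs:

echo 1 > /sys/module/md_mod/parameters/start_ro

or from a file in /etc/modprobe.d/:

options md_mod start_ro=1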

Recovering from a broken or missing drive in the raid

You might also get the above-mentioned error when one of the drives breaks for whatever reason. In that case you will have to force the raid to start even with one disk missing. Type this (change where needed):
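A sketch, assuming the array is /dev/md0 and the remaining members are /dev/sdb1 and /dev/sdc1:

mdadm --assemble --run /dev/md0 /dev/sdb1 /dev/sdc1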

Now you should be able to mount it again with something like this (if you had it in fstab):
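mount /dev/md0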

Now the raid should be working again and available to use, although with one disk missing. To add that disk, partition it as described above in #Prepare the devices. Once that is done you can add the new disk to the raid by doing:
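Assuming the new partition is /dev/sdd1:

mdadm --manage /dev/md0 --add /dev/sdd1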

Looking at /proc/mdstat, you will probably see that the raid is now active and rebuilding.

You also might want to update your configuration (see: #Update configuration file).

Benchmarking

There are several tools for benchmarking a RAID. The most notable improvement is the speed increase when multiple threads are reading from the same RAID volume.

tiobench AUR [broken link: package not found] specifically benchmarks these performance improvements by measuring fully-threaded I/O on the disk.

bonnie++ tests database type access to one or more files, and creation, reading, and deleting of small files which can simulate the usage of programs such as Squid, INN, or Maildir format e-mail. The enclosed ZCAV program tests the performance of different zones of a hard drive without writing any data to the disk.

hdparm should not be used to benchmark a RAID, because it provides very inconsistent results.
