- LVM on software RAID
- Contents
- Introduction
- Swap space
- Boot loader
- Installation
- Load kernel modules
- Prepare the hard drives
- Partition hard drives
- RAID installation
- Synchronization
- Scrubbing
- LVM installation
- Create physical volumes
- Create the volume group
- Create logical volumes
- Update RAID configuration
- Prepare hard drive
- Configure system
- mkinitcpio.conf
- Conclusion
- Install the bootloader on the Alternate Boot Drives
- Syslinux
- GRUB legacy
- Archive your filesystem partition scheme
- Management
- HOW TO CONFIGURE LINUX LVM (LOGICAL VOLUME MANAGER) USING SOFTWARE RAID 5
- Introduction
- Follow the Steps Below to Configure Linux LVM on a Software RAID 5 Partition
- Configure Software RAID 5
- Configure Linux LVM on Software RAID 5 Partition
LVM on software RAID
This article will provide an example of how to install and configure Arch Linux with Logical Volume Manager (LVM) on top of a software RAID.
Contents
Introduction
Although RAID and LVM may seem like analogous technologies, they each present unique features. This article uses an example with three similar 1TB SATA hard drives. The article assumes that the drives are accessible as /dev/sda, /dev/sdb, and /dev/sdc. If you are using IDE drives, for maximum performance make sure that each drive is a master on its own separate channel.
LVM Logical Volumes | / | /var | /swap | /home |
LVM Volume Groups | /dev/VolGroupArray |
RAID Arrays | /dev/md0 | /dev/md1 |
Physical Partitions | /dev/sda1 | /dev/sdb1 | /dev/sdc1 | /dev/sda2 | /dev/sdb2 | /dev/sdc2 |
Hard Drives | /dev/sda | /dev/sdb | /dev/sdc |
Swap space
Many tutorials treat the swap space differently, either by creating a separate RAID1 array or an LVM logical volume. Creating the swap space on a separate array is not intended to provide additional redundancy, but rather to prevent a corrupt swap space from rendering the system inoperable, which is more likely to happen when the swap space is located on the same partition as the root directory.
Boot loader
This tutorial will use Syslinux instead of GRUB. GRUB, when used in conjunction with GPT, requires an additional BIOS boot partition.
GRUB supports the default style of metadata currently created by mdadm (i.e. 1.2) when combined with an initramfs, which in Arch Linux is generated with mkinitcpio. Syslinux only supports version 1.0, and therefore requires the --metadata=1.0 option.
Some boot loaders (e.g. GRUB Legacy, LILO) will not support any 1.x metadata versions, and instead require the older version, 0.90. If you would like to use one of those boot loaders make sure to add the option --metadata=0.90 to the /boot array during RAID installation.
Installation
Obtain the latest installation media and boot the Arch Linux installer as outlined in Getting and installing Arch.
Load kernel modules
Load the appropriate RAID (e.g. raid0, raid1, raid5, raid6, raid10) and LVM (i.e. dm-mod) modules. The following example makes use of RAID1 and RAID5.
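A minimal sketch of loading these modules; on current kernels the RAID5/RAID6 personality is provided by the raid456 module, so the exact module names are an assumption to adjust to your kernel:
```
# RAID personalities used in this example
modprobe raid1
modprobe raid456
# device-mapper module required by LVM
modprobe dm_mod
```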
Prepare the hard drives
Each hard drive will have a 200 MiB /boot partition, 2048 MiB /swap partition, and a / partition that takes up the remainder of the disk.
The boot partition must be RAID1; i.e. it cannot be striped (RAID0) or RAID5, RAID6, etc. This is because GRUB does not have RAID drivers. Any other level will prevent your system from booting. Additionally, if there is a problem with one boot partition, the boot loader can boot normally from the other two partitions in the /boot array.
Partition hard drives
We will use gdisk to create three partitions on each of the three hard drives (i.e. /dev/sda, /dev/sdb, /dev/sdc):
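gdisk is interactive; as a sketch, the same layout can also be scripted with sgdisk from the same gptfdisk package. The sizes follow the scheme above, fd00 is the GPT type code for Linux RAID, and the partition numbering (1 = /boot, 2 = swap, 3 = /) is an assumption used in the examples below:
```
for drive in /dev/sda /dev/sdb /dev/sdc; do
    sgdisk --zap-all "$drive"                          # start with an empty GPT
    sgdisk -n 1:0:+200M  -t 1:fd00 -c 1:boot "$drive"  # /boot member
    sgdisk -n 2:0:+2048M -t 2:fd00 -c 2:swap "$drive"  # swap member
    sgdisk -n 3:0:0      -t 3:fd00 -c 3:root "$drive"  # / member, rest of the disk
done
```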
RAID installation
After creating the physical partitions, you are ready to set up the /boot, /swap, and / arrays with mdadm, an advanced tool for RAID management that will also be used to create /etc/mdadm.conf within the installation environment.
Create the / array at /dev/md0 :
Create the /swap array at /dev/md1 :
Create the /boot array at /dev/md2 :
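A sketch of the three commands, assuming the partition numbering above and the RAID levels mentioned earlier (RAID5 for / and swap, RAID1 with 1.0 metadata for /boot as discussed in the Boot loader section):
```
# / array
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda3 /dev/sdb3 /dev/sdc3
# swap array
mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2
# /boot array
mdadm --create /dev/md2 --level=1 --raid-devices=3 --metadata=1.0 /dev/sda1 /dev/sdb1 /dev/sdc1
```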
Synchronization
After you create a RAID volume, it will synchronize the contents of the physical partitions within the array. You can monitor the progress by refreshing the output of /proc/mdstat ten times per second with:
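For example, with watch refreshing every tenth of a second:
```
watch -n .1 cat /proc/mdstat
```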
Further information about the arrays is accessible with:
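For example, for the first array (adjust the device name as needed):
```
mdadm --detail /dev/md0
```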
Once synchronization is complete, the State line should read clean. Each device in the table at the bottom of the output should read spare or active sync in the State column; active sync means the device is actively participating in the array.
Scrubbing
It is good practice to regularly run data scrubbing to check for and fix errors.
To initiate a data scrub:
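For example, assuming the array to be checked is /dev/md0:
```
echo check > /sys/block/md0/md/sync_action
```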
As with many tasks/items relating to mdadm, the status of the scrub can be queried:
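For example, either of the following (again assuming /dev/md0):
```
cat /proc/mdstat
cat /sys/block/md0/md/sync_action
```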
To stop a currently running data scrub safely:
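For example:
```
echo idle > /sys/block/md0/md/sync_action
```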
When the scrub is complete, admins may check how many blocks (if any) have been flagged as bad:
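For example:
```
cat /sys/block/md0/md/mismatch_cnt
```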
The check operation scans the drives for bad sectors and mismatches. Bad sectors are repaired automatically. A mismatch is a good sector that contains bad data, i.e. the data in the sector does not agree with what the data from the other disks indicates it should be (for example, the parity block plus the other data blocks would suggest that this data block is incorrect). When a mismatch is found, no action is taken, but the event is logged (see below). This "do nothing" behaviour allows admins to inspect the data in the sector and the data that would be produced by rebuilding the sector from the redundant information, and to pick the correct data to keep.
General Notes on Scrubbing
It is a good idea to set up a cron job as root to schedule a periodic scrub. See raid-check AUR which can assist with this.
RAID1 and RAID10 Notes on Scrubbing
Because RAID1 and RAID10 writes in the kernel are unbuffered, an array can have non-zero mismatch counts even when it is healthy. These non-zero counts only exist in transient data areas, where they do not pose a problem. However, we cannot tell the difference between a non-zero count that is just in transient data and a non-zero count that signifies a real problem, so scrubbing is a source of false positives on RAID1 and RAID10 arrays. It is nevertheless recommended to scrub regularly in order to catch and correct any bad sectors that might be present on the devices.
LVM installation
This section will convert the two RAIDs into physical volumes (PVs). Those PVs will then be combined into a volume group (VG), and the VG will be divided into logical volumes (LVs) that will act like physical partitions (e.g. /, /var, /home). If you did not understand that, make sure you read the LVM Introduction section.
Create physical volumes
Make the RAIDs accessible to LVM by converting them into physical volumes (PVs) using the following command. Repeat this action for each of the RAID arrays created above.
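A sketch for the / and swap arrays created above (the /boot array is normally kept outside LVM):
```
pvcreate /dev/md0
pvcreate /dev/md1
```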
Confirm that LVM has added the PVs with:
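For example:
```
pvdisplay
```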
Create the volume group
The next step is to create a volume group (VG) on the PVs.
Create a volume group (VG) with the first PV:
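A sketch using the VG name from the diagram above; if the second PV is to be part of the same VG, it can be added afterwards with vgextend:
```
vgcreate VolGroupArray /dev/md0
# optionally add the second PV to the same VG
vgextend VolGroupArray /dev/md1
```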
Confirm that LVM has added the VG with:
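For example:
```
vgdisplay
```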
Create logical volumes
In this example we will create separate /, /var, /swap and /home LVs. The LVs will be accessible under /dev/VolGroupArray/.
Create a /var LV:
Create a /home LV that takes up the remainder of space in the VG:
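A combined sketch of the lvcreate calls for this layout. The sizes, and every LV name except lvhome (which appears later in this article), are placeholders to adjust to your disks:
```
lvcreate -L 20G VolGroupArray -n lvroot        # /
lvcreate -L 2G  VolGroupArray -n lvswap        # swap
lvcreate -L 15G VolGroupArray -n lvvar         # /var
lvcreate -l 100%FREE VolGroupArray -n lvhome   # /home, remainder of the VG
```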
Confirm that LVM has created the LVs with:
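For example:
```
lvdisplay
```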
Update RAID configuration
Since the installer builds the initrd using /etc/mdadm.conf in the target system, you should update that file with your RAID configuration. The original file can simply be deleted, because it only contains comments on how to fill it in correctly, and that is something mdadm can do automatically for you. So let us delete the original and have mdadm create a new one with the current setup:
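A sketch of that step; depending on the stage of the installation, the path may need to point at the target system (e.g. /mnt/etc/mdadm.conf), which is an assumption to adjust to your setup:
```
rm /etc/mdadm.conf
mdadm --examine --scan > /etc/mdadm.conf
```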
Prepare hard drive
Follow the directions outlined in the #Installation section until you reach the Prepare Hard Drive section. Skip the first two steps and navigate to the Manually Configure block devices, filesystems and mountpoints page. Remember to only configure the LVs (e.g. /dev/VolGroupArray/lvhome) and not the actual disks (e.g. /dev/sda1).
Configure system
mkinitcpio.conf
mkinitcpio can use a hook to assemble the arrays on boot. For more information see mkinitcpio Using RAID. Add the mdadm_udev and lvm2 hooks to the HOOKS array in /etc/mkinitcpio.conf after udev .
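The resulting line might look something like the following; the surrounding hooks are only a common example set, and the relevant part is that mdadm_udev and lvm2 come after udev and before filesystems:
```
HOOKS=(base udev autodetect modconf block mdadm_udev lvm2 filesystems keyboard fsck)
```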
Conclusion
Once it is complete you can safely reboot your machine:
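For example:
```
reboot
```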
Install the bootloader on the Alternate Boot Drives
Once you have successfully booted your new system for the first time, you will want to install the bootloader onto the other two disks (or on the other disk if you have only 2 HDDs) so that, in the event of disk failure, the system can be booted from any of the remaining drives (e.g. by switching the boot order in the BIOS). The method depends on the bootloader system you are using:
Syslinux
Log in to your new system as root and do:
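A sketch using the helper script shipped with the Arch syslinux package (flags: install, mark the partition active, write the MBR boot code):
```
syslinux-install_update -i -a -m
```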
Syslinux will deal with installing the bootloader to the MBR on each of the members of the RAID array.
GRUB legacy
Log in to your new system as root and do:
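A sketch of the classic GRUB Legacy shell procedure, repeated once per additional disk; the device and partition names here are examples:
```
grub
grub> device (hd0) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
grub> quit
```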
Archive your filesystem partition scheme
Now that you are done, it is worth taking a second to archive off the partition state of each of your drives. This guarantees that it will be trivially easy to replace/rebuild a disk in the event that one fails. See fdisk#Backup and restore partition table.
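For the GPT disks used here, sgdisk can dump each partition table to a file; a sketch, with arbitrary output file names:
```
sgdisk --backup=/root/sda-partition-table.bak /dev/sda
sgdisk --backup=/root/sdb-partition-table.bak /dev/sdb
sgdisk --backup=/root/sdc-partition-table.bak /dev/sdc
```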
Management
For further information on how to maintain your software RAID or LVM, review the RAID and LVM articles.
HOW TO CONFIGURE LINUX LVM (LOGICAL VOLUME MANAGER) USING SOFTWARE RAID 5
by Balamukunda Sahu · Published June 14, 2017 · Updated June 14, 2017
Introduction
In this article we are going to learn how to configure Linux LVM on a Software RAID 5 partition. Software RAID 5 and LVM are two of the most useful and important features of Linux: RAID 5 uses striping with parity to store data across the hard disks, while Linux LVM (Logical Volume Manager) is used to extend, resize and rename Logical Volumes. The purpose of configuring Linux LVM on a RAID 5 partition is to take the benefit of both features and make the data more secure. Refer to the diagram below:
[Diagram: Linux LVM (Logical Volume Manager) configured on top of a Software RAID 5 array]
For more on Linux LVM and Software RAID, read the articles below:
- HOW TO CONFIGURE RAID 5 (SOFTWARE RAID) IN LINUX USING MDADM
- HOW TO INCREASE EXISTING SOFTWARE RAID 5 STORAGE CAPACITY IN LINUX
- HOW TO CONFIGURE SOFTWARE RAID 1 (DISK MIRRORING) USING MDADM IN LINUX
Follow the Steps Below to Configure Linux LVM on a Software RAID 5 Partition:
Configure Software RAID 5
As a first step we have to configure Software RAID 5. As we all know, a minimum of three hard disks is required to configure it. Here I have three hard disks, i.e. /dev/sdb, /dev/sdc and /dev/sdd. Refer to the sample output below.
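For example, the attached disks can be listed with either of the following (the actual output is omitted here):
```
lsblk
fdisk -l | grep '^Disk /dev/sd'
```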
So let's go ahead and create a partition on each hard disk and change the partition ID to the Software RAID type, i.e. "fd".
Partitioning the Disk: /dev/sdb
Partitioning the Disk: /dev/sdc
Partitioning the Disk: /dev/sdd
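A sketch of the interactive fdisk session for the first disk; the same steps are repeated for /dev/sdc and /dev/sdd:
```
fdisk /dev/sdb
# n - create a new primary partition spanning the whole disk
# t - change the partition type; enter "fd" (Linux raid autodetect)
# p - print the partition table to verify
# w - write the changes and exit
```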
So we have successfully created three partitions, i.e. /dev/sdb1, /dev/sdc1 and /dev/sdd1, and changed their partition IDs for Software RAID. Refer to the sample output below.
Now our next step is to create and start the Software RAID 5 array. To do so, refer to the command below.
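A sketch of the mdadm command for a three-disk RAID 5 array built from the partitions created above:
```
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
```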
After creating and starting the Software RAID 5 array you will see a new partition in your partition list. Refer to the output below.
To check the details of the Software RAID 5 partition you can use the mdadm command with the --detail argument. Refer to the command below.
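For example:
```
mdadm --detail /dev/md0
```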
After configuring Software RAID 5 we have to save the configuration in the /etc/mdadm.conf file, otherwise the configuration will be lost when you restart the system. To do so you can use the command below.
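A sketch; --detail --scan prints ARRAY lines for the currently assembled arrays:
```
mdadm --detail --scan >> /etc/mdadm.conf
```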
Confirm the saved Software RAID 5 configuration in the /etc/mdadm.conf file.
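For example:
```
cat /etc/mdadm.conf
```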
Configure Linux LVM on Software RAID 5 Partition
Now we are all set to configure Linux LVM (Logical Volume Manager) on the Software RAID 5 partition, i.e. /dev/md0. Let's go ahead and create a Physical Volume using the RAID 5 partition.
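For example:
```
pvcreate /dev/md0
```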
You can check the details of the Physical Volume using the pvdisplay command. Refer to the sample output below.
After creating the Physical Volume, our second step toward the Linux LVM configuration is to create a Volume Group using that Physical Volume. To do so we have to use the vgcreate command.
Here I am creating my Volume Group with the name vgroup001 using the vgcreate command. Refer to the sample output below.
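For example:
```
vgcreate vgroup001 /dev/md0
```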
Now we have a Volume Group, i.e. vgroup001. So let's go ahead and create Logical Volumes. Here I am going to create two Logical Volumes named lvolume001 and lvolume002.
Creating the first Logical Volume, i.e. lvolume001 (size: 2 GB):
Creating the second Logical Volume, i.e. lvolume002 (size: 1 GB):
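A sketch of both lvcreate commands, using the names and sizes stated above:
```
lvcreate -L 2G -n lvolume001 vgroup001
lvcreate -L 1G -n lvolume002 vgroup001
```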
To check the details of the Logical Volumes you can use the lvdisplay command. Refer to the sample output below.
After creating the Logical Volumes we have to format both of them to create file systems. Here I am formatting my first Logical Volume using the ext4 file system.
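For example:
```
mkfs.ext4 /dev/vgroup001/lvolume001
```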
Now create a directory and give it appropriate permissions so the Logical Volume can be mounted on it. Here I am creating a directory named /lvm and giving full access to all.
You can mount the Logical Volume temporarily using the command below.
Confirm the mount points.
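A sketch of these steps; the directory name and the 777 (full access) permissions follow the description above:
```
mkdir /lvm
chmod 777 /lvm
mount /dev/vgroup001/lvolume001 /lvm
df -h /lvm    # confirm the mount point
```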
For permanent mounting you have to make an entry in the /etc/fstab file. Here I made an entry as per my LVM setup. Refer to the sample below.
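The entry might look something like this (the ext4 file system and default mount options are assumptions based on the steps above):
```
/dev/vgroup001/lvolume001    /lvm     ext4    defaults    0 0
```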
Now refresh all mount points using the command below.
You can also unmount and then mount the Logical Volume again using the commands below.
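For example:
```
mount -a       # re-read /etc/fstab and mount anything not yet mounted
umount /lvm    # or unmount the Logical Volume ...
mount /lvm     # ... and mount it again from its fstab entry
```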
As you can see below, lvolume001 is now ready to store data.
Now let's go ahead and format our second Logical Volume, i.e. lvolume002. Refer to the command below.
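For example:
```
mkfs.ext4 /dev/vgroup001/lvolume002
```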
Create a directory named /lvm2 and then temporarily mount the Logical Volume using the commands below.
Confirm the mount points.
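A sketch of these steps:
```
mkdir /lvm2
mount /dev/vgroup001/lvolume002 /lvm2
df -h /lvm2    # confirm the mount point
```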
For permanent mounting, add the line below to the /etc/fstab file.
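Something like the following, mirroring the first entry:
```
/dev/vgroup001/lvolume002    /lvm2    ext4    defaults    0 0
```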
After creating the two Logical Volumes, let's check what has changed in the Physical Volume and the Volume Group.
As you can see below, the Total PE of the Physical Volume is 1500 and the available Free PE is 732. PE stands for Physical Extent. For more information on Linux LVM I have already written an article on LVM (Logical Volume Manager) configuration.
Let’s check the Volume Group information after creating two Logical Volumes.
As you can see below, Cur LV = 2 (current Logical Volumes), Open LV = 2 (open Logical Volumes), Total PE = 1500 and available Free PE = 732.
After completing the LVM configuration we have to save the Volume Group configuration and activate the Logical Volumes. We can do so using the vgchange command. Refer to the command below.
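For example, activating all Logical Volumes in the Volume Group:
```
vgchange -a y vgroup001
```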
Now I want to test something interesting. Let's first store some data on both Logical Volumes. Here I am creating some files in both Logical Volumes, i.e. in /lvm and /lvm2.
Creating files in the first Logical Volume:
Creating files in the second Logical Volume:
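A sketch of both steps; the file names are arbitrary test files:
```
touch /lvm/file{1..5}
touch /lvm2/file{1..5}
ls /lvm /lvm2
```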
Actually, I want to know what happens if one of the three hard disks in the Software RAID 5 array fails, and what impact that has on my LVM setup and the available data. For that I want to make one hard disk fail deliberately. To do so, refer to the command below. Here I am failing the partition /dev/sdb1.
Confirm the failed hard disk.
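A sketch of marking the member as failed and then confirming its state:
```
mdadm --manage /dev/md0 --fail /dev/sdb1
mdadm --detail /dev/md0    # /dev/sdb1 should now be listed as faulty
```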
Now remove the failed hard disk using the command below.
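For example:
```
mdadm --manage /dev/md0 --remove /dev/sdb1
```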
Now we have to add a new hard disk as a replacement for the faulty one. Here I have a spare hard disk, i.e. /dev/sde. To add the new hard disk we follow the same process we used before when configuring Software RAID 5.
So create a partition on the /dev/sde hard disk and change its partition ID to the Software RAID type, i.e. "fd".
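The fdisk steps are the same as in the earlier session, now applied to /dev/sde:
```
fdisk /dev/sde
# n - new partition, t - type "fd" (Linux raid autodetect), w - write and exit
```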
Now add the new partition to the Software RAID 5 array using the command below.
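For example:
```
mdadm --manage /dev/md0 --add /dev/sde1
```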
Confirm whether the hard disk was properly added to the Software RAID 5 array using the command below.
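For example; the rebuild progress can also be watched in /proc/mdstat:
```
mdadm --detail /dev/md0
cat /proc/mdstat
```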
Then check the mount table to verify that the mount points are still present.
As you can see in the output above, both Logical Volumes are safe and look good. Now let's check the data.
As you can see below, the data is also safe and we have not lost anything.