Software Raid

On Linux, software RAID can be created either with mdadm or with LVM.

Raid with mdadm

First install the mdadm package:
#yum install mdadm -y

Now we check that our devices /dev/sdb and /dev/sdc are accessible and carry no superblock information:
[root@client boot]# lsblk -fs
NAME        FSTYPE      LABEL UUID                                   MOUNTPOINT
sda1        xfs               a2a5d747-5ca9-4ae8-b325-c06042ad10c7   /boot
└─sda                                                                
sdb                                                                  
sdc                                                                  
sr0                                                                  
centos-root xfs               a2da5a79-2272-4bef-8b48-bdac794d859a   /
└─sda2      LVM2_member       yrdZq9-y6dI-jLSA-tyLd-psER-v5bQ-hXEzzx 
  └─sda                                                              
centos-swap swap              db3864a5-d37f-4b39-adea-38655f3c13e2   [SWAP]
└─sda2      LVM2_member       yrdZq9-y6dI-jLSA-tyLd-psER-v5bQ-hXEzzx 
  └─sda                                                              

When creating a RAID for the boot partition, use the parameter '--metadata 1.0', because booting from metadata 1.2 requires booting from an initramfs (so you would need to regenerate it).
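
If you do need to regenerate the initramfs on CentOS, that is typically done with dracut; a minimal hedged sketch for the running kernel:
#dracut -f #rebuilds the initramfs for the currently running kernel
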
Now we can create a RAID 1 device:
#mdadm --create --verbose /dev/md0 --metadata 1.0 --level=mirror --raid-devices=2 /dev/sdb /dev/sdc 

For RAID 0, use:
--level=stripe #creates RAID 0

Then save the array configuration:
mdadm --detail --scan >> /etc/mdadm.conf #saves the RAID configuration
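
For reference, the line appended to /etc/mdadm.conf typically looks something like the following; the UUID here is the one reused in the --assemble example further below, and the exact fields may vary:
ARRAY /dev/md0 metadata=1.0 UUID=eb5c2895:55fc0bee:4fc1e7d4:6646d120
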
#cat /proc/mdstat 
Personalities : [raid1] 
md0 : active raid1 sdc[1] sdb[0]
      1048512 blocks super 1.0 [2/2] [UU]
      
unused devices: <none>

To make the device /dev/md0 usable, create a filesystem on it, update /etc/fstab, and mount it (or reboot).
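
A minimal sketch, assuming an ext4 filesystem and a /data mount point (both arbitrary choices for this example):
#mkfs.ext4 /dev/md0
#mkdir /data
#echo '/dev/md0 /data ext4 defaults 0 0' >> /etc/fstab
#mount -a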

Stopping Raid:

#mdadm --stop /dev/md0
mdadm: stopped /dev/md0

Automatically start all raids:
# mdadm --assemble --scan
mdadm: /dev/md0 has been started with 2 drives.

#OR

#mdadm --assemble /dev/md0 --uuid=eb5c2895:55fc0bee:4fc1e7d4:6646d120
mdadm: /dev/md0 has been started with 2 drives.

#other commands

mdadm --zero-superblock /dev/sdb #clears the RAID superblock from a device

mdadm --add /dev/md0 /dev/sdd #adds a new disk to the array after one has failed

mdadm --detail /dev/md0 #shows detailed information about the array

mdadm --manage /dev/md0 --fail /dev/sdb #simulate the failure of one disk
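
Putting --fail and --add together, a hedged sketch of the full replacement cycle, assuming /dev/sdb has failed and /dev/sdd is the new disk (matching the device names above); the --remove step is not shown above but belongs to the standard cycle:
#mdadm --manage /dev/md0 --fail /dev/sdb   #mark the disk as failed
#mdadm --manage /dev/md0 --remove /dev/sdb #remove it from the array
#mdadm --manage /dev/md0 --add /dev/sdd    #add the replacement disk
#cat /proc/mdstat                          #watch the rebuild progress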

In case you didn't save your configuration to /etc/mdadm.conf, you can rediscover the arrays with:
#mdadm --examine --scan

Now if you remove one of the disks, your data is still accessible (or, in the case of a boot array, the computer is still able to boot).

Raid with LVM
First we need to label our two disks /dev/sdb and /dev/sdc as physical volumes. Then we create a volume group from these two disks and finally a logical volume mirrored across both of them with RAID 1.

#pvcreate /dev/sdb
#pvcreate /dev/sdc
#vgcreate mirror /dev/sdb /dev/sdc
#lvcreate  -l 100%FREE  -m1 -n raid1 mirror
#lvdisplay
--- Logical volume ---
  LV Path                /dev/mirror/raid1
  LV Name                raid1
  VG Name                mirror
  LV UUID                51xO74-qtkA-1aBd-drE0-EuXA-NkiM-VfyfQA
  LV Write Access        read/write
  LV Creation host, time client.local, 2018-10-11 04:14:09 -0400
  LV Status              available
  # open                 0
  LV Size                1016,00 MiB
  Current LE             254
  Mirrored volumes       2
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:6

#mkfs.ext4 /dev/mapper/mirror-raid1
#mount /dev/mapper/mirror-raid1 /mnt
#blkid
/dev/sdb: UUID="q5XgmS-CjpD-UpEU-K0kN-TP9S-LJgE-Ay38C8" TYPE="LVM2_member" 
/dev/sdc: UUID="tZhB6j-UCSL-O9wh-zCL2-RpCV-YwJQ-21sLfp" TYPE="LVM2_member" 
/dev/mapper/mirror-raid1_rimage_0: UUID="d80828a7-a267-4e9d-b497-412e61b5f679" TYPE="ext4" 
/dev/mapper/mirror-raid1_rimage_1: UUID="d80828a7-a267-4e9d-b497-412e61b5f679" TYPE="ext4" 
/dev/mapper/mirror-raid1: UUID="d80828a7-a267-4e9d-b497-412e61b5f679" TYPE="ext4" 

Now you can see that LVM has created rimage_0 and rimage_1 with the same UUID. You can use this UUID as the mount device so that your data is kept in two copies.
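
A minimal /etc/fstab sketch using that UUID (the UUID and the /mnt mount point are taken from the example output above):
#echo 'UUID=d80828a7-a267-4e9d-b497-412e61b5f679 /mnt ext4 defaults 0 0' >> /etc/fstab
#mount -a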

With the lvs command you can see the status of your logical volumes and the progress of their synchronization (the Cpy%Sync column).

#lvs 
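
For a more targeted view of the synchronization, a hedged example (the volume group name 'mirror' comes from above; lv_name, copy_percent and devices are standard lvs output fields):
#lvs -a -o lv_name,copy_percent,devices mirror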

If a disk fails and your computer ends up in emergency mode because the RAID volume cannot be assembled, log in as root and activate the volume groups with:
#vgchange -ay

Now your RAID volume will be activated with only one disk, and you will be able to see your data.
Then you can convert your mirrored device to a linear one:
#vgreduce --removemissing mirror --force
#lvconvert -m 0 /dev/mapper/mirror-raid1
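
To verify the conversion, a quick hedged check of the segment type (segtype is a standard lvs field; it should now report linear):
#lvs -o lv_name,segtype mirror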

After you add a new HDD, you can recreate the mirror:
#pvcreate /dev/sdc
#vgextend mirror /dev/sdc
#lvconvert -m 1 /dev/mapper/mirror-raid1 /dev/sdb /dev/sdc
Are you sure you want to convert linear LV mirror/raid1 to raid1 with 2 images enhancing resilience? [y/n]: y
  Logical volume mirror/raid1 successfully converted.

You have rebuilt your previously faulty mirrored RAID.
Enjoy 😉