# Configure the System RAID

Ankit Sambhare
6 min read · Jan 5, 2021

Prerequisites:-

  • CentOS 7 machine
  • mdadm package installed
  • Two data volumes attached to the machine for building the RAID

Steps to perform:-

  • First, check that the attached disks are available; a quick lsblk does this (sketch below).
Attached disk info
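The device names /dev/nvme1n1 and /dev/nvme2n1 used throughout this demo may differ on your machine:

lsblk
# or limit the listing to the two data volumes
lsblk /dev/nvme1n1 /dev/nvme2n1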

Note: The partitions can be created with either of the built-in utilities, fdisk or parted. For this demo I have already taken a backup of the partition tables created manually with those utilities, and here sfdisk will restore the partition tables with the correct labels and sizes.
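For reference, such partition-table backups are typically produced with sfdisk in dump mode; assuming the two devices from this demo, the files referenced below could be generated like this:

# dump each device's partition table into a restorable text file
sfdisk -d /dev/nvme1n1 > nvme1n1.txt
sfdisk -d /dev/nvme2n1 > nvme2n1.txt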

Partition Table files:-

Partition Files of Devices

Contents of the Partition files:-

  • Device /dev/nvme1n1
/dev/nvme1n1
  • Device /dev/nvme2n1
/dev/nvme2n1
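As an illustrative sketch (the start/size values below are assumptions, not taken from the demo), an sfdisk dump describing three 200 MiB partitions of type fd (Linux raid autodetect) on /dev/nvme1n1 looks roughly like this:

label: dos
device: /dev/nvme1n1
unit: sectors

/dev/nvme1n1p1 : start=2048,   size=409600, type=fd
/dev/nvme1n1p2 : start=411648, size=409600, type=fd
/dev/nvme1n1p3 : start=821248, size=409600, type=fd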
  • Create 3 partitions on the /dev/nvme1n1 device
sfdisk /dev/nvme1n1 < nvme1n1.txt
  • Verify the partition info
[root@b1e95f64d31c ~]# lsblk /dev/nvme1n1
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
nvme1n1     259:0    0    2G  0 disk
├─nvme1n1p1 259:4    0  200M  0 part
├─nvme1n1p2 259:5    0  200M  0 part
└─nvme1n1p3 259:6    0  200M  0 part
  • Create 7 partitions on /dev/nvme2n1: 3 primary, 1 extended, and 3 logical
sfdisk /dev/nvme2n1 < nvme2n1.txt
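As a rough sketch (start/size values again assumed, not from the demo), nvme2n1.txt would describe the 3 primary, 1 extended, and 3 logical partitions; type f marks the extended (LBA) container and the logical partitions live inside it:

label: dos
device: /dev/nvme2n1
unit: sectors

/dev/nvme2n1p1 : start=2048,    size=387072,  type=fd
/dev/nvme2n1p2 : start=391168,  size=387072,  type=fd
/dev/nvme2n1p3 : start=782336,  size=387072,  type=fd
/dev/nvme2n1p4 : start=1173504, size=2926592, type=f
/dev/nvme2n1p5 : start=1175552, size=387072,  type=fd
/dev/nvme2n1p6 : start=1566720, size=387072,  type=fd
/dev/nvme2n1p7 : start=1957888, size=387072,  type=fd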
  • Verify the partition table info
[root@b1e95f64d31c ~]# parted /dev/nvme2n1 print
Model: NVMe Device (nvme)
Disk /dev/nvme2n1: 2147MB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type      File system  Flags
 1      1049kB  200MB   199MB   primary                raid
 2      201MB   400MB   198MB   primary                raid
 3      401MB   600MB   199MB   primary                raid
 4      601MB   2100MB  1499MB  extended               lba
 5      602MB   800MB   198MB   logical                raid
 6      801MB   1000MB  199MB   logical                raid
 7      1001MB  1200MB  198MB   logical                raid
  • Check whether the mdadm utility is installed 👍
rpm -q mdadm
  • If it is not installed, run the command below 👇
yum install -y mdadm

Creating the Software RAID using mdadm.

  • Here we create a new RAID level 5 device with 3 active devices and 3 spare devices for fault tolerance.
mdadm -C /dev/md0 -l raid5 -n 3 /dev/nvme1n1p1 /dev/nvme1n1p2 /dev/nvme1n1p3 -x 3 /dev/nvme2n1p1 /dev/nvme2n1p2 /dev/nvme2n1p3
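The array starts an initial sync as soon as it is created; either of these standard commands can be used to follow or wait for it:

# refresh the mdstat view every 2 seconds
watch -n 2 cat /proc/mdstat
# or block until the initial sync/recovery has finished
mdadm --wait /dev/md0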
  • Verify the creation of the new RAID device
[root@b1e95f64d31c ~]# ls /dev/md*
/dev/md0
  • Details of the device can also be obtained with the -D (--detail) flag
[root@b1e95f64d31c ~]# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Thu Dec 31 05:32:24 2020
Raid Level : raid5
Array Size : 382976 (374.00 MiB 392.17 MB)
Used Dev Size : 191488 (187.00 MiB 196.08 MB)
Raid Devices : 3
Total Devices : 6
Persistence : Superblock is persistent

Update Time : Thu Dec 31 05:34:39 2020
State : clean
Active Devices : 3
Working Devices : 6
Failed Devices : 0
Spare Devices : 3

Layout : left-symmetric
Chunk Size : 512K

Consistency Policy : resync

Name : b1e95f64d31c.mylabserver.com:0 (local to host b1e95f64d31c.mylabserver.com)
UUID : edc168fe:082e62d2:7183995d:7bd4e9ed
Events : 18

Number Major Minor RaidDevice State
0 259 2 0 active sync /dev/nvme1n1p1
1 259 3 1 active sync /dev/nvme1n1p2
6 259 4 2 active sync /dev/nvme1n1p3

3 259 5 - spare /dev/nvme2n1p1
4 259 6 - spare /dev/nvme2n1p2
5 259 7 - spare /dev/nvme2n1p3

Or, the same status can be read from /proc/mdstat:

[root@b1e95f64d31c ~]# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 nvme1n1p3[6] nvme2n1p3[5](S) nvme2n1p2[4](S) nvme2n1p1[3](S) nvme1n1p2[1] nvme1n1p1[0]
382976 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]

unused devices: <none>
  • To make the configuration persist across reboots, we will create the mdadm.conf file. Scan the arrays with -s (--scan) and -v (--verbose) and redirect the output into the configuration file
# mdadm -D -s -v > /etc/mdadm.conf
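For the array built in this demo, the resulting file would contain roughly one ARRAY line like the sketch below (exact fields vary with the mdadm version):

ARRAY /dev/md0 level=raid5 num-devices=3 metadata=1.2 spares=3 name=b1e95f64d31c.mylabserver.com:0 UUID=edc168fe:082e62d2:7183995d:7bd4e9ed
   devices=/dev/nvme1n1p1,/dev/nvme1n1p2,/dev/nvme1n1p3,/dev/nvme2n1p1,/dev/nvme2n1p2,/dev/nvme2n1p3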
  • Create a mount point for the newly created array
mkdir /mnt/data
  • Now we will create the filesystem on the array
mkfs -t ext4 /dev/md0
  • Now we mount the array /dev/md0 on /mnt/data (this can be checked with df -h). After mounting, we create a directory called backup inside it and copy the contents of /etc into it
mount /dev/md0 /mnt/data
mkdir /mnt/data/backup
cp -rf /etc/* /mnt/data/backup
  • To make the mount persist across reboots, we need to add an entry to the /etc/fstab file. For that we need the UUID of the array we created, which we get from the command below.
# blkid
  • /etc/fstab Contents
/etc/fstab contents
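The added line would look roughly like the sketch below, with the placeholder UUID replaced by the value blkid reports for /dev/md0:

UUID=<uuid-of-/dev/md0>   /mnt/data   ext4   defaults   0 0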
  • Unmount the array and test that it can be mounted back from the fstab file
# umount /mnt/data
# mount -a
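A quick check that the fstab entry works as expected:

# the array should be mounted again and the copied files visible
df -h /mnt/data
ls /mnt/data/backup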

Managing Failover and Recovery of RAID Devices

  • Now we will fail one device and observe a spare device rebuilding to take its place. Make sure not to fail 2 devices at a time, or we will lose the data
[root@b1e95f64d31c ~]# mdadm -f  /dev/md0 /dev/nvme1n1p1
mdadm: set /dev/nvme1n1p1 faulty in /dev/md0
  • Now the rebuild process starts and a spare device takes the place of the faulty drive
   1     259     3       1      active sync   /dev/nvme1n1p2 👈
   6     259     4       2      active sync   /dev/nvme1n1p3

   0     259     2       -      faulty        /dev/nvme1n1p1 👈
   3     259     5       -      spare         /dev/nvme2n1p1
   4     259     6       -      spare         /dev/nvme2n1p2
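The rebuild progress can be followed with either of these standard commands:

cat /proc/mdstat
mdadm -D /dev/md0 | grep -i rebuild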
  • Similarly, we will fail the remaining active devices and let the spares replace them, as above
/dev/md0:
Version : 1.2
Creation Time : Thu Dec 31 05:32:24 2020
Raid Level : raid5
Array Size : 382976 (374.00 MiB 392.17 MB)
Used Dev Size : 191488 (187.00 MiB 196.08 MB)
Raid Devices : 3
Total Devices : 6
Persistence : Superblock is persistent

Update Time : Mon Jan 4 11:50:23 2021
State : clean, degraded, recovering
Active Devices : 2
Working Devices : 3
Failed Devices : 3
Spare Devices : 1

Layout : left-symmetric
Chunk Size : 512K

Consistency Policy : resync

Rebuild Status : 20% complete

Name : b1e95f64d31c.mylabserver.com:0 (local to host b1e95f64d31c.mylabserver.com)
UUID : edc168fe:082e62d2:7183995d:7bd4e9ed
Events : 63

Number Major Minor RaidDevice State
5 259 7 0 active sync /dev/nvme2n1p3
3 259 5 1 active sync /dev/nvme2n1p1
4 259 6 2 spare rebuilding /dev/nvme2n1p2
  • Remove the failed devices one by one or together from the array
[root@b1e95f64d31c ~]# mdadm -r /dev/md0 /dev/nvme1n1p1 /dev/nvme1n1p2 /dev/nvme1n1p3
mdadm: hot removed /dev/nvme1n1p1 from /dev/md0
mdadm: hot removed /dev/nvme1n1p2 from /dev/md0
mdadm: hot removed /dev/nvme1n1p3 from /dev/md0
  • Add new spare devices to the array, one by one or all together, and verify the result
[root@b1e95f64d31c ~]# mdadm -a /dev/md0 /dev/nvme2n1p5 /dev/nvme2n1p6 /dev/nvme2n1p7
mdadm: added /dev/nvme2n1p5
mdadm: added /dev/nvme2n1p6
mdadm: added /dev/nvme2n1p7
[root@b1e95f64d31c ~]# mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Thu Dec 31 05:32:24 2020
Raid Level : raid5
Array Size : 382976 (374.00 MiB 392.17 MB)
Used Dev Size : 191488 (187.00 MiB 196.08 MB)
Raid Devices : 3
Total Devices : 6
Persistence : Superblock is persistent

Update Time : Mon Jan 4 12:00:34 2021
State : clean
Active Devices : 3
Working Devices : 6
Failed Devices : 0
Spare Devices : 3

Layout : left-symmetric
Chunk Size : 512K

Consistency Policy : resync

Name : b1e95f64d31c.mylabserver.com:0 (local to host b1e95f64d31c.mylabserver.com)
UUID : edc168fe:082e62d2:7183995d:7bd4e9ed
Events : 81

Number Major Minor RaidDevice State
5 259 7 0 active sync /dev/nvme2n1p3
3 259 5 1 active sync /dev/nvme2n1p1
4 259 6 2 active sync /dev/nvme2n1p2

6 259 9 - spare /dev/nvme2n1p5 👈
7 259 10 - spare /dev/nvme2n1p6 👈
8 259 11 - spare /dev/nvme2n1p7 👈
  • Add this new information back to the main configuration /etc/mdadm.conf to make it persistent
# mdadm -D -s -v >/etc/mdadm.conf
