
Introduction
A Redundant Array of Independent Disks (RAID) is a data storage technology that combines multiple disk drives into a single logical storage unit. You can configure a RAID array to improve performance, duplicate data, or both. The RAID level determines the degree of redundancy or performance improvement and depends on the number of disk drives, their configuration, and how data is distributed across the storage devices.
To increase capacity and performance, create a level, such as RAID 0, that stripes data across multiple disk drives. If your objective is to prevent data loss when a storage device fails, create a level that duplicates data across multiple physical disk drives. Typical arrays that prevent data loss include RAID 1, RAID 5, RAID 6, and RAID 10.
A combination of RAID levels, such as RAID 1+0 (RAID 10), provides both performance improvement and data redundancy. The array combines the data striping of RAID 0 with the mirroring of RAID 1.
You can create almost any RAID level on Ubuntu and other Linux distributions if you have the minimum number of disk drives. This article uses Ubuntu 22.04 LTS to explain how to use the mdadm utility to create various software RAID levels.
Prerequisites
To follow this guide, you need:
- An Ubuntu 22.04 server that supports multiple storage devices. This could be a Vultr Bare Metal server or a cloud server with multiple block storage volumes attached.
- A sudo user on the Ubuntu server.
- Multiple physical hard disks, preferably with the same or similar capacities and speeds. If you are using a Vultr virtual machine, you can use block storage volumes, which operate the same way as physical devices. The minimum number of disk drives depends on the RAID array level. For example, RAID 0 and RAID 1 require a minimum of two drives, while RAID 6 and RAID 10 require at least four devices.
Because we cannot cover every RAID level in one article, this guide focuses on RAID levels 0, 1, 5, and 10.
1: Log in to the server
Connect to your server with SSH as a sudo user.
$ ssh username@host
If you are working with previously partitioned disk drives, you need to delete the existing partitions and create new ones that match your intended array size and configuration.
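For reference, a minimal way to remove old partition tables and file system signatures is the wipefs utility. The device name /dev/vdb below is only an example; replace it with your actual disk, and note that this destroys all data on that disk.
$ sudo wipefs --all /dev/vdb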
2: Check if a RAID array is mounted
Check whether any previous RAID arrays exist and unmount them. If you are using new disk drives, skip this step and go to step 3.
$ sudo cat /proc/mdstat
Output
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
unused devices: <none>
If there is an existing RAID array, unmount it using the following syntax:
$ sudo umount /dev/mdX
For example, for an array named md0, use the command:
$ sudo umount /dev/md0
To stop the RAID array md0, run:
$ sudo mdadm --stop /dev/md0
Remove the array so that you can access and reuse the disk drives.
$ sudo mdadm --remove /dev/md0
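If you intend to reuse the member disks in a new array, it is also a good idea to clear the old RAID superblocks so that mdadm does not detect stale metadata. This example assumes the former members were /dev/vdb and /dev/vdc; adjust the device names for your setup.
$ sudo mdadm --zero-superblock /dev/vdb /dev/vdc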
3: Identify available disk drives
Run the following command to identify the available disk drives, their sizes, file systems, and mount points.
$ lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
Typical output
NAME SIZE FSTYPE TYPE MOUNTPOINT
sr0 1024M rom
vda 50G disk
└─vda1 50G ext4 part /
vdb 160G disk
vdc 160G disk
In addition to the drive vda, which holds the operating system, there are two more devices, vdb and vdc.
4: Partition the disk drives
Before creating the RAID, it is good practice to partition disk drives such as vdb and vdc.
Create an empty partition table on vdb.
$ sudo parted -s /dev/vdb mklabel gpt
Run the same command for the second disk drive, vdc.
$ sudo parted -s /dev/vdc mklabel gpt
If you have more than two drives, run the above command for each of them, replacing vdc with the appropriate disk identifier.
Create a partition that spans the entire drive. You can specify a smaller partition if you do not want to use the whole disk drive.
$ sudo parted -s /dev/vdb unit mib mkpart primary 0% 100%
Repeat the command for all the drives, replacing vdb with the appropriate disk drive name.
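To verify the new partition layout, you can print the partition table. The example below assumes the disk is /dev/vdb.
$ sudo parted -s /dev/vdb print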
5: Create software RAID
The syntax for creating any RAID level is:
$ sudo mdadm --create --verbose /dev/[RAID array name] --level=[RAID level] --raid-devices=[number of storage devices] [storage device identifier] [storage device identifier]
The --verbose flag, which displays the progress in real time, is optional.
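Ubuntu 22.04 server images usually ship with mdadm preinstalled. If the command is missing on your system, install it from the standard repositories first.
$ sudo apt install -y mdadm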
5.1: How to create RAID 0
RAID 0 requires a minimum of two storage devices.
Scan for available disk drives, partitions, and their identifiers.
$ lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
Output
NAME SIZE FSTYPE TYPE MOUNTPOINT
sr0 1024M rom
vda 50G disk
└─vda1 50G ext4 part /
vdb 160G disk
└─vdb1 160G part
vdc 160G disk
└─vdc1 160G part
To create a RAID 0 array called md0 using the primary partitions vdb1 and vdc1, run the command:
$ sudo mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/vdb1 /dev/vdc1
Typical output
mdadm: chunk size defaults to 512K
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
Check the RAID status.
$ sudo cat /proc/mdstat
Output
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid0 vdc1[1] vdb1[0]
335276032 blocks super 1.2 512k chunks
unused devices: <none>
The output shows the active RAID level and the disk drives/partitions it uses.
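For a more detailed view of the array, including its state, size, and member devices, you can also run:
$ sudo mdadm --detail /dev/md0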
To configure automatic assembly and mounting of the RAID 0 array on boot, go to step 6.
5.2: How to create RAID 1
RAID 1 requires a minimum of two storage devices.
Scan for available disk drives, partitions, and their identifiers.
$ lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
Output
NAME SIZE FSTYPE TYPE MOUNTPOINT
sr0 1024M rom
vda 25G disk
└─vda1 25G ext4 part /
vdb 120G disk
vdc 120G disk
Although you can use partitions, as in the RAID 0 example above, this section shows how to create RAID 1 using the 120 GB disk drives without partitioning them first.
$ sudo mdadm --create --verbose /dev/md0 --level=1 --raid-devices=2 /dev/vdb /dev/vdc
Type y when prompted to confirm.
Output
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device. If you plan to
store '/boot' on this device, please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
mdadm: size set to 125762560K
mdadm: automatically enabling write-intent bitmap on large array
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
Check the RAID status.
$ sudo cat /proc/mdstat
Typical output:
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid1 vdc[1] vdb[0]
125762560 blocks super 1.2 [2/2] [UU]
[=>...................] resync = 9.2% (11666368/125762560) finish=19.5min speed=97113K/sec
bitmap: 1/1 pages [4KB], 65536KB chunk
unused devices: <none>
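The resync line shows that the mirror is still synchronizing in the background. The array is usable immediately, but if you want to watch the progress until it completes, you can optionally run the command below and press Ctrl+C to exit.
$ watch cat /proc/mdstat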
To display the disk drives, the size allocated to the RAID array, and its status, run:
$ lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
Typical output
NAME SIZE FSTYPE TYPE MOUNTPOINT
sr0 1024M rom
vda 25G disk
└─vda1 25G ext4 part /
vdb 120G linux_raid_member disk
└─md0 119.9G raid1
vdc 120G linux_raid_member disk
└─md0 119.9G raid1
To configure automatic assembly and mounting of the RAID 1 array on boot, go to step 6.
5.3: How to create RAID 5
RAID 5 requires a minimum of three storage devices.
Scan for available disk drives, partitions, and their identifiers.
$ lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
Typical output
NAME SIZE FSTYPE TYPE MOUNTPOINT
sr0 1024M rom
vda 50G disk
└─vda1 50G ext4 part /
vdb 160G disk
vdc 160G disk
vdd 160G disk
We assume the drives are all new, so we create the RAID 5 array without prior partitioning.
$ sudo mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/vdb /dev/vdc /dev/vdd
Check the active RAID.
$ sudo cat /proc/mdstat
Output
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 vdd[3] vdc[1] vdb[0]
335280128 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
[>....................] recovery = 3.4% (5773616/167640064) finish=31.3min speed=86110K/sec
bitmap: 0/2 pages [0KB], 65536KB chunk
To configure automatic assembly and mounting of the RAID 5 array on boot, go to step 6.
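Note that the [3/2] [UU_] markers in the status output appear because mdadm builds RAID 5 parity in the background; the array is usable in the meantime. To confirm when the initial build finishes, you can check the array state.
$ sudo mdadm --detail /dev/md0 | grep -i state
Also note the usable capacity: with three 160 GiB drives, RAID 5 provides roughly (3 - 1) x 160 GiB = 320 GiB, which matches the 335280128 KiB of blocks shown above.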
5.4: How to create RAID 10
RAID 10 requires a minimum of four storage devices.
Scan for available disk drives, partitions, and their identifiers.
$ lsblk -o NAME,SIZE,FSTYPE,TYPE,MOUNTPOINT
Output
NAME SIZE FSTYPE TYPE MOUNTPOINT
sr0 1024M rom
vda 50G disk
└─vda1 50G ext4 part /
vdb 160G disk
vdc 160G disk
vdd 160G disk
vde 160G disk
Create the RAID 10 array.
$ sudo mdadm --create --verbose /dev/md0 --level=10 --raid-devices=4 /dev/vdb /dev/vdc /dev/vdd /dev/vde
Check the active RAID.
$ sudo cat /proc/mdstat
Output
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md0 : active raid10 vde[3] vdd[2] vdc[1] vdb[0]
335280128 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
[>....................] resync = 2.6% (8807232/335280128) finish=32.4min speed=167884K/sec
bitmap: 3/3 pages [12KB], 65536KB chunk
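The 2 near-copies entry indicates the default near layout, which stores two copies of each block on adjacent devices. To inspect the layout and rebuild state in more detail, you can optionally run:
$ sudo mdadm --detail /dev/md0
To configure automatic assembly and mounting of the RAID 10 array on boot, go to step 6.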
6: Create and mount a file system on the RAID array
Once you configure a RAID array, you must create a file system on it and mount the array. The following instructions show how to create the file system, mount the RAID array, and configure the server to assemble and mount the array when booting. The commands are the same for all RAID levels.
Create the file system
The syntax for creating the file system is:
$ sudo mkfs -t [file system type] [RAID device]
For example, to create an ext4 file system on the RAID 1 array we created above, run the command:
$ sudo mkfs -t ext4 /dev/md0
7: Mount the RAID array
Create a mount point that matches the /etc/fstab entry used in step 8, then mount the RAID array:
$ sudo mkdir -p /mnt/md0
$ sudo mount /dev/md0 /mnt/md0
Check whether the new file system is accessible.
$ df -h -x devtmpfs -x tmpfs
8: Configure automatic mount on boot
To ensure that the server automatically assembles and mounts the RAID array when booting, you need to edit the configuration file /etc/mdadm/mdadm.conf.
Use the command below to scan the system, identify the active array, and automatically append the necessary details to the /etc/mdadm/mdadm.conf file.
$ sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
To make the array available early in the boot process, update the initial RAM file system (initramfs).
$ sudo update-initramfs -u
Additionally, you must add the file system mount options to the /etc/fstab file. Run the command below to append the required entry to the /etc/fstab file.
$ echo '/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab
The above commands ensure that the RAID array mounts automatically when the machine boots, and they apply to all RAID levels.
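Before rebooting, you can optionally test the new /etc/fstab entry without a restart. The check below assumes you created the /mnt/md0 mount point as shown in step 7.
$ sudo mount -a
$ findmnt /mnt/md0
If findmnt prints the mount details, the entry is valid.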
To confirm, reboot the server and run the command below to check the RAID status.
$ sudo cat /proc/mdstat
Summary
A RAID array is a storage technology that combines multiple physical disk drives into a logical unit that offers data redundancy and recovery, performance improvement, or both. The choice of RAID level depends on whether you want to improve read/write performance, add redundancy to prevent data loss in case of a drive failure, or both. While this article focuses on a few RAID levels, you can use the same instructions to create other arrays on Ubuntu 22.04 LTS and other Linux distributions.