Linux software RAID has nothing to do with hardware RAID controllers. You don’t need an add-on controller, and you don’t need the onboard controllers that come on most motherboards.
The RAID devices are virtual devices created from two or more real block devices. This allows multiple devices (typically disk drives or partitions thereof) to be combined into a single device to hold (for example) a single filesystem.
Some RAID levels include redundancy and so can survive some degree of device failure.
Currently, Linux supports LINEAR md devices, RAID0 (striping), RAID1 (mirroring), RAID4, RAID5, RAID6, RAID10, MULTIPATH, FAULTY, and CONTAINER.
MULTIPATH is not a Software RAID mechanism, but does involve multiple devices: each device is a path to one common physical storage device. New installations should not use md/multipath as it is not well supported and has no ongoing development. Use the Device Mapper based multipath-tools instead.
FAULTY is also not true RAID, and it only involves one device. It provides a layer over a true device that can be used to inject faults.
CONTAINER is different again. A CONTAINER is a collection of devices that are managed as a set. This is similar to the set of devices connected to a hardware RAID controller. The set of devices may contain a number of different RAID arrays each utilising some (or all) of the blocks from a number of the devices in the set. For example, two devices in a 5-device set might form a RAID1 using the whole devices. The remaining three might have a RAID5 over the first half of each device, and a RAID0 over the second half.
With a CONTAINER, there is one set of metadata that describes all of the arrays in the container. So when mdadm creates a CONTAINER device, the device just represents the metadata. Other normal arrays (RAID1 etc) can be created inside the container.
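For illustration only, a container (here with Intel IMSM metadata) and a RAID1 inside it could be created roughly like this; the device and array names (/dev/sdb, /dev/sdc, /dev/md/imsm0, /dev/md/vol0) are placeholders and not part of the walkthrough below:
root@deb# mdadm --create /dev/md/imsm0 --metadata=imsm --raid-devices=2 /dev/sdb /dev/sdc
root@deb# mdadm --create /dev/md/vol0 --level=1 --raid-devices=2 /dev/md/imsm0
The rest of this article uses ordinary arrays with native md metadata.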
Linux Software RAID devices are implemented through the md (Multiple Devices) device driver.
You need to install mdadm, which is used to create, manage, and monitor Linux software MD (RAID) devices.
1. Install mdadm:
root@deb# apt-get install mdadm
2. Create the RAID array using mdadm:
First, run the fdisk command to create a partition. The partition type should be “Linux raid autodetect”, i.e. type “fd”.
In this example I am using /dev/sdi, /dev/sdk, /dev/sdl, and /dev/sdm for a RAID10 array.
Example fdisk output (shown here for /dev/sdf, a disk that has already been partitioned this way):
root@deb# fdisk -l /dev/sdf
Disk /dev/sdf: 2147 MB, 2147483648 bytes
22 heads, 16 sectors/track, 11915 cylinders, total 4194304 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000
Device Boot Start End Blocks Id System
/dev/sdf1 63 4194303 2097120+ fd Linux raid autodetect
As you can see, the partition sdf1 has the Linux raid autodetect partition type.
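If the partition does not exist yet, a typical interactive fdisk session looks roughly like this (adapt the device name; the exact prompts depend on the fdisk version):
root@deb# fdisk /dev/sdi
Command (m for help): n        (create a new partition; accept the defaults for number and size)
Command (m for help): t        (change the partition type)
Hex code (type L to list codes): fd        (Linux raid autodetect)
Command (m for help): w        (write the partition table and exit)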
When you have finished partitioning your disks, you can format them. In this case I am formatting the partition with the ext4 file system (strictly speaking this step is optional, since the array device itself will be formatted later anyway):
root@deb# mkfs.ext4 /dev/sdi1
mke2fs 1.42 (29-Nov-2011)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
131072 inodes, 524280 blocks
26214 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=536870912
16 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912
Allocating group tables: done
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done
Now I will create the RAID10 array using four disks (sdi, sdk, sdl, sdm):
root@deb# mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sdi1 /dev/sdk1 /dev/sdl1 /dev/sdm1
mdadm: /dev/sdi1 appears to contain an ext2fs file system
size=2097144K mtime=Thu Jan 1 01:00:00 1970
mdadm: /dev/sdk1 appears to contain an ext2fs file system
size=2097144K mtime=Thu Jan 1 01:00:00 1970
mdadm: /dev/sdl1 appears to contain an ext2fs file system
size=2097144K mtime=Thu Jan 1 01:00:00 1970
mdadm: /dev/sdm1 appears to contain an ext2fs file system
size=2097144K mtime=Thu Jan 1 01:00:00 1970
Continue creating array? y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.
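The newly started array performs an initial synchronisation in the background; you can watch its progress with, for example:
root@deb# watch cat /proc/mdstat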
Then format /dev/md1:
root@deb# mkfs.ext4 /dev/md1
mke2fs 1.42 (29-Nov-2011)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=128 blocks, Stripe width=256 blocks
262144 inodes, 1047808 blocks
52390 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=1073741824
32 block groups
32768 blocks per group, 32768 fragments per group
8192 inodes per group
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736
Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
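Note that mke2fs picked Stride=128 and Stripe width=256 on its own: the stride matches the 512K chunk size (512K / 4K block size = 128 blocks), and with four devices and two near copies there are effectively two data members, so the stripe width is 2 × 128 = 256 blocks.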
Now the RAID10 array is created. To check that everything was created correctly, run:
root@deb# mdadm --detail --scan
ARRAY /dev/md1 metadata=1.2 name=securelinx:1 UUID=4cc91a23:5d25f3a9:8aabbb97:99f152a5
For more details:
root@deb# mdadm --detail /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Mon Feb 6 15:16:04 2012
Raid Level : raid10
Array Size : 4191232 (4.00 GiB 4.29 GB)
Used Dev Size : 2095616 (2046.84 MiB 2145.91 MB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Mon Feb 6 15:31:32 2012
State : active
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : near=2
Chunk Size : 512K
Name : securelinx:1 (local to host securelinx)
UUID : 4cc91a23:5d25f3a9:8aabbb97:99f152a5
Events : 18
Number Major Minor RaidDevice State
0 8 129 0 active sync /dev/sdi1
1 8 161 1 active sync /dev/sdk1
2 8 177 2 active sync /dev/sdl1
3 8 193 3 active sync /dev/sdm1
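The sizes add up as expected for RAID10 with two near copies: the usable Array Size is half the raw capacity of the four members, i.e. 2 × 2095616 = 4191232 blocks (about 4 GiB).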
Or check /proc/mdstat:
root@deb# cat /proc/mdstat
Personalities : [linear] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid10 sdm1[3] sdl1[2] sdk1[1] sdi1[0]
4191232 blocks super 1.2 512K chunks 2 near-copies [4/4] [UUUU]
unused devices: <none>
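mdadm can also monitor arrays and send mail when a device fails. On Debian the mdadm package normally starts such a monitor for you (see /etc/default/mdadm); started by hand it would look something like this, assuming local mail delivery works:
root@deb# mdadm --monitor --scan --daemonise --mail=root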
Now it is time to add md1 to /etc/fstab. To do that, I use the blkid command to get the UUID of md1.
root@deb# blkid
/dev/md1: UUID="bf55ad2e-af61-4526-91f8-2f09b9364058" TYPE="ext4"
and add it to /etc/fstab:
root@deb# echo "UUID=bf55ad2e-af61-4526-91f8-2f09b9364058 /mnt/md1 ext4 defaults 0 0" >> /etc/fstab
3. Find the array containing the failed disk.
I will scan all arrays to find the one I am interested in:
root@deb# mdadm --detail --scan
ARRAY /dev/md/0 metadata=1.2 name=securelinx:0 UUID=0832adcb:bd2212a0:287b81fb:8fd24a1c
ARRAY /dev/md1 metadata=1.2 name=securelinx:1 UUID=dce5e951:8d94d477:61dc8f5b:759dfc71
The array I am looking for is md1. More details of this array:
root@deb# mdadm --detail /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Mon Feb 6 15:45:55 2012
Raid Level : raid10
Array Size : 4191232 (4.00 GiB 4.29 GB)
Used Dev Size : 2095616 (2046.84 MiB 2145.91 MB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Mon Feb 6 15:49:50 2012
State : active
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : near=2
Chunk Size : 512K
Name : securelinx:1 (local to host securelinx)
UUID : dce5e951:8d94d477:61dc8f5b:759dfc71
Events : 18
Number Major Minor RaidDevice State
0 8 129 0 active sync /dev/sdi1
1 8 161 1 active sync /dev/sdk1
2 8 177 2 active sync /dev/sdl1
3 8 193 3 active sync /dev/sdm1
4. Remove a disk from an array.
We can’t remove a disk from the array directly unless it is marked as failed, so we first have to fail it (if the drive has actually died, it is normally already in the failed state and this step is not needed):
root@deb# mdadm /dev/md1 --fail /dev/sdm1 --remove /dev/sdm1
mdadm: set /dev/sdm1 faulty in /dev/md1
mdadm: hot removed /dev/sdm1 from /dev/md1
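The same thing can also be done as two separate commands, which is handy if you only want to mark the disk as failed for the moment:
root@deb# mdadm /dev/md1 --fail /dev/sdm1
root@deb# mdadm /dev/md1 --remove /dev/sdm1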
Is it removed?
root@deb# mdadm --detail /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Mon Feb 6 15:45:55 2012
Raid Level : raid10
Array Size : 4191232 (4.00 GiB 4.29 GB)
Used Dev Size : 2095616 (2046.84 MiB 2145.91 MB)
Raid Devices : 4
Total Devices : 3
Persistence : Superblock is persistent
Update Time : Mon Feb 6 15:52:42 2012
State : active, degraded
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Spare Devices : 0
Layout : near=2
Chunk Size : 512K
Name : securelinx:1 (local to host securelinx)
UUID : dce5e951:8d94d477:61dc8f5b:759dfc71
Events : 20
Number Major Minor RaidDevice State
0 8 129 0 active sync /dev/sdi1
1 8 161 1 active sync /dev/sdk1
2 8 177 2 active sync /dev/sdl1
3 0 0 3 removed
Yes, it is. Now go to a shop to buy a new hard drive.
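To work out which physical drive to pull from the chassis (assuming the drive is still visible to the kernel), the persistent device links show the model and serial number; smartctl -i /dev/sdm from the smartmontools package works too, if it is installed:
root@deb# ls -l /dev/disk/by-id/ | grep sdm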
5. Add a hard drive to an array.
As above, create the partition on the new disk, then add it to the array (there is no need to format the partition first; the rebuild will overwrite it anyway):
root@deb# mdadm /dev/md1 --add /dev/sdn1
mdadm: added /dev/sdn1
Check whether sdn1 was added to the md1 array:
root@deb# mdadm --detail /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Mon Feb 6 15:45:55 2012
Raid Level : raid10
Array Size : 4191232 (4.00 GiB 4.29 GB)
Used Dev Size : 2095616 (2046.84 MiB 2145.91 MB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Mon Feb 6 16:03:49 2012
State : clean
Active Devices : 4
Working Devices : 4
Failed Devices : 0
Spare Devices : 0
Layout : near=2
Chunk Size : 512K
Name : securelinx:1 (local to host securelinx)
UUID : dce5e951:8d94d477:61dc8f5b:759dfc71
Events : 63
Number Major Minor RaidDevice State
0 8 129 0 active sync /dev/sdi1
1 8 161 1 active sync /dev/sdk1
2 8 177 2 active sync /dev/sdl1
4 8 209 3 active sync /dev/sdn1
Cool 🙂
While the disk is being rebuilt into the array, look at /proc/mdstat and you should see something like this:
root@deb# cat /proc/mdstat
Personalities : [linear] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid10 sdn1[4] sdl1[2] sdk1[1] sdi1[0]
4191232 blocks super 1.2 512K chunks 2 near-copies [4/3] [UUU_]
[>....................] recovery = 4.7% (99008/2095616) finish=1.0min speed=33002K/sec
Or run mdadm --detail /dev/md1 and you should see this:
root@deb# mdadm --detail /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Mon Feb 6 15:45:55 2012
Raid Level : raid10
Array Size : 4191232 (4.00 GiB 4.29 GB)
Used Dev Size : 2095616 (2046.84 MiB 2145.91 MB)
Raid Devices : 4
Total Devices : 4
Persistence : Superblock is persistent
Update Time : Mon Feb 6 16:09:43 2012
State : clean, degraded, recovering
Active Devices : 3
Working Devices : 4
Failed Devices : 0
Spare Devices : 1
Layout : near=2
Chunk Size : 512K
Rebuild Status : 20% complete
Name : securelinx:1 (local to host securelinx)
UUID : dce5e951:8d94d477:61dc8f5b:759dfc71
Events : 70
Number Major Minor RaidDevice State
0 8 129 0 active sync /dev/sdi1
1 8 161 1 active sync /dev/sdk1
2 8 177 2 active sync /dev/sdl1
4 8 209 3 spare rebuilding /dev/sdn1
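Once the rebuild finishes, the State line returns to clean and /dev/sdn1 is listed as active sync, as in the earlier output.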
6. Remove an array.
To remove an array, just type:
mdadm --stop /dev/md1
mdadm --remove /dev/md1
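Before stopping the array, make sure it is unmounted and remove its entry from /etc/fstab, e.g.:
root@deb# umount /mnt/md1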
7. SHORT format:
Create an array:
root@deb# mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sdi1 /dev/sdk1 /dev/sdl1 /dev/sdm1
Scan all arrays:
root@deb# mdadm --detail --scan
Details of an array:
root@deb# mdadm --detail /dev/md1
Remove failed disk from an array:
root@deb# mdadm /dev/md1 --fail /dev/sdm1 --remove /dev/sdm1
Add a disk to an array:
root@deb# mdadm /dev/md1 --add /dev/sdn1
Remove an array:
root@deb# mdadm --stop /dev/md1
root@deb# mdadm --remove /dev/md1
root@deb# mdadm --zero-superblock /dev/sd[iklm]1
--zero-superblock deletes the md superblock from all drives that were part of the array.
Edit /etc/mdadm/mdadm.conf to delete any lines related to the deleted array.
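After editing mdadm.conf (and /etc/fstab, if the array was listed there), refresh the initramfs so the removed array is no longer assembled at boot:
root@deb# update-initramfs -u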