
Compare commits


3 Commits

| SHA1 | Message | Date |
| --- | --- | --- |
| `3b91231c46` | mdadm: added raid 5 | 2023-03-22 15:50:22 +01:00 |
| `ee57a84d37` | mdadm: added raid 5 | 2023-03-22 15:49:17 +01:00 |
| `88931985c4` | mdadm: added raid 5 | 2023-03-22 15:42:59 +01:00 |


@@ -1,6 +1,7 @@
# MDADM

MDADM is a utility to create and manage raid devices.
For the rest of this entry `n` is the number of drives inside a raid device.

## Usage
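
The current state of raid devices can be inspected at any time; a minimal
sketch, where `/dev/md0` and `/dev/sda1` are assumed example names:

```sh
cat /proc/mdstat          # overview of all arrays and their sync status
mdadm --detail /dev/md0   # detailed state of a single array
mdadm --examine /dev/sda1 # raid metadata of a single member drive
```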
@@ -20,6 +21,9 @@ be a whole drive and the mdadm drive is called `/dev/md0`.
### Raid 1

Raid 1 creates a mirror with an even number of drives.
For `n=2`, [raid 5](#raid-5) and raid 1 are basically the same.
The space efficiency is `1/n` and the fault tolerance is `n-1` drive failures.
The read performance is `n` and the write performance `1`.
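For example (a hypothetical setup), two 4 TB drives (`n=2`) in raid 1 give
`1/2` space efficiency, i.e. 4 TB of usable space, tolerate the failure of
`1` drive, and can serve reads from both drives at once.
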
#### Create raid 1 device
@@ -31,11 +35,14 @@ If there is an uneven number of disks in the raid, a disk will act as a spare disk.
#### Convert raid 1 to raid 5

Assuming the raid 1 device is called `/dev/md0`.
All other drives are part of the `md0` raid device.
Note that usually it is raid 1 devices consisting of 2 drives that should be
converted to [raid 5](#raid-5).
The complete command sequence is sketched after the list below.

- Remove all drives but 2 (if there are more drives than that) by running
  `mdadm /dev/md0 --fail /dev/sda1` and `mdadm /dev/md0 --remove /dev/sda1`
  where `sda1` is the drive to remove
- Make sure your raid 1 array has only 2 active drives by running
  `mdadm --grow /dev/md0 -n 2`
- Now convert your raid 1 to a raid 5 device with `mdadm --grow /dev/md0 -l5`
@@ -43,5 +50,16 @@ All other drives are part of the `md0` raid device.
  `mdadm /dev/md0 --add /dev/sda1`
- Finally grow the active drive number to your needs (in this example 4)
  `mdadm --grow /dev/md0 -n4`
- MDADM now reshapes the raid. You can monitor it by running
  `watch cat /proc/mdstat`
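
A minimal end-to-end sketch of the conversion described above, assuming a
2-drive raid 1 named `/dev/md0` that is grown to 4 drives with the
hypothetical new partitions `/dev/sdb1` and `/dev/sdc1`:

```sh
# Starting point (assumed): /dev/md0 is a clean 2-drive raid 1.
mdadm --grow /dev/md0 -n 2       # make sure only 2 drives are active
mdadm --grow /dev/md0 -l5        # convert the 2-drive mirror to raid 5
mdadm /dev/md0 --add /dev/sdb1   # add the new drives to the array
mdadm /dev/md0 --add /dev/sdc1
mdadm --grow /dev/md0 -n4        # grow to 4 active drives, the reshape begins
watch cat /proc/mdstat           # monitor the reshape until it finishes
```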
### Raid 5

Raid 5 creates a raid device with distributed parity and requires at least
3 drives.
The space efficiency is `1-(1/n)` and the fault tolerance is `1` drive failure.
The read performance is `n` and the write performance `1/4` for single-sector
writes and `n-1` for full-stripe writes.
In the special case of 2 drives in a raid 5 it is functionally the same as
[raid 1](#raid-1).
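
For example, with `n=4` drives of 4 TB each, a raid 5 has `1-(1/4) = 3/4`
space efficiency, i.e. 12 TB of usable space, and survives the failure of
`1` drive. A minimal creation sketch, assuming three empty partitions
`/dev/sdb1`, `/dev/sdc1` and `/dev/sdd1`:

```sh
# Create a 3-drive raid 5 called /dev/md0 (partition names are assumptions).
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
```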