# MDADM

`mdadm` is a utility to create and manage raid devices.
For the rest of this entry `n` is the number of drives inside a raid device.
## Usage
### Raid 1

Raid 1 creates a mirror with an even number of drives.
For `n=2` [raid 5](#raid-5) and raid 1 are basically the same.
The space efficiency is `1/n` and the fault tolerance is `n-1` drive failures.
The read performance is `n` and the write performance is `1`.
#### Create raid 1 device

If there is an uneven number of disks in the raid, one disk will act as a
spare disk.
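A minimal sketch of creating such a mirror, assuming two example partitions
`/dev/sda1` and `/dev/sdb1` (adjust the device names to your setup):

```shell
# Create a raid 1 mirror /dev/md0 from two example partitions.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
# Check the state of the new array.
cat /proc/mdstat
```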
#### Convert raid 1 to raid 5

Assuming the raid 1 device is called `/dev/md0`.
All other drives are part of the `md0` raid device.
Note that usually raid 1 devices consisting of 2 drives are the ones converted
to [raid 5](#raid-5).
- Remove all drives but 2 (if there are more drives than that) by running
  `mdadm /dev/md0 --fail /dev/sda1` and `mdadm /dev/md0 --remove /dev/sda1`
  where `sda1` is the drive to remove
- Make sure your raid 1 array has only 2 active drives by running
  `mdadm --grow /dev/md0 -n 2`
- Now convert your raid 1 to a raid 5 device with `mdadm --grow /dev/md0 -l5`
- Add the drives that should be part of the raid 5 device by running
  `mdadm /dev/md0 --add /dev/sda1`
- Finally grow the active drive number to your needs (in this example 4) with
  `mdadm --grow /dev/md0 -n4`
- `mdadm` now reshapes the raid. You can monitor it by running
  `watch cat /proc/mdstat`
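The conversion steps above can be collected into one sketch. The device names
(`/dev/md0`, `/dev/sda1`) are examples; substitute your own array and
partitions:

```shell
# Sketch of converting a raid 1 array to raid 5; device names are examples.
# Remove a surplus drive from the array (repeat for each surplus drive).
mdadm /dev/md0 --fail /dev/sda1
mdadm /dev/md0 --remove /dev/sda1
# Shrink the array to exactly 2 active drives.
mdadm --grow /dev/md0 -n 2
# Convert the 2-drive raid 1 to raid 5.
mdadm --grow /dev/md0 -l5
# Add the removed drives back and grow to the desired number (4 here).
mdadm /dev/md0 --add /dev/sda1
mdadm --grow /dev/md0 -n4
# Watch the reshape progress.
watch cat /proc/mdstat
```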
### Raid 5

Raid 5 creates a raid device with distributed parity and needs at least
3 drives.
The space efficiency is `1-(1/n)` and the fault tolerance is `1` drive failure.
The read performance is `n` and the write performance is `1/4` for single-sector
writes and `n-1` for full-stripe writes.
In the special case of 2 drives in a raid 5 it is functionally the same as
[raid 1](#raid-1).
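Creating a raid 5 array works like the raid 1 example. A minimal sketch,
assuming three example partitions `/dev/sda1`, `/dev/sdb1` and `/dev/sdc1`:

```shell
# Create a raid 5 array /dev/md0 from three example partitions.
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
# Verify the array state and watch the initial sync.
cat /proc/mdstat
```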