### 1. R6900 G5 VMD RAID 5 disk removal and re-insertion (hotplug)
1.1. Current Status
```bash
root@u22:~# mdadm --detail /dev/md126
/dev/md126:
Container : /dev/md/imsm0, member 0
Raid Level : raid5
Array Size : 2969335808 (2.77 TiB 3.04 TB)
Used Dev Size : 1484667904 (1415.89 GiB 1520.30 GB)
Raid Devices : 3
Total Devices : 3
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Layout : left-asymmetric
Chunk Size : 128K
Consistency Policy : resync
UUID : 7ecc383c:4989b6c0:da67edc3:9ee011c9
Number Major Minor RaidDevice State
2 259 0 0 active sync /dev/nvme0n1
1 259 1 1 active sync /dev/nvme1n1
0 259 2 2 active sync /dev/nvme2n1
root@u22:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md126 : active raid5 nvme0n1[2] nvme1n1[1] nvme2n1[0]
2969335808 blocks super external:/md127/0 level 5, 128k chunk, algorithm 0 [3/3] [UUU]
md127 : inactive nvme2n1[2](S) nvme1n1[1](S) nvme0n1[0](S)
15603 blocks super external:imsm
unused devices: <none>
```
1.2. Hot-remove the disk
```bash
root@u22:~# mdadm --detail /dev/md126
/dev/md126:
Container : /dev/md/imsm0, member 0
Raid Level : raid5
Array Size : 2969335808 (2.77 TiB 3.04 TB)
Used Dev Size : 1484667904 (1415.89 GiB 1520.30 GB)
Raid Devices : 3
Total Devices : 2
State : clean, degraded
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Layout : left-asymmetric
Chunk Size : 128K
Consistency Policy : resync
UUID : 7ecc383c:4989b6c0:da67edc3:9ee011c9
Number Major Minor RaidDevice State
2 259 0 0 active sync /dev/nvme0n1
1 259 1 1 active sync /dev/nvme1n1
- 0 0 2 removed
```
1.3. Insert the disk again
```bash
root@u22:~# mdadm --detail /dev/md126
/dev/md126:
Container : /dev/md/imsm0, member 0
Raid Level : raid5
Array Size : 2969335808 (2.77 TiB 3.04 TB)
Used Dev Size : 1484667904 (1415.89 GiB 1520.30 GB)
Raid Devices : 3
Total Devices : 3
State : clean, degraded, recovering
Active Devices : 2
Working Devices : 3
Failed Devices : 0
Layout : left-asymmetric
Chunk Size : 128K
Consistency Policy : resync
Rebuild Status : 0% complete
UUID : 7ecc383c:4989b6c0:da67edc3:9ee011c9
Number Major Minor RaidDevice State
2 259 0 0 active sync /dev/nvme0n1
1 259 1 1 active sync /dev/nvme1n1
3 259 2 2 spare rebuilding /dev/nvme2n1
```
1.4. Disk auto-rebuilding
```bash
root@u22:~# mdadm --detail /dev/md126
/dev/md126:
Container : /dev/md/imsm0, member 0
Raid Level : raid5
Array Size : 2969335808 (2.77 TiB 3.04 TB)
Used Dev Size : 1484667904 (1415.89 GiB 1520.30 GB)
Raid Devices : 3
Total Devices : 3
State : clean, degraded, recovering
Active Devices : 2
Working Devices : 3
Failed Devices : 0
Layout : left-asymmetric
Chunk Size : 128K
Consistency Policy : resync
Rebuild Status : 0% complete
UUID : 7ecc383c:4989b6c0:da67edc3:9ee011c9
Number Major Minor RaidDevice State
2 259 0 0 active sync /dev/nvme0n1
1 259 1 1 active sync /dev/nvme1n1
3 259 2 2 spare rebuilding /dev/nvme2n1
root@u22:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md126 : active raid5 nvme2n1[3] nvme0n1[2] nvme1n1[1]
2969335808 blocks super external:/md127/0 level 5, 128k chunk, algorithm 0 [3/2] [UU_]
[>....................] recovery = 0.7% (11696128/1484667904) finish=121.9min speed=201368K/sec
md127 : inactive nvme2n1[2](S) nvme1n1[1](S) nvme0n1[0](S)
15603 blocks super external:imsm
unused devices: <none>
```
1.5. Rebuilding complete
```bash
root@u22:~# mdadm --detail /dev/md126
/dev/md126:
Container : /dev/md/imsm0, member 0
Raid Level : raid5
Array Size : 2969335808 (2.77 TiB 3.04 TB)
Used Dev Size : 1484667904 (1415.89 GiB 1520.30 GB)
Raid Devices : 3
Total Devices : 3
State : clean, degraded, recovering
Active Devices : 2
Working Devices : 3
Failed Devices : 0
Layout : left-asymmetric
Chunk Size : 128K
Consistency Policy : resync
Rebuild Status : 98% complete
UUID : 7ecc383c:4989b6c0:da67edc3:9ee011c9
Number Major Minor RaidDevice State
2 259 0 0 active sync /dev/nvme0n1
1 259 1 1 active sync /dev/nvme1n1
3 259 2 2 spare rebuilding /dev/nvme2n1
root@u22:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md126 : active raid5 nvme2n1[3] nvme0n1[2] nvme1n1[1]
2969335808 blocks super external:/md127/0 level 5, 128k chunk, algorithm 0 [3/3] [UUU]
md127 : inactive nvme2n1[2](S) nvme1n1[1](S) nvme0n1[0](S)
15603 blocks super external:imsm
unused devices: <none>
```
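The rebuild after re-insertion runs in the background. Below is a minimal sketch for watching it finish, using only the device names and commands already shown in this log (`/dev/md126`, `/proc/mdstat`); adjust the names for your own system.

```bash
# Refresh /proc/mdstat every 30 seconds until the recovery line disappears
watch -n 30 cat /proc/mdstat

# Or query just the array state and rebuild percentage
mdadm --detail /dev/md126 | grep -E 'State :|Rebuild Status'
```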
### 2. R6900 G5 VMD: removing and adding a disk with mdadm commands
Original session log, unmodified:
```bash
root@u22:~# mdadm --help
mdadm is used for building, managing, and monitoring
Linux md devices (aka RAID arrays)
Usage: mdadm --create device options...
Create a new array from unused devices.
mdadm --assemble device options...
Assemble a previously created array.
mdadm --build device options...
Create or assemble an array without metadata.
mdadm --manage device options...
make changes to an existing array.
mdadm --misc options... devices
report on or modify various md related devices.
mdadm --grow options device
resize/reshape an active array
mdadm --incremental device
add/remove a device to/from an array as appropriate
mdadm --monitor options...
Monitor one or more array for significant changes.
mdadm device options...
Shorthand for --manage.
Any parameter that does not start with '-' is treated as a device name
or, for --examine-bitmap, a file name.
The first such name is often the name of an md device. Subsequent
names are often names of component devices.
For detailed help on the above major modes use --help after the mode
e.g.
mdadm --assemble --help
For general help on options use
mdadm --help-options
root@u22:~# mdam --manage --help
Command 'mdam' not found, did you mean:
command 'mda' from deb mailutils-mda (1:3.14-1)
command 'mdadm' from deb mdadm (4.2-0ubuntu1)
Try: apt install <deb name>
root@u22:~# mdadm --manage --help
Usage: mdadm [mode] arraydevice [options] <component devices...>
This usage is for managing the component devices within an array.
The --manage option is not needed and is assumed if the first argument
is a device name or a management option.
The first device listed will be taken to be an md array device, any
subsequent devices are (potential) components of that array.
Options that are valid with management mode are:
--add -a : hotadd subsequent devices to the array
--re-add : subsequent devices are re-added if there were
: recent members of the array
--remove -r : remove subsequent devices, which must not be active
--fail -f : mark subsequent devices a faulty
--set-faulty : same as --fail
--replace : mark device(s) to be replaced by spares. Once
: replacement completes, device will be marked faulty
--with : Indicate which spare a previous '--replace' should
: prefer to use
--run -R : start a partially built array
--stop -S : deactivate array, releasing all resources
--readonly -o : mark array as readonly
--readwrite -w : mark array as readwrite
root@u22:~# mdadm --manage -r /dev/md126
md126 md126p1 md126p2 md126p3
root@u22:~# --manage -r /dev/md126
md126 md126p1 md126p2 md126p3
root@u22:~# mdadm --manage -r /dev/md126 /dev/nvme2n1
mdadm: /dev/nvme2n1 does not appear to be an md device
root@u22:~# mdadm --manage -f /dev/md126 /dev/nvme2n1
mdadm: set /dev/nvme2n1 faulty in /dev/md126
root@u22:~# mdadm --detail /dev/md126
/dev/md126:
Container : /dev/md/imsm0, member 0
Raid Level : raid5
Array Size : 2969335808 (2.77 TiB 3.04 TB)
Used Dev Size : 1484667904 (1415.89 GiB 1520.30 GB)
Raid Devices : 3
Total Devices : 2
State : active, degraded
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Layout : left-asymmetric
Chunk Size : 128K
Consistency Policy : resync
UUID : 7ecc383c:4989b6c0:da67edc3:9ee011c9
Number Major Minor RaidDevice State
2 259 0 0 active sync /dev/nvme0n1
1 259 1 1 active sync /dev/nvme1n1
- 0 0 2 removed
root@u22:~# mdadm --manage --re-add /dev/md126 /dev/nvme2n1
mdadm: Cannot add disks to a 'member' array, perform this operation on the parent container
root@u22:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
loop0 7:0 0 61.9M 1 loop /snap/core20/1405
loop1 7:1 0 79.9M 1 loop /snap/lxd/22923
loop3 7:3 0 44.4M 1 loop /snap/snapd/23545
loop4 7:4 0 63.7M 1 loop /snap/core20/2434
loop5 7:5 0 89.4M 1 loop /snap/lxd/31333
sda 8:0 0 1.3T 0 disk
sdb 8:16 0 446.6G 0 disk
├─sdb1 8:17 0 100M 0 part
├─sdb5 8:21 0 4G 0 part
├─sdb6 8:22 0 4G 0 part
├─sdb7 8:23 0 119.9G 0 part
└─sdb8 8:24 0 318.6G 0 part
sdc 8:32 0 24.4T 0 disk
└─sdc1 8:33 0 24.4T 0 part
nvme0n1 259:0 0 1.5T 0 disk
├─md126 9:126 0 2.8T 0 raid5
│ ├─md126p1 259:3 0 1G 0 part /boot/efi
│ ├─md126p2 259:4 0 2G 0 part /boot
│ └─md126p3 259:5 0 2.8T 0 part
│ └─ubuntu--vg-ubuntu--lv 253:0 0 100G 0 lvm /
└─md127 9:127 0 0B 0 md
nvme1n1 259:1 0 1.5T 0 disk
├─md126 9:126 0 2.8T 0 raid5
│ ├─md126p1 259:3 0 1G 0 part /boot/efi
│ ├─md126p2 259:4 0 2G 0 part /boot
│ └─md126p3 259:5 0 2.8T 0 part
│ └─ubuntu--vg-ubuntu--lv 253:0 0 100G 0 lvm /
└─md127 9:127 0 0B 0 md
nvme2n1 259:2 0 1.5T 0 disk
└─md127 9:127 0 0B 0 md
root@u22:~# mdadm --manage --add /dev/md126 /dev/nvme2n1
mdadm: Cannot add disks to a 'member' array, perform this operation on the parent container
root@u22:~# mdadm --manage --replace /dev/md126 /dev/nvme2n1
mdadm: Cannot replace disks in a 'member' array, perform this operation on the parent container
root@u22:~# mdadm --detail /dev/md126
/dev/md126:
Container : /dev/md/imsm0, member 0
Raid Level : raid5
Array Size : 2969335808 (2.77 TiB 3.04 TB)
Used Dev Size : 1484667904 (1415.89 GiB 1520.30 GB)
Raid Devices : 3
Total Devices : 2
State : active, degraded
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Layout : left-asymmetric
Chunk Size : 128K
Consistency Policy : resync
UUID : 7ecc383c:4989b6c0:da67edc3:9ee011c9
Number Major Minor RaidDevice State
2 259 0 0 active sync /dev/nvme0n1
1 259 1 1 active sync /dev/nvme1n1
- 0 0 2 removed
root@u22:~# mdadm --manage --remove /dev/md126 /dev/nvme2n1
mdadm: Cannot remove disks from a 'member' array, perform this operation on the parent container
root@u22:~# mdadm --add /dev/md126 /dev/nvme2n1
mdadm: Cannot add disks to a 'member' array, perform this operation on the parent container
root@u22:~# ls /dev/md/
imsm0 Volume0_0 Volume0_0p1 Volume0_0p2 Volume0_0p3
root@u22:~# ls /dev/md/imsm0
/dev/md/imsm0
root@u22:~# ls /dev/md/imsm0 -l
lrwxrwxrwx 1 root root 8 Feb 13 01:46 /dev/md/imsm0 -> ../md127
root@u22:~# ls /dev/md/imsm0 -l^C
root@u22:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
loop0 7:0 0 61.9M 1 loop /snap/core20/1405
loop1 7:1 0 79.9M 1 loop /snap/lxd/22923
loop3 7:3 0 44.4M 1 loop /snap/snapd/23545
loop4 7:4 0 63.7M 1 loop /snap/core20/2434
loop5 7:5 0 89.4M 1 loop /snap/lxd/31333
sda 8:0 0 1.3T 0 disk
sdb 8:16 0 446.6G 0 disk
├─sdb1 8:17 0 100M 0 part
├─sdb5 8:21 0 4G 0 part
├─sdb6 8:22 0 4G 0 part
├─sdb7 8:23 0 119.9G 0 part
└─sdb8 8:24 0 318.6G 0 part
sdc 8:32 0 24.4T 0 disk
└─sdc1 8:33 0 24.4T 0 part
nvme0n1 259:0 0 1.5T 0 disk
├─md126 9:126 0 2.8T 0 raid5
│ ├─md126p1 259:3 0 1G 0 part /boot/efi
│ ├─md126p2 259:4 0 2G 0 part /boot
│ └─md126p3 259:5 0 2.8T 0 part
│ └─ubuntu--vg-ubuntu--lv 253:0 0 100G 0 lvm /
└─md127 9:127 0 0B 0 md
nvme1n1 259:1 0 1.5T 0 disk
├─md126 9:126 0 2.8T 0 raid5
│ ├─md126p1 259:3 0 1G 0 part /boot/efi
│ ├─md126p2 259:4 0 2G 0 part /boot
│ └─md126p3 259:5 0 2.8T 0 part
│ └─ubuntu--vg-ubuntu--lv 253:0 0 100G 0 lvm /
└─md127 9:127 0 0B 0 md
nvme2n1 259:2 0 1.5T 0 disk
└─md127 9:127 0 0B 0 md
root@u22:~# mdadm --help
mdadm is used for building, managing, and monitoring
Linux md devices (aka RAID arrays)
Usage: mdadm --create device options...
Create a new array from unused devices.
mdadm --assemble device options...
Assemble a previously created array.
mdadm --build device options...
Create or assemble an array without metadata.
mdadm --manage device options...
make changes to an existing array.
mdadm --misc options... devices
report on or modify various md related devices.
mdadm --grow options device
resize/reshape an active array
mdadm --incremental device
add/remove a device to/from an array as appropriate
mdadm --monitor options...
Monitor one or more array for significant changes.
mdadm device options...
Shorthand for --manage.
Any parameter that does not start with '-' is treated as a device name
or, for --examine-bitmap, a file name.
The first such name is often the name of an md device. Subsequent
names are often names of component devices.
For detailed help on the above major modes use --help after the mode
e.g.
mdadm --assemble --help
For general help on options use
mdadm --help-options
root@u22:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md126 : active raid5 nvme0n1[2] nvme1n1[1]
2969335808 blocks super external:/md127/0 level 5, 128k chunk, algorithm 0 [3/2] [UU_]
md127 : inactive nvme2n1[2](S) nvme1n1[1](S) nvme0n1[0](S)
15603 blocks super external:imsm
unused devices: <none>
root@u22:~# mdadm --manage --replace /dev/md/imsm0 /dev/nvme2n1
mdadm: --replace only supported for native metadata (0.90 or 1.x)
root@u22:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md126 : active raid5 nvme0n1[2] nvme1n1[1]
2969335808 blocks super external:/md127/0 level 5, 128k chunk, algorithm 0 [3/2] [UU_]
md127 : inactive nvme2n1[2](S) nvme1n1[1](S) nvme0n1[0](S)
15603 blocks super external:imsm
unused devices: <none>
root@u22:~# mdadm --manage --remove /dev/md/imsm0 /dev/nvme2n1
mdadm: hot removed /dev/nvme2n1 from /dev/md/imsm0
root@u22:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md126 : active raid5 nvme0n1[2] nvme1n1[1]
2969335808 blocks super external:/md127/0 level 5, 128k chunk, algorithm 0 [3/2] [UU_]
md127 : inactive nvme1n1[1](S) nvme0n1[0](S)
10402 blocks super external:imsm
unused devices: <none>
root@u22:~# mdadm --add /dev/md126 /dev/nvme2n1
mdadm: Cannot add disks to a 'member' array, perform this operation on the parent container
root@u22:~# mdadm --manage --add /dev/md/imsm0 /dev/nvme2n1
mdadm: added /dev/nvme2n1
root@u22:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md126 : active raid5 nvme2n1[3] nvme0n1[2] nvme1n1[1]
2969335808 blocks super external:/md127/0 level 5, 128k chunk, algorithm 0 [3/2] [UU_]
[>....................] recovery = 0.0% (639232/1484667904) finish=116.0min speed=213077K/sec
md127 : inactive nvme2n1[2](S) nvme1n1[1](S) nvme0n1[0](S)
11507 blocks super external:imsm
unused devices: <none>
root@u22:~# mdadm --detail /dev/md127
/dev/md127:
Version : imsm
Raid Level : container
Total Devices : 3
Working Devices : 3
UUID : 5c5f48fa:c9774165:32ea31ae:eaa9075e
Member Arrays : /dev/md/Volume0_0
Number Major Minor RaidDevice
- 259 2 - /dev/nvme2n1
- 259 1 - /dev/nvme1n1
- 259 0 - /dev/nvme0n1
root@u22:~# mdadm --detail /dev/md126
/dev/md126:
Container : /dev/md/imsm0, member 0
Raid Level : raid5
Array Size : 2969335808 (2.77 TiB 3.04 TB)
Used Dev Size : 1484667904 (1415.89 GiB 1520.30 GB)
Raid Devices : 3
Total Devices : 3
State : clean, degraded, recovering
Active Devices : 2
Working Devices : 3
Failed Devices : 0
Layout : left-asymmetric
Chunk Size : 128K
Consistency Policy : resync
Rebuild Status : 0% complete
UUID : 7ecc383c:4989b6c0:da67edc3:9ee011c9
Number Major Minor RaidDevice State
2 259 0 0 active sync /dev/nvme0n1
1 259 1 1 active sync /dev/nvme1n1
3 259 2 2 spare rebuilding /dev/nvme2n1
root@u22:~# mdadm --detail /dev/md126
/dev/md126:
Container : /dev/md/imsm0, member 0
Raid Level : raid5
Array Size : 2969335808 (2.77 TiB 3.04 TB)
Used Dev Size : 1484667904 (1415.89 GiB 1520.30 GB)
Raid Devices : 3
Total Devices : 3
State : clean, degraded, recovering
Active Devices : 2
Working Devices : 3
Failed Devices : 0
Layout : left-asymmetric
Chunk Size : 128K
Consistency Policy : resync
Rebuild Status : 3% complete
UUID : 7ecc383c:4989b6c0:da67edc3:9ee011c9
Number Major Minor RaidDevice State
2 259 0 0 active sync /dev/nvme0n1
1 259 1 1 active sync /dev/nvme1n1
3 259 2 2 spare rebuilding /dev/nvme2n1
root@u22:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md126 : active raid5 nvme2n1[3] nvme0n1[2] nvme1n1[1]
2969335808 blocks super external:/md127/0 level 5, 128k chunk, algorithm 0 [3/2] [UU_]
[>....................] recovery = 3.3% (50089648/1484667904) finish=118.5min speed=201661K/sec
md127 : inactive nvme2n1[2](S) nvme1n1[1](S) nvme0n1[0](S)
11507 blocks super external:imsm
unused devices: <none>
```
Reference material:

The error message `mdadm: Cannot add disks to a 'member' array, perform this operation on the parent container` means the disk was being added to a "member" RAID array; the operation has to be performed on the parent container instead.

Suggested approach:

1. Confirm the parent container. Check whether `/dev/md126` is a nested array by inspecting it with `mdadm --detail /dev/md126`. If `/dev/md126` is a member array that belongs to a parent container (for example `/dev/md/imsm0`), the operation must be run on that container.
2. Add the disk to the parent container. Once `/dev/md126` is confirmed to be a member array with `/dev/md/imsm0` as its parent container, add the disk to the container rather than directly to `/dev/md126`: `mdadm --add /dev/md/imsm0 /dev/nvme2n1` (substitute the actual parent container device).
3. Check the RAID status. Before operating, review the state of all arrays with `cat /proc/mdstat` to make sure there are no other problems.
4. Back up data. Back up important data before any RAID operation to guard against data loss.

Notes: if the parent container's name is unknown, `mdadm --detail --scan` lists details for all arrays. If the array is degraded, there is a risk of data loss or corruption, so recover or back up data first. If still unsure how to proceed, gather more detail about the arrays before acting.
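Combining the notes above with the commands that actually succeeded in the transcripts (sections 2 and 4), the working remove/re-add cycle on this IMSM/VMD setup looks as follows. This is a consolidated sketch using the device names from the log, not an authoritative procedure; back up data first.

```bash
# Mark the member disk faulty on the member array (md126)
mdadm --manage --fail /dev/md126 /dev/nvme2n1

# Remove and re-add must target the parent container (imsm0 / md127), not md126
mdadm --manage --remove /dev/md/imsm0 /dev/nvme2n1

# (Physically swap the disk here if it is genuinely bad.)

mdadm --manage --add /dev/md/imsm0 /dev/nvme2n1

# The member array begins rebuilding automatically; check progress
cat /proc/mdstat
```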
### 3. What is the relationship between md126 and md127 here?
```bash
root@u22:~# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINTS
loop0 7:0 0 61.9M 1 loop /snap/core20/1405
loop1 7:1 0 79.9M 1 loop /snap/lxd/22923
loop3 7:3 0 44.4M 1 loop /snap/snapd/23545
loop4 7:4 0 63.7M 1 loop /snap/core20/2434
loop5 7:5 0 89.4M 1 loop /snap/lxd/31333
sda 8:0 0 1.3T 0 disk
sdb 8:16 0 446.6G 0 disk
├─sdb1 8:17 0 100M 0 part
├─sdb5 8:21 0 4G 0 part
├─sdb6 8:22 0 4G 0 part
├─sdb7 8:23 0 119.9G 0 part
└─sdb8 8:24 0 318.6G 0 part
sdc 8:32 0 24.4T 0 disk
└─sdc1 8:33 0 24.4T 0 part
nvme0n1 259:0 0 1.5T 0 disk
├─md126 9:126 0 2.8T 0 raid5
│ ├─md126p1 259:3 0 1G 0 part /boot/efi
│ ├─md126p2 259:4 0 2G 0 part /boot
│ └─md126p3 259:5 0 2.8T 0 part
│ └─ubuntu--vg-ubuntu--lv 253:0 0 100G 0 lvm /
└─md127 9:127 0 0B 0 md
nvme1n1 259:1 0 1.5T 0 disk
├─md126 9:126 0 2.8T 0 raid5
│ ├─md126p1 259:3 0 1G 0 part /boot/efi
│ ├─md126p2 259:4 0 2G 0 part /boot
│ └─md126p3 259:5 0 2.8T 0 part
│ └─ubuntu--vg-ubuntu--lv 253:0 0 100G 0 lvm /
└─md127 9:127 0 0B 0 md
nvme2n1 259:2 0 1.5T 0 disk
├─md126 9:126 0 2.8T 0 raid5
│ ├─md126p1 259:3 0 1G 0 part /boot/efi
│ ├─md126p2 259:4 0 2G 0 part /boot
│ └─md126p3 259:5 0 2.8T 0 part
│ └─ubuntu--vg-ubuntu--lv 253:0 0 100G 0 lvm /
└─md127 9:127 0 0B 0 md
```
From the `lsblk` output, the relationship between md126 and md127 is as follows.

md126 is the RAID 5 array, built from the three NVMe disks (nvme0n1, nvme1n1 and nvme2n1). md127 is a "container": a virtual RAID device that manages md126. md127 has no storage capacity of its own, but it holds the metadata for md126.

In other words, md126 is a member of md127, and md127 is the parent container of md126. md127 exists to manage md126's metadata, such as the RAID level and the member disks.

This structure is typically created by Intel's Matrix Storage Manager (IMSM) or similar tooling to manage complex RAID configurations: md127 is the superblock container holding the array metadata, and md126 is the actual RAID array that stores the data.

When operating on the array (for example, adding a disk), perform the operation on the parent container md127 rather than directly on md126, e.g. `mdadm --add /dev/md127 /dev/nvmeXn1`, where `/dev/nvmeXn1` is the disk to be added.

Before doing so, make sure important data is backed up. If unsure, inspect md127 first with `mdadm --detail /dev/md127`, which shows its member arrays and metadata.
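When it is not obvious which md device is the container and which is the data volume, the relationship can be read back from mdadm itself. A short sketch built from commands already used in this log:

```bash
# List every array; the IMSM container and its member volume are both reported
mdadm --detail --scan

# The container reports "Raid Level : container" and its Member Arrays
mdadm --detail /dev/md127 | grep -E 'Raid Level|Member Arrays'

# The member volume reports which container it belongs to
mdadm --detail /dev/md126 | grep 'Container'

# /proc/mdstat shows the member as "super external:/md127/0"
# and the container as "super external:imsm"
cat /proc/mdstat
```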
### 4. VMD: manually mark a disk faulty, remove it from the RAID, then add it back (session log)
```bash
root@u22:~# mdadm --detail /dev/md126
/dev/md126:
Container : /dev/md/imsm0, member 0
Raid Level : raid5
Array Size : 2969335808 (2.77 TiB 3.04 TB)
Used Dev Size : 1484667904 (1415.89 GiB 1520.30 GB)
Raid Devices : 3
Total Devices : 3
State : clean
Active Devices : 3
Working Devices : 3
Failed Devices : 0
Layout : left-asymmetric
Chunk Size : 128K
Consistency Policy : resync
UUID : 7ecc383c:4989b6c0:da67edc3:9ee011c9
Number Major Minor RaidDevice State
2 259 0 0 active sync /dev/nvme0n1
1 259 1 1 active sync /dev/nvme1n1
3 259 2 2 active sync /dev/nvme2n1
root@u22:~# mdadm --manage --set-faulty /dev/md126 /dev/nvme2n1
mdadm: set /dev/nvme2n1 faulty in /dev/md126
root@u22:~# mdadm --detail /dev/md126
/dev/md126:
Container : /dev/md/imsm0, member 0
Raid Level : raid5
Array Size : 2969335808 (2.77 TiB 3.04 TB)
Used Dev Size : 1484667904 (1415.89 GiB 1520.30 GB)
Raid Devices : 3
Total Devices : 2
State : active, degraded
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Layout : left-asymmetric
Chunk Size : 128K
Consistency Policy : resync
UUID : 7ecc383c:4989b6c0:da67edc3:9ee011c9
Number Major Minor RaidDevice State
2 259 0 0 active sync /dev/nvme0n1
1 259 1 1 active sync /dev/nvme1n1
- 0 0 2 removed
root@u22:~# mdadm --detail /dev/md127
/dev/md127:
Version : imsm
Raid Level : container
Total Devices : 3
Working Devices : 3
UUID : 5c5f48fa:c9774165:32ea31ae:eaa9075e
Member Arrays : /dev/md/Volume0_0
Number Major Minor RaidDevice
- 259 2 - /dev/nvme2n1
- 259 1 - /dev/nvme1n1
- 259 0 - /dev/nvme0n1
root@u22:~# mdadm --manage --remove /dev/md126 /dev/nvme2n1
mdadm: Cannot remove disks from a 'member' array, perform this operation on the parent container
root@u22:~# mdadm --manage --remove /dev/md127 /dev/nvme2n1
mdadm: hot removed /dev/nvme2n1 from /dev/md127
root@u22:~# mdadm --detail /dev/md127
/dev/md127:
Version : imsm
Raid Level : container
Total Devices : 2
Working Devices : 2
UUID : 5c5f48fa:c9774165:32ea31ae:eaa9075e
Member Arrays : /dev/md/Volume0_0
Number Major Minor RaidDevice
- 259 1 - /dev/nvme1n1
- 259 0 - /dev/nvme0n1
root@u22:~# mdadm --detail /dev/md126
/dev/md126:
Container : /dev/md/imsm0, member 0
Raid Level : raid5
Array Size : 2969335808 (2.77 TiB 3.04 TB)
Used Dev Size : 1484667904 (1415.89 GiB 1520.30 GB)
Raid Devices : 3
Total Devices : 2
State : active, degraded
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Layout : left-asymmetric
Chunk Size : 128K
Consistency Policy : resync
UUID : 7ecc383c:4989b6c0:da67edc3:9ee011c9
Number Major Minor RaidDevice State
2 259 0 0 active sync /dev/nvme0n1
1 259 1 1 active sync /dev/nvme1n1
- 0 0 2 removed
root@u22:~# mdadm --manage --add /dev/md127 /dev/nvme2n1
mdadm: added /dev/nvme2n1
root@u22:~# mdadm --detail /dev/md126
/dev/md126:
Container : /dev/md/imsm0, member 0
Raid Level : raid5
Array Size : 2969335808 (2.77 TiB 3.04 TB)
Used Dev Size : 1484667904 (1415.89 GiB 1520.30 GB)
Raid Devices : 3
Total Devices : 3
State : active, degraded, recovering
Active Devices : 2
Working Devices : 3
Failed Devices : 0
Layout : left-asymmetric
Chunk Size : 128K
Consistency Policy : resync
Rebuild Status : 0% complete
UUID : 7ecc383c:4989b6c0:da67edc3:9ee011c9
Number Major Minor RaidDevice State
2 259 0 0 active sync /dev/nvme0n1
1 259 1 1 active sync /dev/nvme1n1
3 259 2 2 spare rebuilding /dev/nvme2n1
root@u22:~# mdadm --manage --add /dev/md127 /dev/nvme2n1
mdadm: Cannot open /dev/nvme2n1: Device or resource busy
root@u22:~# mdadm --detail /dev/md126
/dev/md126:
Container : /dev/md/imsm0, member 0
Raid Level : raid5
Array Size : 2969335808 (2.77 TiB 3.04 TB)
Used Dev Size : 1484667904 (1415.89 GiB 1520.30 GB)
Raid Devices : 3
Total Devices : 3
State : clean, degraded, recovering
Active Devices : 2
Working Devices : 3
Failed Devices : 0
Layout : left-asymmetric
Chunk Size : 128K
Consistency Policy : resync
Rebuild Status : 1% complete
UUID : 7ecc383c:4989b6c0:da67edc3:9ee011c9
Number Major Minor RaidDevice State
2 259 0 0 active sync /dev/nvme0n1
1 259 1 1 active sync /dev/nvme1n1
3 259 2 2 spare rebuilding /dev/nvme2n1
root@u22:~# cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] [raid10]
md126 : active raid5 nvme2n1[3] nvme0n1[2] nvme1n1[1]
2969335808 blocks super external:/md127/0 level 5, 128k chunk, algorithm 0 [3/2] [UU_]
[>....................] recovery = 2.1% (31992832/1484667904) finish=115.6min speed=209330K/sec
md127 : inactive nvme2n1[2](S) nvme1n1[1](S) nvme0n1[0](S)
11507 blocks super external:imsm
unused devices: <none>
```
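Once the disk is added back to the container, recovery takes roughly two hours at the ~200 MB/s rate shown in mdstat above. A hedged sketch for confirming the array is healthy again after the rebuild (assuming the `--wait` / `-W` misc option is available in this mdadm build):

```bash
# Block until any resync/recovery on md126 has finished
mdadm --wait /dev/md126

# All three members should be back to "active sync" and the state clean
mdadm --detail /dev/md126

# mdstat should show [3/3] [UUU] with no recovery line
cat /proc/mdstat
```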