How to grow / expand a Linux software RAID array

  • February 16, 2017
  • linux

In this article we will show how to expand an existing software RAID array. Let's start with the current installation layout of this Linux server. The server that needs this work runs CentOS 7.3.

[root@backup2 ~]# cat /etc/redhat-release

CentOS Linux release 7.3.1611 (Core)

[root@backup2 ~]# df -h

Filesystem Size Used Avail Use% Mounted on
/dev/md2 2.0T 1.2G 1.9T 1% /
devtmpfs 32G 0 32G 0% /dev
tmpfs 32G 0 32G 0% /dev/shm
tmpfs 32G 41M 32G 1% /run
tmpfs 32G 0 32G 0% /sys/fs/cgroup
/dev/md1 488M 123M 339M 27% /boot
/dev/md3 1.7T 69M 1.6T 1% /home
tmpfs 6.3G 0 6.3G 0% /run/user/0
[root@backup2 ~]#

Let's list all the RAID arrays we have.

[root@backup2 ~]# cat /proc/mdstat

Personalities : [raid1]
md3 : active raid1 sdb4[0] sda4[1]
      1760974656 blocks super 1.2 [2/2] [UU]
      bitmap: 0/14 pages [0KB], 65536KB chunk

md2 : active raid1 sdb3[0] sda3[1]
      2111700992 blocks super 1.2 [2/2] [UU]
      bitmap: 1/16 pages [4KB], 65536KB chunk

md0 : active raid1 sdb1[0] sda1[1]
      33521664 blocks super 1.2 [2/2] [UU]

md1 : active raid1 sdb2[0] sda2[1]
      523712 blocks super 1.2 [2/2] [UU]

unused devices: <none>

 

The disk space is laid out as follows:

/dev/md0  swap partition   34.33 GB
/dev/md1  boot partition   536.28 MB
/dev/md2  root partition   2162.38 GB
/dev/md3  home directory   1803.24 GB

To work safely, the filesystems have to be unmounted, so we boot into a "rescue mode"; one option is to use a live CD.
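If the rescue environment has not assembled the arrays automatically, a minimal sketch for finding and assembling them is the following (device names can differ between systems):

# scan all disks for md superblocks and assemble any arrays that are found
mdadm --assemble --scan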

Let's look at the RAID arrays:

root@rescue ~ # cat /proc/mdstat
Personalities : [raid1]
md3 : active raid1 sdb4[0] sda4[1]
      1760974656 blocks super 1.2 [2/2] [UU]
      bitmap: 0/14 pages [0KB], 65536KB chunk

md2 : active raid1 sdb3[0] sda3[1]
      2111700992 blocks super 1.2 [2/2] [UU]
      bitmap: 0/16 pages [0KB], 65536KB chunk

md1 : active raid1 sdb2[0] sda2[1]
      523712 blocks super 1.2 [2/2] [UU]

md0 : active raid1 sdb1[0] sda1[1]
      33521664 blocks super 1.2 [2/2] [UU]

unused devices: <none>

Let's look at the first RAID array:

root@rescue ~ # mdadm -D /dev/md0
/dev/md0:
Version : 1.2
Creation Time : Tue Feb 14 10:35:13 2017
Raid Level : raid1
Array Size : 33521664 (31.97 GiB 34.33 GB)
Used Dev Size : 33521664 (31.97 GiB 34.33 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent

Update Time : Tue Feb 14 17:58:59 2017
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Name : rescue:0 (local to host rescue)
UUID : a916dfdd:13d7d97e:3212aa86:0e9e730c
Events : 22

Number Major Minor RaidDevice State
0 8 17 0 active sync /dev/sdb1
1 8 1 1 active sync /dev/sda1

We stop and remove RAID array md0, which was used for swap.

root@rescue ~ # mdadm --remove /dev/md0
root@rescue ~ # mdadm --stop /dev/md0
mdadm: stopped /dev/md0

Now we will stop and remove RAID array /dev/md3, which is used for the /home directory.
We are doing this reorganization right after a clean Linux install, so there are no files that need to be moved off this partition (if there were, see the sketch after the commands below).

root@rescue ~ # mdadm --remove /dev/md3
root@rescue ~ # mdadm --stop /dev/md3
mdadm: stopped /dev/md3
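If there had been data in /home, a rough sketch for preserving it before destroying the array would be to mount /dev/md3 and copy everything off first (the /mnt paths below are only illustrative):

# hypothetical example: save the /home contents elsewhere before removing md3
mkdir -p /mnt/oldhome /mnt/backup
mount /dev/md3 /mnt/oldhome
rsync -aHAX /mnt/oldhome/ /mnt/backup/home/
umount /mnt/oldhome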

Let's check the RAID devices again:

root@rescue ~ # cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sdb3[0] sda3[1]
      2111700992 blocks super 1.2 [2/2] [UU]
      bitmap: 0/16 pages [0KB], 65536KB chunk

md1 : active raid1 sdb2[0] sda2[1]
      523712 blocks super 1.2 [2/2] [UU]

unused devices: <none>

As mentioned, /dev/md1 is used for the /boot partition.
Let's take the second disk, /dev/sdb, out of the array.

root@rescue ~ # mdadm --fail /dev/md1 /dev/sdb2
mdadm: set /dev/sdb2 faulty in /dev/md1
root@rescue ~ # mdadm --remove /dev/md1 /dev/sdb2
mdadm: hot removed /dev/sdb2 from /dev/md1

We do the same for RAID array /dev/md2:

root@rescue ~ # mdadm --fail /dev/md2 /dev/sdb3
mdadm: set /dev/sdb3 faulty in /dev/md2
root@rescue ~ # mdadm --remove /dev/md2 /dev/sdb3
mdadm: hot removed /dev/sdb3 from /dev/md2

Let's wipe the RAID metadata left on the partitions of disk /dev/sdb.

root@rescue ~ # mdadm --zero-superblock /dev/sdb1
root@rescue ~ # mdadm --zero-superblock /dev/sdb2
root@rescue ~ # mdadm --zero-superblock /dev/sdb3
root@rescue ~ # mdadm --zero-superblock /dev/sdb4
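As an optional sanity check (a small sketch), --examine should now report that these partitions no longer carry an md superblock:

# each partition should now report that no md superblock is detected
mdadm --examine /dev/sdb1 /dev/sdb2 /dev/sdb3 /dev/sdb4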

We will create the new partitions on /dev/sdb using the parted tool.
root@rescue ~ # parted /dev/sdb

GNU Parted 3.2
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Model: ATA ST4000NM0024-1HT (scsi)
Disk /dev/sdb: 4001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: pmbr_boot

Number Start End Size File system Name Flags
5 1049kB 2097kB 1049kB bios_grub
1 2097kB 34.4GB 34.4GB raid
2 34.4GB 34.9GB 537MB raid
3 34.9GB 2197GB 2163GB raid
4 2197GB 4001GB 1803GB raid

(parted) rm 1
(parted) rm 2
(parted) rm 3
(parted) rm 4
(parted) rm 5
(parted) print
Model: ATA ST4000NM0024-1HT (scsi)
Disk /dev/sdb: 4001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: pmbr_boot

Number Start End Size File system Name Flags

There are no partitions left; we have deleted them all.

Now we create the bios_grub partition:

(parted) mkpart primary 1 3
(parted) print
Model: ATA ST4000NM0024-1HT (scsi)
Disk /dev/sdb: 4001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: pmbr_boot

Number Start End Size File system Name Flags
1 1049kB 3146kB 2097kB primary

(parted) set 1 bios_grub on
(parted) print
Model: ATA ST4000NM0024-1HT (scsi)
Disk /dev/sdb: 4001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: pmbr_boot

Number Start End Size File system Name Flags
1 1049kB 3146kB 2097kB primary bios_grub

 

Now we create the partition for /boot.

(parted) mkpart primary 3 600
(parted) print
Model: ATA ST4000NM0024-1HT (scsi)
Disk /dev/sdb: 4000787MB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: pmbr_boot

Number Start End Size File system Name Flags
1 1.05MB 3.15MB 2.10MB primary bios_grub
2 3.15MB 600MB 597MB primary

(parted) set 2 raid on

We also need a third partition, for root.

(parted) mkpart primary 600 100%
(parted) print
Model: ATA ST4000NM0024-1HT (scsi)
Disk /dev/sdb: 4000787MB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: pmbr_boot

Number Start End Size File system Name Flags
1 1.05MB 3.15MB 2.10MB primary bios_grub
2 3.15MB 600MB 597MB primary raid
3 600MB 4000786MB 4000186MB primary

(parted) set 3 raid on

This is what the partition layout on /dev/sdb now looks like:

(parted) print
Model: ATA ST4000NM0024-1HT (scsi)
Disk /dev/sdb: 4000787MB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: pmbr_boot

Number Start End Size File system Name Flags
1 1.05MB 3.15MB 2.10MB primary bios_grub
2 3.15MB 600MB 597MB primary raid
3 600MB 4000786MB 4000186MB primary raid

The partitions can also be viewed with the fdisk tool.

root@rescue ~ # fdisk -l /dev/sdb

Disk /dev/sdb: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 5A34E084-A683-468A-8DDB-48E4A8063963

Device Start End Sectors Size Type
/dev/sdb1 2048 6143 4096 2M BIOS boot
/dev/sdb2 6144 1171455 1165312 569M Linux RAID
/dev/sdb3 1171456 7814035455 7812864000 3.7T Linux RAID
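For reference, the same layout could also have been created non-interactively with parted; a sketch, assuming the GPT label and the pmbr_boot flag are already in place as shown above:

# scripted equivalent of the interactive parted session (sizes mirror it)
parted -s /dev/sdb \
  mkpart primary 1MiB 3MiB    set 1 bios_grub on \
  mkpart primary 3MiB 600MiB  set 2 raid on \
  mkpart primary 600MiB 100%  set 3 raid on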

A check of the RAID arrays:

root@rescue ~ # cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sda3[1]
      2111700992 blocks super 1.2 [2/1] [_U]
      bitmap: 0/16 pages [0KB], 65536KB chunk

md1 : active raid1 sda2[1]
      523712 blocks super 1.2 [2/1] [_U]

We add /dev/sdb2 to the /dev/md1 array.

root@rescue ~ # mdadm --add /dev/md1 /dev/sdb2
mdadm: added /dev/sdb2
root@rescue ~ #

The array syncs quickly.

root@rescue ~ # mdadm -D /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Tue Feb 14 10:35:13 2017
Raid Level : raid1
Array Size : 523712 (511.52 MiB 536.28 MB)
Used Dev Size : 523712 (511.52 MiB 536.28 MB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent

Update Time : Wed Feb 15 18:58:01 2017
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Name : rescue:1 (local to host rescue)
UUID : 72be8361:f7a22571:fccd0117:a182ca55
Events : 45

Number Major Minor RaidDevice State
2 8 18 0 active sync /dev/sdb2
1 8 2 1 active sync /dev/sda2

 

root@rescue ~ # mdadm -D /dev/md2

/dev/md2:
Version : 1.2
Creation Time : Tue Feb 14 10:35:13 2017
Raid Level : raid1
Array Size : 2111700992 (2013.88 GiB 2162.38 GB)
Used Dev Size : 2111700992 (2013.88 GiB 2162.38 GB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Wed Feb 15 18:43:39 2017
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0

Name : rescue:2 (local to host rescue)
UUID : f2692b79:0f4083e5:19053456:dfc8d883
Events : 2574

Number Major Minor RaidDevice State
0 0 0 0 removed
1 8 3 1 active sync /dev/sda3

We add /dev/sdb3 to /dev/md2.

root@rescue ~ # mdadm --add /dev/md2 /dev/sdb3
mdadm: added /dev/sdb3
root@rescue ~ #

Let's check the rebuild status of the RAID array:

root@rescue ~ # mdadm -D /dev/md2
/dev/md2:
Version : 1.2
Creation Time : Tue Feb 14 10:35:13 2017
Raid Level : raid1
Array Size : 2111700992 (2013.88 GiB 2162.38 GB)
Used Dev Size : 2111700992 (2013.88 GiB 2162.38 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Wed Feb 15 19:01:18 2017
State : clean, degraded, recovering
Active Devices : 1
Working Devices : 2
Failed Devices : 0
Spare Devices : 1

Rebuild Status : 1% complete

Name : rescue:2 (local to host rescue)
UUID : f2692b79:0f4083e5:19053456:dfc8d883
Events : 2601

Number Major Minor RaidDevice State
2 8 19 0 spare rebuilding /dev/sdb3
1 8 3 1 active sync /dev/sda3

Since the array is quite large, the rebuild will take a while.
Let's check the current speed limits configured in the Linux kernel:

root@rescue ~ # sysctl dev.raid.speed_limit_max
dev.raid.speed_limit_max = 200000
root@rescue ~ # sysctl dev.raid.speed_limit_min
dev.raid.speed_limit_min = 1000

Regarding the dev.raid.speed_limit_min parameter: this is the guaranteed minimum rebuild speed of a RAID array, in KiB/s per device.

We raise the minimum rebuild speed:

root@rescue ~ # echo 100000 > /proc/sys/dev/raid/speed_limit_min
root@rescue ~ # sysctl dev.raid.speed_limit_min
dev.raid.speed_limit_min = 100000
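The same can be done with sysctl -w instead of echoing into /proc (a sketch; the values are only examples). Note that values set this way do not survive a reboot unless they are also added to /etc/sysctl.conf.

# raise the guaranteed minimum and the ceiling for md resync speed
sysctl -w dev.raid.speed_limit_min=100000
sysctl -w dev.raid.speed_limit_max=250000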

We check the resync speed:
root@rescue ~ # cat /proc/mdstat

Personalities : [raid1]
md2 : active raid1 sdb3[2] sda3[1]
      2111700992 blocks super 1.2 [2/1] [_U]
      [=>...................]  recovery =  5.9% (126137088/2111700992) finish=164.7min speed=200924K/sec
      bitmap: 0/16 pages [0KB], 65536KB chunk

md1 : active raid1 sdb2[2] sda2[1]
      523712 blocks super 1.2 [2/2] [UU]

unused devices: <none>
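Rather than re-running cat /proc/mdstat by hand, the rebuild can also be followed continuously (a small sketch, assuming the watch utility is available in the rescue environment):

# refresh the md status every 5 seconds
watch -n 5 cat /proc/mdstat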

We will also raise the upper resync speed limit:

root@rescue ~ # echo 250000 > /proc/sys/dev/raid/speed_limit_max


root@rescue ~ # cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sdb3[2] sda3[1]
      2111700992 blocks super 1.2 [2/1] [_U]
      [=>...................]  recovery =  9.4% (199475200/2111700992) finish=146.7min speed=217149K/sec
      bitmap: 0/16 pages [0KB], 65536KB chunk

md1 : active raid1 sdb2[2] sda2[1]
      523712 blocks super 1.2 [2/2] [UU]

unused devices: <none>

[Graph: disk usage during the software RAID rebuild]


root@rescue ~ # cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sdb3[2] sda3[1]
      2111700992 blocks super 1.2 [2/2] [UU]
      bitmap: 0/16 pages [0KB], 65536KB chunk

md1 : active raid1 sdb2[2] sda2[1]
      523712 blocks super 1.2 [2/2] [UU]

unused devices: <none>

Once the rebuild has finished, we need to take disk /dev/sda out of the arrays.


root@rescue ~ # mdadm --fail /dev/md1 /dev/sda2
mdadm: set /dev/sda2 faulty in /dev/md1
root@rescue ~ # mdadm --remove /dev/md1 /dev/sda2
mdadm: hot removed /dev/sda2 from /dev/md1


root@rescue ~ # mdadm --fail /dev/md2 /dev/sda3
mdadm: set /dev/sda3 faulty in /dev/md2
root@rescue ~ # mdadm --remove /dev/md2 /dev/sda3
mdadm: hot removed /dev/sda3 from /dev/md2

We wipe the metadata from the /dev/sda partitions.

root@rescue ~ # mdadm --zero-superblock /dev/sda1
root@rescue ~ # mdadm --zero-superblock /dev/sda2
root@rescue ~ # mdadm --zero-superblock /dev/sda3
root@rescue ~ # mdadm --zero-superblock /dev/sda4

We copy the partition table from /dev/sdb to /dev/sda and then give /dev/sda new random GUIDs with sgdisk -G, so the two disks do not share identifiers.

root@rescue ~ # sgdisk --backup=table-sdb /dev/sdb
The operation has completed successfully.
root@rescue ~ # sgdisk --load-backup=table-sdb /dev/sda
The operation has completed successfully.
root@rescue ~ # sgdisk -G /dev/sda
The operation has completed successfully.
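For reference, sgdisk can also do this in a single step instead of the backup/restore pair above (a sketch; the disk named with --replicate is the destination, the last argument is the source):

# copy the partition table of /dev/sdb onto /dev/sda, then randomize the GUIDs on /dev/sda
sgdisk --replicate=/dev/sda /dev/sdb
sgdisk -G /dev/sda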

root@rescue ~ # fdisk -l /dev/sda

Disk /dev/sda: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: CF967025-E324-401D-BBC7-B8D336AB475C

Device Start End Sectors Size Type
/dev/sda1 2048 6143 4096 2M BIOS boot
/dev/sda2 6144 1171455 1165312 569M Linux RAID
/dev/sda3 1171456 7814035455 7812864000 3.7T Linux RAID


root@rescue ~ # parted /dev/sda
GNU Parted 3.2
Using /dev/sda
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) print
Model: ATA ST4000NM0024-1HT (scsi)
Disk /dev/sda: 4001GB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags: pmbr_boot

Number Start End Size File system Name Flags
1 1049kB 3146kB 2097kB primary bios_grub
2 3146kB 600MB 597MB primary raid
3 600MB 4001GB 4000GB primary raid

(parted) quit
root@rescue ~ #

Now we need to add /dev/sda back to the arrays.

root@rescue ~ # mdadm --add /dev/md1 /dev/sda2
mdadm: added /dev/sda2

root@rescue ~ # mdadm --add /dev/md2 /dev/sda3
mdadm: added /dev/sda3

We check that the rebuild has started:

root@rescue ~ # cat /proc/mdstat
Personalities : [raid1]
md2 : active raid1 sda3[3] sdb3[2]
      2111700992 blocks super 1.2 [2/1] [U_]
      [>....................]  recovery =  0.1% (2792192/2111700992) finish=163.6min speed=214784K/sec
      bitmap: 0/16 pages [0KB], 65536KB chunk

md1 : active raid1 sda2[3] sdb2[2]
      523712 blocks super 1.2 [2/2] [UU]

unused devices: <none>

Once the RAID arrays have finished rebuilding, we grow them to use the full size of the new, larger partitions.

root@rescue ~ # mdadm --grow /dev/md1 --size=max
mdadm: component size of /dev/md1 has been set to 582128K
unfreeze
root@rescue ~ # mdadm --grow /dev/md2 --size=max
mdadm: component size of /dev/md2 has been set to 3906300928K
unfreeze
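Growing /dev/md2 triggers a resync of the newly added space (visible further below as "active, resyncing"). If you prefer to wait for all md activity to finish before continuing, a small sketch:

# block until any resync/recovery on md2 has completed
mdadm --wait /dev/md2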

We first run a forced filesystem check on /dev/md1; resize2fs expects the filesystem to have been checked before it will resize it offline.

root@rescue ~ # e2fsck -f /dev/md1
e2fsck 1.42.12 (29-Aug-2014)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/md1: 337/131072 files (23.7% non-contiguous), 150553/523712 blocks

We resize the filesystem on the first array.

root@rescue ~ # resize2fs /dev/md1
resize2fs 1.42.12 (29-Aug-2014)
Resizing the filesystem on /dev/md1 to 582128 (1k) blocks.
The filesystem on /dev/md1 is now 582128 (1k) blocks long.

We run the same filesystem check on the root filesystem, /dev/md2.

root@rescue ~ # e2fsck -f /dev/md2
e2fsck 1.42.12 (29-Aug-2014)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/md2: 31909/131981312 files (0.2% non-contiguous), 8612991/527925248 blocks

We resize the filesystem on the second array.

root@rescue ~ # resize2fs /dev/md2
resize2fs 1.42.12 (29-Aug-2014)
Resizing the filesystem on /dev/md2 to 976575232 (4k) blocks.
The filesystem on /dev/md2 is now 976575232 (4k) blocks long.

Let's look at the new size of the first RAID array in detail.

root@rescue ~ # mdadm -D /dev/md1
/dev/md1:
Version : 1.2
Creation Time : Tue Feb 14 10:35:13 2017
Raid Level : raid1
Array Size : 582128 (568.58 MiB 596.10 MB)
Used Dev Size : 582128 (568.58 MiB 596.10 MB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent

Update Time : Thu Feb 16 08:26:39 2017
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Name : rescue:1 (local to host rescue)
UUID : 72be8361:f7a22571:fccd0117:a182ca55
Events : 71

Number Major Minor RaidDevice State
2 8 18 0 active sync /dev/sdb2
3 8 2 1 active sync /dev/sda2

We also take a detailed look at the second RAID array.

root@rescue ~ # mdadm -D /dev/md2
/dev/md2:
Version : 1.2
Creation Time : Tue Feb 14 10:35:13 2017
Raid Level : raid1
Array Size : 3906300928 (3725.34 GiB 4000.05 GB)
Used Dev Size : 3906300928 (3725.34 GiB 4000.05 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Thu Feb 16 08:41:28 2017
State : active, resyncing
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Resync Status : 59% complete

Name : rescue:2 (local to host rescue)
UUID : f2692b79:0f4083e5:19053456:dfc8d883
Events : 7159

Number Major Minor RaidDevice State
2 8 19 0 active sync /dev/sdb3
3 8 3 1 active sync /dev/sda3

 

Everything looks fine. Now we need to install the bootloader, so we mount the arrays and chroot into the installed system.


root@rescue ~ # mkdir /mnt/root
root@rescue ~ # mount /dev/md2 /mnt/root/
root@rescue ~ # mount /dev/md1 /mnt/root/boot/
root@rescue ~ # mount -t proc proc /mnt/root/proc/
root@rescue ~ # mount -t sysfs sys /mnt/root/sys
root@rescue ~ # mount -o bind /dev/ /mnt/root/dev
root@rescue ~ # chroot /mnt/root/

root@rescue / # grub2-install /dev/sda
Installing for i386-pc platform.
Installation finished. No error reported.

root@rescue / # grub2-install /dev/sdb
Installing for i386-pc platform.
Installation finished. No error reported.
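While still inside the chroot, it may also be worth regenerating the GRUB configuration so it reflects the new layout; a sketch, assuming the CentOS 7 default path:

# regenerate grub.cfg (CentOS 7 default location)
grub2-mkconfig -o /boot/grub2/grub.cfg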

Now we need to update the devices in the fstab file.

vim /mnt/root/etc/fstab


proc /proc proc defaults 0 0
devpts /dev/pts devpts gid=5,mode=620 0 0
tmpfs /dev/shm tmpfs defaults 0 0
sysfs /sys sysfs defaults 0 0
#/dev/md/0 none swap sw 0 0
#/dev/md/1 /boot ext3 defaults 0 0
#/dev/md/2 / ext4 defaults 0 0
#/dev/md/3 /home ext4 defaults 0 0

/dev/md1 /boot ext3 defaults 0 0
/dev/md2 / ext4 defaults 0 0
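Since md0 and md3 no longer exist, it may also be worth refreshing /etc/mdadm.conf while still in the chroot, so nothing keeps referencing the removed arrays; a sketch, assuming the default location:

# record only the arrays that still exist
mdadm --detail --scan > /etc/mdadm.conf

If the initramfs embeds a copy of this file, rebuilding it (for example with dracut -f) would pick up the change as well.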

We exit the chroot, reboot the server, and leave rescue mode.
After that we see the desired result: a larger array for the operating system partition.

[root@backup2 ~]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md2 3.6T 1.2G 3.4T 1% /
devtmpfs 32G 0 32G 0% /dev
tmpfs 32G 0 32G 0% /dev/shm
tmpfs 32G 17M 32G 1% /run
tmpfs 32G 0 32G 0% /sys/fs/cgroup
/dev/md1 543M 123M 392M 24% /boot
tmpfs 6.3G 0 6.3G 0% /run/user/0

Should you ever need server support, you can always turn to us with your questions.

Related articles:

Replacing a hard disk in a software RAID 1 on a Linux server
What is RAID and how do we use it?
