Replacing a hard disk in a software RAID 1 on a Linux server

Replacing a failed disk in a software RAID mirror on a Linux server.

When maintaining physical servers, we occasionally have to replace failed components; in this case we are swapping out a hard disk with degrading S.M.A.R.T. indicators.
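Before doing anything else, it is worth confirming the disk's condition directly. A minimal check with smartmontools (assuming the package is installed and the suspect disk is /dev/sda, as it is further below) looks like this:

smartctl -H /dev/sda   # overall health self-assessment (PASSED/FAILED)
smartctl -A /dev/sda   # attribute table: watch Reallocated_Sector_Ct and Current_Pending_Sector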

Let's look at all the RAID arrays on this server. The configuration is software RAID 1 (mirror).


[root@web10 ~]# cat /proc/mdstat
Personalities : [raid1]
md3 : active raid1 sdc5[1]
      447895552 blocks super 1.1 [2/1] [_U]
      bitmap: 4/4 pages [16KB], 65536KB chunk

md2 : active raid1 sdc2[1]
      8380416 blocks super 1.1 [2/1] [_U]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md4 : active raid1 sdb1[2] sdd1[1]
      117155200 blocks super 1.2 [2/2] [UU]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md0 : active raid1 sdc3[1]
      511936 blocks super 1.0 [2/1] [_U]

md1 : active raid1 sdc1[1]
      31440896 blocks super 1.1 [2/1] [_U]
      bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>

In the output above, [_U] means the first member of each mirror is missing, while md4 with [UU] is intact. We check which disk partitions make up the first array.

[root@web10 ~]# mdadm -D /dev/md0
/dev/md0:
Version : 1.0
Creation Time : Mon Mar 7 19:14:08 2016
Raid Level : raid1
Array Size : 511936 (499.94 MiB 524.22 MB)
Used Dev Size : 511936 (499.94 MiB 524.22 MB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent

Update Time : Mon Feb 4 22:53:23 2019
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Name : web10:0
UUID : 8092249e:ea88c00a:6d312234:1492d59c
Events : 477

Number Major Minor RaidDevice State
0 8 3 0 active sync /dev/sda3
1 8 35 1 active sync /dev/sdc3

We check which disk partitions make up the second array.


[root@web10 ~]# mdadm -D /dev/md1
/dev/md1:
Version : 1.1
Creation Time : Mon Mar 7 19:14:09 2016
Raid Level : raid1
Array Size : 31440896 (29.98 GiB 32.20 GB)
Used Dev Size : 31440896 (29.98 GiB 32.20 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Tue Feb 5 16:01:29 2019
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Name : web10:1
UUID : 56404d4f:96d31abd:f03a1831:8ca8a4c5
Events : 1472

Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 33 1 active sync /dev/sdc1

We check which disk partitions make up the third array.


[root@web10 ~]# mdadm -D /dev/md2
/dev/md2:
Version : 1.1
Creation Time : Mon Mar 7 19:14:15 2016
Raid Level : raid1
Array Size : 8380416 (7.99 GiB 8.58 GB)
Used Dev Size : 8380416 (7.99 GiB 8.58 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Tue Feb 5 16:01:36 2019
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Name : web10:2
UUID : f6b9865e:91c8185c:d7ce89c8:19a0b145
Events : 11105

Number Major Minor RaidDevice State
0 8 2 0 active sync /dev/sda2
1 8 34 1 active sync /dev/sdc2

 

We check which disk partitions make up the fourth array.


[root@web10 ~]# mdadm -D /dev/md3
/dev/md3:
Version : 1.1
Creation Time : Mon Mar 7 19:14:19 2016
Raid Level : raid1
Array Size : 447895552 (427.15 GiB 458.65 GB)
Used Dev Size : 447895552 (427.15 GiB 458.65 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Tue Feb 5 16:01:38 2019
State : clean
Active Devices : 2
Working Devices : 2
Failed Devices : 0
Spare Devices : 0

Name : web10:3
UUID : 34af4745:0206e030:fe83d8b7:a55d8256
Events : 276861

Number Major Minor RaidDevice State
0 8 5 0 active sync /dev/sda5
1 8 37 1 active sync /dev/sdc5

 

We remove the failing disk, /dev/sda, from every RAID array it participates in.

[root@web10 ~]# mdadm --fail /dev/md0 /dev/sda3
mdadm: set /dev/sda3 faulty in /dev/md0
[root@web10 ~]# mdadm --remove /dev/md0 /dev/sda3
mdadm: hot removed /dev/sda3 from /dev/md0


[root@web10 ~]# mdadm --fail /dev/md1 /dev/sda1
mdadm: set /dev/sda1 faulty in /dev/md1
[root@web10 ~]# mdadm --remove /dev/md1 /dev/sda1
mdadm: hot removed /dev/sda1 from /dev/md1


[root@web10 ~]# mdadm --fail /dev/md2 /dev/sda2
mdadm: set /dev/sda2 faulty in /dev/md2
[root@web10 ~]# mdadm --remove /dev/md2 /dev/sda2
mdadm: hot removed /dev/sda2 from /dev/md2


[root@web10 ~]# mdadm --fail /dev/md3 /dev/sda5
mdadm: set /dev/sda5 faulty in /dev/md3
[root@web10 ~]# mdadm --remove /dev/md3 /dev/sda5
mdadm: hot removed /dev/sda5 from /dev/md3
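
For reference, the --fail and --remove steps can be collapsed into a single mdadm call per array; a sketch of the same four operations, assuming the identical device names:

mdadm /dev/md0 --fail /dev/sda3 --remove /dev/sda3
mdadm /dev/md1 --fail /dev/sda1 --remove /dev/sda1
mdadm /dev/md2 --fail /dev/sda2 --remove /dev/sda2
mdadm /dev/md3 --fail /dev/sda5 --remove /dev/sda5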

 

With the failed disk detached from every array, we can now physically pull it and install the new drive in its place.
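
To make sure the right drive leaves the chassis, its serial number can be read from the OS and matched against the sticker on the physical disk; a quick sketch (device name assumed as above):

smartctl -i /dev/sda | grep -i serial   # serial number as reported by the drive
ls -l /dev/disk/by-id/ | grep sda       # the by-id symlinks also encode model and serial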

 

We clone the partition table from the healthy disk (/dev/sdc) to the new one (/dev/sda).

[root@web10 src]# sfdisk -d /dev/sdc | sfdisk --force /dev/sda
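
sfdisk works with the MBR-style table used here; if the disks carried a GPT label instead, the equivalent clone could be done with sgdisk from the gdisk package (note that the target disk is the option argument):

sgdisk -R=/dev/sda /dev/sdc   # replicate sdc's partition table onto sda
sgdisk -G /dev/sda            # randomize GUIDs so the clone is unique

After the clone, each array still lists its first member as removed, so we verify the state and add the matching partition from the new disk back, one array at a time.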


[root@web10 src]# mdadm -D /dev/md0
/dev/md0:
Version : 1.0
Creation Time : Mon Mar 7 19:14:08 2016
Raid Level : raid1
Array Size : 511936 (499.94 MiB 524.22 MB)
Used Dev Size : 511936 (499.94 MiB 524.22 MB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent

Update Time : Tue Feb 5 16:06:26 2019
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0

Name : web10:0
UUID : 8092249e:ea88c00a:6d312234:1492d59c
Events : 480

Number Major Minor RaidDevice State
0 0 0 0 removed
1 8 35 1 active sync /dev/sdc3

[root@web10 src]# mdadm --add /dev/md0 /dev/sda3
mdadm: added /dev/sda3


[root@web10 src]# mdadm -D /dev/md1
/dev/md1:
Version : 1.1
Creation Time : Mon Mar 7 19:14:09 2016
Raid Level : raid1
Array Size : 31440896 (29.98 GiB 32.20 GB)
Used Dev Size : 31440896 (29.98 GiB 32.20 GB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Tue Feb 5 17:22:19 2019
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0

Name : web10:1
UUID : 56404d4f:96d31abd:f03a1831:8ca8a4c5
Events : 1787

Number Major Minor RaidDevice State
0 0 0 0 removed
1 8 33 1 active sync /dev/sdc1

[root@web10 src]# mdadm --add /dev/md1 /dev/sda1
mdadm: added /dev/sda1


[root@web10 src]# mdadm -D /dev/md1
/dev/md1:
Version : 1.1
Creation Time : Mon Mar 7 19:14:09 2016
Raid Level : raid1
Array Size : 31440896 (29.98 GiB 32.20 GB)
Used Dev Size : 31440896 (29.98 GiB 32.20 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Tue Feb 5 17:23:59 2019
State : clean, degraded, recovering
Active Devices : 1
Working Devices : 2
Failed Devices : 0
Spare Devices : 1

Rebuild Status : 3% complete

Name : web10:1
UUID : 56404d4f:96d31abd:f03a1831:8ca8a4c5
Events : 1792

Number Major Minor RaidDevice State
2 8 1 0 spare rebuilding /dev/sda1
1 8 33 1 active sync /dev/sdc1

[root@web10 src]# mdadm -D /dev/md2
/dev/md2:
Version : 1.1
Creation Time : Mon Mar 7 19:14:15 2016
Raid Level : raid1
Array Size : 8380416 (7.99 GiB 8.58 GB)
Used Dev Size : 8380416 (7.99 GiB 8.58 GB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Tue Feb 5 17:25:53 2019
State : clean, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0

Name : web10:2
UUID : f6b9865e:91c8185c:d7ce89c8:19a0b145
Events : 13518

Number Major Minor RaidDevice State
0 0 0 0 removed
1 8 34 1 active sync /dev/sdc2

[root@web10 src]# mdadm --add /dev/md2 /dev/sda2
mdadm: added /dev/sda2


[root@web10 src]# mdadm -D /dev/md2
/dev/md2:
Version : 1.1
Creation Time : Mon Mar 7 19:14:15 2016
Raid Level : raid1
Array Size : 8380416 (7.99 GiB 8.58 GB)
Used Dev Size : 8380416 (7.99 GiB 8.58 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Tue Feb 5 17:26:50 2019
State : clean, degraded, resyncing (DELAYED)
Active Devices : 1
Working Devices : 2
Failed Devices : 0
Spare Devices : 1

Name : web10:2
UUID : f6b9865e:91c8185c:d7ce89c8:19a0b145
Events : 13548

Number Major Minor RaidDevice State
2 8 2 0 spare rebuilding /dev/sda2
1 8 34 1 active sync /dev/sdc2

[root@web10 src]# mdadm -D /dev/md3
/dev/md3:
Version : 1.1
Creation Time : Mon Mar 7 19:14:19 2016
Raid Level : raid1
Array Size : 447895552 (427.15 GiB 458.65 GB)
Used Dev Size : 447895552 (427.15 GiB 458.65 GB)
Raid Devices : 2
Total Devices : 1
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Tue Feb 5 17:27:15 2019
State : active, degraded
Active Devices : 1
Working Devices : 1
Failed Devices : 0
Spare Devices : 0

Name : web10:3
UUID : 34af4745:0206e030:fe83d8b7:a55d8256
Events : 280718

Number Major Minor RaidDevice State
0 0 0 0 removed
1 8 37 1 active sync /dev/sdc5

[root@web10 src]# mdadm --add /dev/md3 /dev/sda5
mdadm: added /dev/sda5


[root@web10 src]# mdadm -D /dev/md3
/dev/md3:
Version : 1.1
Creation Time : Mon Mar 7 19:14:19 2016
Raid Level : raid1
Array Size : 447895552 (427.15 GiB 458.65 GB)
Used Dev Size : 447895552 (427.15 GiB 458.65 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Tue Feb 5 17:28:07 2019
State : clean, degraded, resyncing (DELAYED)
Active Devices : 1
Working Devices : 2
Failed Devices : 0
Spare Devices : 1

Name : web10:3
UUID : 34af4745:0206e030:fe83d8b7:a55d8256
Events : 280749

Number Major Minor RaidDevice State
2 8 5 0 spare rebuilding /dev/sda5
1 8 37 1 active sync /dev/sdc5

We check the resynchronization status between the disks.


[root@web10 src]# cat /proc/mdstat
Personalities : [raid1]
md3 : active raid1 sda5[2] sdc5[1]
      447895552 blocks super 1.1 [2/1] [_U]
        resync=DELAYED
      bitmap: 4/4 pages [16KB], 65536KB chunk

md2 : active raid1 sda2[2] sdc2[1]
      8380416 blocks super 1.1 [2/1] [_U]
        resync=DELAYED
      bitmap: 1/1 pages [4KB], 65536KB chunk

md4 : active raid1 sdb1[2] sdd1[1]
      117155200 blocks super 1.2 [2/2] [UU]
      bitmap: 1/1 pages [4KB], 65536KB chunk

md0 : active raid1 sda3[2] sdc3[1]
      511936 blocks super 1.0 [2/2] [UU]

md1 : active raid1 sda1[2] sdc1[1]
      31440896 blocks super 1.1 [2/1] [_U]
      [===========>.........]  recovery = 55.4% (17436544/31440896) finish=4.2min speed=54570K/sec
      bitmap: 1/1 pages [4KB], 65536KB chunk

unused devices: <none>
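
The rebuild can be watched live, and if the server can spare the I/O its speed ceiling can be raised through the standard md tunables; a sketch:

watch -n 5 cat /proc/mdstat                       # refresh the status every 5 seconds
echo 200000 > /proc/sys/dev/raid/speed_limit_max  # optional: raise the resync ceiling (KB/s)
echo 50000 > /proc/sys/dev/raid/speed_limit_min   # optional: raise the guaranteed minimum (KB/s)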

 

We install the bootloader on the new disk.


[root@web10 src]# grub-install --recheck /dev/sda
Probing devices to guess BIOS drives. This may take a long time.
Installation finished. No error reported.
This is the contents of the device map /boot/grub/device.map.
Check if this is correct or not. If any of the lines is incorrect,
fix it and re-run the script `grub-install'.

(fd0) /dev/fd0
(hd0) /dev/sda
(hd1) /dev/sdb
(hd2) /dev/sdc
(hd3) /dev/sdd
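
This machine uses legacy GRUB; on a distribution with GRUB 2 (CentOS 7+, current Debian/Ubuntu) the equivalent step would be, assuming the same target disk:

grub2-install /dev/sda   # RHEL/CentOS naming; on Debian/Ubuntu the binary is grub-install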

 

 

We verify that the bootloader is present on both disks.


[root@web10 ~]# grub
grub> find /grub/grub.conf
find /grub/grub.conf
(hd0,2)
(hd2,2)
grub>
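
(hd0,2) and (hd2,2) are the third partitions of /dev/sda and /dev/sdc per the device map above, so both mirror members now carry the boot files. As one more cross-check that boot code actually landed in the new disk's MBR (exact output varies by file version):

file -s /dev/sda   # should report something like "DOS/MBR boot sector"
file -s /dev/sdc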
