Degraded partition in a RAID 5 array

I have a server running Debian Squeeze with a RAID 5 array of three 500 GB drives. I did not set it up myself. At startup, one partition in the RAID array appears to be in a very bad state:

md: bind
md: bind
md: bind
md: kicking non-fresh sda2 from array!
md: unbind<sda2>
md: export_rdev(sda2)
raid5: device sdb2 operational as raid disk 1
raid5: device sdc2 operational as raid disk 2
raid5: allocated 3179kB for md1
1: w=1 pa=0 pr=3 m=1 a=2 r=3 op1=0 op2=0
2: w=2 pa=0 pr=3 m=1 a=2 r=3 op1=0 op2=0
raid5: raid level 5 set md1 active with 2 out of 3 devices, algorithm 2
RAID5 conf printout:
--- rd:3 wd:2
disk 1, o:1, dev:sdb2
disk 2, o:1, dev:sdc2
md1: detected capacity change from 0 to 980206485504
md1: unknown partition table

mdstat also shows the missing partition:

Personalities: [raid1] [raid6] [raid5] [raid4] 
md1: active raid5 sdb2[1] sdc2[2]
957232896 blocks level 5, 64k chunk, algorithm 2 [3/2] [_UU]

md0: active raid1 sda1[0] sdc1[2](S) sdb1[1]
9767424 blocks [2/2] [UU]

When I run sudo mdadm -D, the partition shows up as removed and the array is degraded.

/dev/md1:
Version: 0.90
Creation Time: Mon Jun 30 00:09:01 2008
Raid Level: raid5
Array Size: 957232896 (912.89 GiB 980.21 GB)
Used Dev Size: 478616448 (456.44 GiB 490.10 GB)
Raid Devices: 3
Total Devices: 2
Preferred Minor: 1
Persistence: Superblock is persistent

Update Time: Thu Aug 11 16:58:50 2011
State: clean, degraded
Active Devices: 2
Working Devices: 2
Failed Devices: 0
Spare Devices: 0

Layout: left-symmetric
Chunk Size: 64K

UUID: 03205c1c:cef34d5c:5f1c2cc0:8830ac2b
Events: 0.275646

Number Major Minor RaidDevice State
0 0 0 0 removed
1 8 18 1 active sync /dev/sdb2
2 8 34 2 active sync /dev/sdc2

/dev/md0:
Version: 0.90
Creation Time: Mon Jun 30 00:08:50 2008
Raid Level: raid1
Array Size: 9767424 (9.31 GiB 10.00 GB)
Used Dev Size: 9767424 (9.31 GiB 10.00 GB)
Raid Devices: 2
Total Devices: 3
Preferred Minor: 0
Persistence: Superblock is persistent

Update Time: Thu Aug 11 17:21:20 2011
State: active
Active Devices: 2
Working Devices: 3
Failed Devices: 0
Spare Devices: 1

UUID: f824746f:143df641:374de2f8:2f9d2e62
Events: 0.93

Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
2 8 33 - spare /dev/sdc1

But md0 seems to be okay. So what does all this tell me? Is the disk faulty even though md0 still works? And if not, can I simply re-add /dev/sda2 to the md1 array to fix the problem?

The R in RAID stands for redundancy.

RAID 5 is N+1 redundancy: if you lose one disk you are at N, and as long as you do not lose another one the system keeps working. Lose a second disk and you are at N-1, at which point your universe collapses (or at least you lose a lot of data).

As SvenW said, replace the disk as soon as possible. (Follow your distribution's instructions for replacing a disk in an md RAID array, and for God's sake make sure you replace the correct disk! Pulling one of the active disks will really ruin your day.)
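For reference, here is a minimal sketch of what that replacement usually looks like with mdadm. The device names simply mirror the output above, and the exact sequence depends on your setup, so treat it as an illustration rather than a recipe:

# sda1 is still an active member of md0, so mark it faulty and remove it
# before pulling the physical drive (sda2 was already kicked out of md1):
mdadm --manage /dev/md0 --fail /dev/sda1
mdadm --manage /dev/md0 --remove /dev/sda1

# After swapping the drive, copy the partition layout from a healthy member
# (sfdisk handles MBR disks like these):
sfdisk -d /dev/sdb | sfdisk /dev/sda

# Add the new partitions back; md starts rebuilding automatically:
mdadm --manage /dev/md0 --add /dev/sda1
mdadm --manage /dev/md1 --add /dev/sda2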
Also note that when you replace a disk in a RAID 5 array, rebuilding onto the new disk generates a lot of disk activity (lots of reads from the surviving disks and lots of writes to the new one). This has two main implications:

1. Your system will be slow during the rebuild. How slow depends on your disks and your disk I/O subsystem. (See the sketch below for how to keep an eye on the rebuild's progress.)
2. You may lose another disk during or shortly after the rebuild. (All that disk I/O sometimes triggers enough errors on another drive that the controller declares it "bad".)
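On point 1, you can watch the rebuild and, if necessary, trade rebuild speed against foreground I/O through the standard md tunables; this is a generic sketch, not anything specific to this server:

# Watch the rebuild progress
watch cat /proc/mdstat
mdadm --detail /dev/md1          # shows a "Rebuild Status: NN% complete" line

# Optionally raise or cap the resync rate (values are in KiB/s)
sysctl dev.raid.speed_limit_min
sysctl -w dev.raid.speed_limit_max=50000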

The chance of #2 goes up with the number of disks in your array and follows the standard "bathtub curve" of hard-drive mortality. This is part of why you should have backups, and one of the many reasons you hear the "RAID is not a backup" mantra repeated so often on ServerFault.
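Back to the original question of whether /dev/sda2 can simply be re-added: a member kicked as "non-fresh" is sometimes the result of an unclean shutdown rather than a dying drive. Here is a hedged sketch of how one might check the drive first, assuming smartmontools is installed; if SMART reports reallocated or pending sectors, replace the disk as described above instead of reusing it:

# Check the drive's health before trusting it again
smartctl -H /dev/sda
smartctl -a /dev/sda | grep -i -E 'reallocated|pending|uncorrect'

# If it looks clean, add the partition back; with 0.90 superblocks and no
# write-intent bitmap this triggers a full resync of that member:
mdadm --manage /dev/md1 --add /dev/sda2
cat /proc/mdstat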
