Debian – Delete dead HD in RAID1?

I am on Debian with RAID1, and one of the drives seems to be dead.

root@rescue ~ # cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sda2[0]
      486279424 blocks [2/1] [U_]

md0 : active raid1 sda1[0] sdb1[1]
      2104448 blocks [2/2] [UU]

unused devices: <none>
root@rescue ~ #

Is it possible to keep using only the healthy hard drive? Do I need to delete the RAID? If so, how?
Thank you!

It seems that /dev/sdb is not completely dead, but it may have some
intermittent failures or some bad blocks. You may have luck failing the
partition and re-adding it to the mirror using the current disk.
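
Before touching the array, you can optionally confirm this by looking for I/O errors on sdb in the kernel log. A minimal check, assuming the rescue system's dmesg still holds the relevant messages:

dmesg | grep -i sdb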

The method is as follows:

mdadm --remove /dev/md1 /dev/sdb2

(It may complain that /dev/sdb2 is not attached; that's fine.)
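
If mdadm instead refuses to remove the device because it still considers it active, you may need to mark it failed first (probably not needed here, since md1 already shows it as missing):

mdadm --fail /dev/md1 /dev/sdb2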

mdadm --add /dev/md1 /dev/sdb2

Then do a:

cat /proc/mdstat

and you can watch it rebuild, including an estimate of how long it will take.
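
If you want the output to refresh on its own, something like this should work (assuming the watch utility is available on the rescue system):

watch -n 5 cat /proc/mdstat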

See if that works. If not (/dev/sdb2 really is damaged), you need to
fail the drive out of every mirror it belongs to, remove sdb, add a
drive of the same size, partition the new drive, and add the partitions
back to the mirrors (see the sketch after the dd example below). If you
are not sure which physical drive is sdb, try this:

dd if=/dev/sdb of=/dev/null count=40000

Assuming there are disk-activity LEDs on the front of your server, the one that glows steadily during the disk dump above will be drive sdb. (Or you can flip this logic and make sda light up, to identify the drive NOT to remove.) It is safe to Control-C the dd command at any time once you have figured out which disk is which. The dd command just reads a stream from the disk and discards it; it doesn't cause anything to be written there, unless you got if= and of= mixed up.
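
If you do end up replacing the drive, a rough sketch of the steps is below. This assumes an MBR partition table, that the replacement disk also shows up as /dev/sdb, and that sdb1/sdb2 mirror sda1/sda2 as in your mdstat output; double-check the device names before running anything.

mdadm --fail /dev/md0 /dev/sdb1      # only needed if md0 still lists sdb1 as active
mdadm --remove /dev/md0 /dev/sdb1
mdadm --remove /dev/md1 /dev/sdb2    # md1 already shows sdb2 as missing, so this may just complain

# power down, swap the drive, boot again, then copy sda's partition table to the new disk
sfdisk -d /dev/sda | sfdisk /dev/sdb

mdadm --add /dev/md0 /dev/sdb1
mdadm --add /dev/md1 /dev/sdb2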
