Saturday 17 February 2018

linux - Simple mdadm RAID 1 not activating spare


I had created two 2TB HDD partitions (/dev/sdb1 and /dev/sdc1) in a RAID 1 array called /dev/md0 using mdadm on Ubuntu 12.04 LTS Precise Pangolin.


The command sudo mdadm --detail /dev/md0 used to indicate both drives as active sync.


Then, for testing, I failed /dev/sdb1, removed it, and then added it again with the command sudo mdadm /dev/md0 --add /dev/sdb1.
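For reference, the full test sequence would have looked something like this (a sketch of the steps just described, using the same device names):

sudo mdadm /dev/md0 --fail /dev/sdb1    # mark the member as faulty
sudo mdadm /dev/md0 --remove /dev/sdb1  # remove it from the array
sudo mdadm /dev/md0 --add /dev/sdb1     # add it back, triggering a rebuild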


watch cat /proc/mdstat showed a progress bar of the array rebuilding, but I wasn't going to spend hours watching it, so I assumed that the software knew what it was doing.


After the progress bar was no longer showing, cat /proc/mdstat displays:


md0 : active raid1 sdb1[2](S) sdc1[1]
      1953511288 blocks super 1.2 [2/1] [U_]

(The (S) marks /dev/sdb1 as a spare; [2/1] means the array expects two devices but only one is active, and [U_] shows the second mirror slot is down.)

And sudo mdadm --detail /dev/md0 shows:


/dev/md0:
Version : 1.2
Creation Time : Sun May 27 11:26:05 2012
Raid Level : raid1
Array Size : 1953511288 (1863.01 GiB 2000.40 GB)
Used Dev Size : 1953511288 (1863.01 GiB 2000.40 GB)
Raid Devices : 2
Total Devices : 2
Persistence : Superblock is persistent

Update Time : Mon May 28 11:16:49 2012
State : clean, degraded
Active Devices : 1
Working Devices : 2
Failed Devices : 0
Spare Devices : 1

Name : Deltique:0 (local to host Deltique)
UUID : 49733c26:dd5f67b5:13741fb7:c568bd04
Events : 32365

Number   Major   Minor   RaidDevice   State
   1       8       33        0        active sync   /dev/sdc1
   1       0        0        1        removed

   2       8       17        -        spare         /dev/sdb1

I've been told that mdadm automatically replaces removed drives with spares, but /dev/sdb1 isn't being moved into the expected position, RaidDevice 1.
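One way to dig into why that slot is not being filled (a diagnostic sketch, not something tried in the original post) is to compare the component superblocks directly:

sudo mdadm --examine /dev/sdb1 /dev/sdc1   # shows each member's role, state, and event count

If /dev/sdb1's superblock reports it as a spare with a stale event count, that would be consistent with assembly treating it as a spare every time.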




UPDATE (30 May 2012): A badblocks destructive read-write test of the entire /dev/sdb yielded no errors as expected; both HDDs are new.
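The test was presumably run along these lines (mirroring the badblocks invocation used on /dev/sdc in the 06 June update below; -w selects the destructive write-mode test):

sudo badblocks -w -p 1 /dev/sdb -s -v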


As of the latest edit, I assembled the array with this command:


sudo mdadm --assemble --force --no-degraded /dev/md0 /dev/sdb1 /dev/sdc1

The output was:


mdadm: /dev/md0 has been started with 1 drive (out of 2) and 1 rebuilding.

Rebuilding looks like it's progressing normally:


md0 : active raid1 sdc1[1] sdb1[2]
      1953511288 blocks super 1.2 [2/1] [U_]
      [>....................]  recovery =  0.6% (13261504/1953511288) finish=2299.7min speed=14060K/sec

unused devices: <none>

I'm now waiting on this rebuild, but I expect /dev/sdb1 to end up as a spare again, just like the five or six previous rebuild attempts.




UPDATE (31 May 2012): Yeah, it's still a spare. Ugh!




UPDATE (01 June 2012): I'm trying Adrian Kelly's suggested command:


sudo mdadm --assemble --update=resync /dev/md0 /dev/sdb1 /dev/sdc1

Waiting on the rebuild now...




UPDATE (02 June 2012): Nope, still a spare...




UPDATE (04 June 2012): P.B. brought up a concern that I had overlooked: perhaps /dev/sdc1 is encountering I/O errors. I hadn't bothered to check /dev/sdc1 because it appeared to be working just fine and was brand new, but I/O errors towards the end of the drive are a plausible possibility.


I bought these HDDs on sale, so it would be no surprise if one of them were already failing. Plus, neither of them has support for S.M.A.R.T., so no wonder they were so cheap...
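A quick way to check whether the kernel has already logged I/O errors against the suspect drive (my addition, not a step from the original post):

dmesg | grep -i sdc    # kernel messages mentioning the suspect drive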


Here is the data recovery procedure I just made up and am following:



  1. sudo mdadm /dev/md0 --fail /dev/sdb1 so that I can take out /dev/sdb1.

  2. sudo mdadm /dev/md0 --remove /dev/sdb1 to remove /dev/sdb1 from the array.

  3. /dev/sdc1 is mounted at /media/DtkBk

  4. Format /dev/sdb1 as ext4 (this step and the next are sketched as commands after this list).

  5. Mount /dev/sdb1 to /media/DtkBkTemp.

  6. cd /media to work in that area.

  7. sudo chown deltik DtkBkTemp to give me (username deltik) rights to the partition.

  8. Copy all files and directories: sudo rsync -avzHXShP DtkBk/* DtkBkTemp
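For reference, steps 4 and 5 could be carried out like this (a sketch; mkfs.ext4 and the mount point follow the names in the list above):

sudo mkfs.ext4 /dev/sdb1               # step 4: format as ext4
sudo mkdir -p /media/DtkBkTemp         # create the mount point if it doesn't exist
sudo mount /dev/sdb1 /media/DtkBkTemp  # step 5: mount the fresh filesystem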




UPDATE (06 June 2012): I did a badblocks destructive write-mode test of /dev/sdc, using the following procedure:



  1. sudo umount /media/DtkBk to allow tearing down of the array.

  2. sudo mdadm --stop /dev/md0 to stop the array.

  3. sudo badblocks -w -p 1 /dev/sdc -s -v to wipe the suspect hard drive, and in the process, check for I/O errors. If there are I/O errors, that is not a good sign. Hopefully, I can get a refund...


I have now confirmed that there are no input/output issues on either HDD.


From all this investigating, my two original questions still stand.




My questions are:



  1. Why isn't the spare drive becoming active sync?

  2. How can I make the spare drive become active?



Answer



The command below simply chucks the drive into the array without actually doing anything with it; that is, the drive becomes a member of the array but is not active in it. By default, this turns it into a spare:


sudo mdadm /dev/md0 --add /dev/sdb1

If you have a spare, you can press it into service by forcing the array's active drive count to grow. With 3 drives and 2 expected to be active, you need to increase the active count to 3.


mdadm --grow /dev/md0 --raid-devices=3

The RAID driver will notice that you are "short" a drive and will look for a spare. Finding the spare, it will integrate it into the array as an active drive. Open a spare terminal and let this rather crude command line run in it to keep tabs on the re-sync progress. Be sure to type it as one line or use the line-continuation character (\); once the rebuild finishes, just type Ctrl-C in the terminal.


while true; do sleep 60; clear; sudo mdadm --detail /dev/md0; echo; cat /proc/mdstat; done
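The same polling can also be done with watch, which already appeared earlier in the question (a suggested alternative, not part of the original answer):

sudo watch -n 60 'mdadm --detail /dev/md0; echo; cat /proc/mdstat'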

Your array will now have two active drives that are in sync, but because there are not 3 drives, it will not be 100% clean. Remove the failed drive, then resize the array. Note that the --grow flag is a bit of a misnomer - it can mean either grow or shrink:


sudo mdadm /dev/md0 --fail /dev/{failed drive}
sudo mdadm /dev/md0 --remove /dev/{failed drive}
sudo mdadm --grow /dev/md0 --raid-devices=2

With regard to errors, a link problem with the drive (i.e. the PATA/SATA port, cable, or drive connector) is not enough to trigger a hot-spare failover, because the kernel will typically switch to using the other "good" drive while it resets the link to the "bad" one. I know this because I run a 3-drive array, 2 hot, 1 spare, and one of the drives recently decided to barf up a bit in the logs. When I tested all the drives in the array, all 3 passed the "long" version of the SMART test, so it isn't a problem with the platters, mechanical components, or the onboard controller - which leaves a flaky link cable or a bad SATA port. Perhaps this is what you are seeing. Try switching the drive to a different motherboard port, or using a different cable, and see if it improves.
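If the drives support it, the SMART long self-test mentioned above can be run with smartmontools (assuming smartctl is installed; the question's drives reportedly lack S.M.A.R.T., so this applies to setups like the answerer's):

sudo smartctl -t long /dev/sdc   # start the extended self-test in the background
sudo smartctl -a /dev/sdc        # later: review the test result and attributes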




A follow-up: I completed my expansion of the mirror to 3 drives, failed and removed the flaky drive from the md array, hot-swapped the cable for a new one (the motherboard supports this) and re-added the drive. Upon re-add, it immediately started a re-sync of the drive. So far, not a single error has appeared in the log despite the drive being heavily used. So, yes, drive cables can go flaky.

