HowTo: Fix an inactive Linux md RAID array state
This applies when you can assemble a previously created RAID array, but for some reason it comes up in the inactive state, and/or all the drives in the md array are marked with (S), which means spare drive.
In my case there was a RAID5 array of 4 drives with one failed drive. At first the array ran fine with 3 out of the 4 drives, but after some terabytes of data had been copied, one of the remaining drives started printing errors and finally degraded and stopped the array (in my case it was /dev/md2).
So what to do:
first investigate:
dmesg
cat /proc/mdstat
mdadm --examine -v /dev/md2
mdadm --detail -v /dev/md2
mdadm --examine -v /dev/sd[b,c,d]3
then try to assemble all the arrays (there were 3 arrays built from partitions on those 3 working hard drives):
mdadm --assemble --scan   # it will scan all drives for md magic numbers and try to run any RAID arrays it finds.
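If the scan does not bring a particular array up, you can also assemble it by naming the array and its member partitions explicitly (the device names below are only an example, take yours from the --examine output above):

mdadm --assemble /dev/md2 /dev/sdb3 /dev/sdc3 /dev/sdd3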
then see what’s what:
cat /proc/mdstat
mdadm --detail /dev/md2
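For reference, an inactive array whose members all came up as spares typically shows something along these lines in /proc/mdstat (device names, member indices and the block count below are made up for illustration):

md2 : inactive sdb3[0](S) sdc3[1](S) sdd3[2](S) sde3[3](S)
      2929890816 blocks super 1.2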
For /dev/md2 I see that the array state is inactive and all 4 drives are marked with (S). So, here is what to do to bring this array back to life:
mdadm --stop /dev/md2
mdadm -A --force /dev/md2

voila!
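If mdadm cannot work out the members of /dev/md2 on its own (for example there is no matching ARRAY entry in mdadm.conf), append the member partitions to the command, just like in the explicit assemble above (device names are again only an example):

mdadm --assemble --force /dev/md2 /dev/sdb3 /dev/sdc3 /dev/sdd3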
you can run fsck before mounting the filesystem.
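For example, assuming the array holds an ext4 filesystem (adjust the tool to whatever filesystem you actually use), a read-only pass first is the safer option:

fsck.ext4 -n /dev/md2   # -n: check only, answer "no" to all repair prompts
fsck.ext4 /dev/md2      # real check/repair once you are happy with the array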
NOTE: in my case, the partition which caused the array to fail was reported with a wrong (older) timestamp in the output of the mdadm --examine/--detail -v commands above. That showed me that no data on the remaining members was out of sync, which is why I bravely used --force to activate the array.
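A quick way to compare the members is to filter the examine output for the update time and event counter; if the surviving members agree with each other and only the failed one lags behind, forcing the assembly is a reasonable risk (device names are an example):

mdadm --examine /dev/sd[b,c,d]3 | grep -E '/dev/|Update Time|Events'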