<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://support.qbpro.ru/index.php?action=history&amp;feed=atom&amp;title=HowTo%3A_Fix_inactive_linux_md_raid_array_state</id>
	<title>HowTo: Fix inactive linux md raid array state - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://support.qbpro.ru/index.php?action=history&amp;feed=atom&amp;title=HowTo%3A_Fix_inactive_linux_md_raid_array_state"/>
	<link rel="alternate" type="text/html" href="https://support.qbpro.ru/index.php?title=HowTo:_Fix_inactive_linux_md_raid_array_state&amp;action=history"/>
	<updated>2026-05-14T15:17:21Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.38.1</generator>
	<entry>
		<id>https://support.qbpro.ru/index.php?title=HowTo:_Fix_inactive_linux_md_raid_array_state&amp;diff=3731&amp;oldid=prev</id>
		<title>Vix: New page: «When you can assemble a previously created RAID array, but for some reason it runs in an inactive state and/or all the drives in the md array are marked with (S), which means SPARE DRIVE.  In my case there was a raid5 array of 4 drives with one failed drive. So at first the raid was successfully run with 3 out of 4 drives, but after some terabytes of data were copied, one of the running drives started to print errors and finally degraded and stopped the array ( in my case i...»</title>
		<link rel="alternate" type="text/html" href="https://support.qbpro.ru/index.php?title=HowTo:_Fix_inactive_linux_md_raid_array_state&amp;diff=3731&amp;oldid=prev"/>
		<updated>2023-06-05T15:23:34Z</updated>

		<summary type="html">&lt;p&gt;New page: «When you can assemble a previously created RAID array, but for some reason it runs in an inactive state and/or all the drives in the md array are marked with (S), which means SPARE DRIVE.  In my case there was a raid5 array of 4 drives with one failed drive. So at first the raid was successfully run with 3 out of 4 drives, but after some terabytes of data were copied, one of the running drives started to print errors and finally degraded and stopped the array ( in my case i...»&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;When you can assemble a previously created RAID array, but for some reason it runs in an inactive state and/or all the drives in the md array are marked with (S), which means SPARE DRIVE.&lt;br /&gt;
&lt;br /&gt;
In my case there was a raid5 array of 4 drives with one failed drive. So at first the raid was successfully run with 3 out of 4 drives, but after some terabytes of data were copied, one of the running drives started to print errors and finally degraded and stopped the array (in my case it was /dev/md2).&lt;br /&gt;
&lt;br /&gt;
So what to do:&lt;br /&gt;
&lt;br /&gt;
First, investigate:&lt;br /&gt;
&lt;br /&gt;
 dmesg&lt;br /&gt;
 cat /proc/mdstat&lt;br /&gt;
 mdadm --examine -v /dev/md2&lt;br /&gt;
 mdadm --detail -v /dev/md2&lt;br /&gt;
 mdadm --examine -v /dev/sd[b,c,d]3&lt;br /&gt;
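The investigation above boils down to reading the array state and the spare flags out of /proc/mdstat. A minimal sketch of that check (the sample mdstat line is an assumption modeled on a typical degraded-array listing, not real output from this machine):&lt;br /&gt;

```shell
#!/bin/sh
# Hypothetical sketch: spot an inactive md array whose members are all
# flagged (S), by parsing a /proc/mdstat-style line. The sample text is
# an assumption, not captured output; on a real system you would read
# /proc/mdstat itself.
mdstat='md2 : inactive sdb3[0](S) sdc3[1](S) sdd3[2](S) sde3[3](S)'

# Field 3 of the array line is its state; count the (S) markers too.
state=$(printf '%s\n' "$mdstat" | awk '{print $3}')
spares=$(printf '%s\n' "$mdstat" | awk '{n=gsub(/\(S\)/,""); print n}')

echo "state=$state spares=$spares"
```

If the state reads inactive and every member carries (S), you are in exactly the situation this page describes.&lt;br /&gt;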
then try to assemble all the arrays (there were 3 arrays built from partitions on those 3 working hard drives):&lt;br /&gt;
&lt;br /&gt;
 mdadm --assemble --scan  # it will scan all drives for md magic numbers and try to run any raid arrays found.&lt;br /&gt;
then see what’s what:&lt;br /&gt;
&lt;br /&gt;
 cat /proc/mdstat&lt;br /&gt;
 mdadm --detail /dev/md2&lt;br /&gt;
For /dev/md2 I see that the array state is inactive and all 4 drives are marked with (S). So, here is how to bring this array back to life.&lt;br /&gt;
&lt;br /&gt;
 mdadm --stop /dev/md2&lt;br /&gt;
 mdadm -A --force /dev/md2&lt;br /&gt;
 voila!&lt;br /&gt;
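The two-step recovery can be wrapped in a small helper. This is a dry-run sketch that only echoes the commands it would run (safe to execute without root); the device names are the article's example, and passing the member partitions explicitly is an assumption for when mdadm.conf does not describe the array:&lt;br /&gt;

```shell
#!/bin/sh
# Hypothetical dry-run of the recovery sequence: stop the inactive array,
# then force-assemble it. Echoing instead of executing keeps this harmless;
# drop the echo (and run as root) to act for real.
recover_array() {
    md="$1"; shift
    echo "mdadm --stop $md"
    echo "mdadm --assemble --force $md $*"
}

recover_array /dev/md2 /dev/sdb3 /dev/sdc3 /dev/sdd3 /dev/sde3
```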
&lt;br /&gt;
You can run fsck on the filesystem before mounting the partition.&lt;br /&gt;
&lt;br /&gt;
NOTE: in my case, the partition that caused the array failure was reported with a wrong timestamp (in the output of the mdadm .. -v commands). That showed me that no data was out of sync, which is why I bravely used --force to activate the array.&lt;br /&gt;
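Besides timestamps, the per-member Events counters that mdadm --examine prints are a useful sync check: members whose counts are equal or very close have diverged little, which supports forcing assembly. A sketch of comparing them (the sample lines are an assumption, simplified from mdadm --examine output, not a real capture):&lt;br /&gt;

```shell
#!/bin/sh
# Hypothetical sketch: compare the "Events" counters of the array members.
# In practice you would collect them with something like
#   mdadm --examine /dev/sdb3 | grep Events
# per device; here sample values stand in for that output.
examine_output='/dev/sdb3 Events : 15042
/dev/sdc3 Events : 15042
/dev/sdd3 Events : 15038'

# Field 4 is the counter; a small spread suggests --force is low-risk.
min=$(printf '%s\n' "$examine_output" | awk '{print $4}' | sort -n | head -1)
max=$(printf '%s\n' "$examine_output" | awk '{print $4}' | sort -n | tail -1)
echo "event spread: $((max - min))"
```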
&lt;br /&gt;
* [https://iamsto.wordpress.com/2021/06/30/howto-fix-inactive-linux-md-raid-array-state/ original]&lt;/div&gt;</summary>
		<author><name>Vix</name></author>
	</entry>
</feed>