whiteshepherd
Posts: 48
Joined: Thu Nov 03, 2011 7:59 pm

AM I defaulting to a degraded Raid5 array?

Tue Nov 19, 2019 10:46 pm

I have a Raspberry Pi 4 with 2GB of memory. One of the things I like using my Pi for is a Samba network server. I buy large external drives and share them with my family over Samba for their storage. It works fine, except that when an external drive fails, about 50% of the time it's the USB controller in the enclosure that has died. If you open the drive you lose your warranty, and if you send the drive back you lose your data.

To deal with this I decided to create a RAID 5 array to maximize space while still having some redundancy. My reasoning is that if a drive fails I can send it back for a replacement and still keep all my data.

So I bought, on sale, three identical 6TB Western Digital external hard drives, all plugged into a USB 3.0 hub connected to the Pi. The partitions were /dev/sda1, /dev/sdb1 and /dev/sdc1. I created the array with this line:

mdadm --create --verbose /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1

However, it looks like one of the drives is not being used.
Typing:
cat /proc/mdstat

Personalities : [raid1] [raid6] [raid5] [raid4]
md0 : active raid5 sdc1[3] sdb1[1] sda1[0]
11720710144 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
[>....................] recovery = 4.0% (235963772/5860355072) finish=1257.8min speed=74524K/sec
bitmap: 0/44 pages [0KB], 65536KB chunk

unused devices: <none>

While running:

mdadm -D /dev/md0

/dev/md0:
Version : 1.2
Creation Time : Tue Nov 19 21:47:49 2019
Raid Level : raid5
Array Size : 11720710144 (11177.74 GiB 12002.01 GB)
Used Dev Size : 5860355072 (5588.87 GiB 6001.00 GB)
Raid Devices : 3
Total Devices : 3
Persistence : Superblock is persistent

Intent Bitmap : Internal

Update Time : Tue Nov 19 22:43:41 2019
State : clean, degraded, recovering
Active Devices : 2
Working Devices : 3
Failed Devices : 0
Spare Devices : 1

Layout : left-symmetric
Chunk Size : 512K

Consistency Policy : bitmap

Rebuild Status : 4% complete

Name : server:0 (local to host server)
UUID : a7bd0076:6e221518:1dc423fc:dd76e480
Events : 661

Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
3 8 33 2 spare rebuilding /dev/sdc1


It looks like it's only using 2 of the drives and the third is a hot spare? Is this correct for RAID 5, or did something go wrong? Any ideas?

whiteshepherd
Posts: 48
Joined: Thu Nov 03, 2011 7:59 pm

Re: AM I defaulting to a degraded Raid5 array?

Wed Nov 20, 2019 12:08 am

mdadm.conf shows:
ARRAY /dev/md0 metadata=1.2 spares=1 name=server:0 UUID=a7bd0076:6e221518:1dc423fc:dd76e480

dustnbone
Posts: 96
Joined: Tue Nov 05, 2019 2:49 am

Re: AM I defaulting to a degraded Raid5 array?

Wed Nov 20, 2019 12:11 am

It looks like it's just building the parity data on the third drive. It says it will take 21 hours. Have you copied data to the array?

trejan
Posts: 882
Joined: Tue Jul 02, 2019 2:28 pm

Re: AM I defaulting to a degraded Raid5 array?

Wed Nov 20, 2019 1:25 am

This is normal. Linux md creates the array degraded and then immediately starts rebuilding it so it can generate the necessary parity data. This lets the parity generation run in the background instead of making you wait through an extremely long array creation process.
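If you want to keep an eye on it in the meantime, something like this works (the sysctl numbers below are only examples; with USB drives the bus is usually the bottleneck anyway):

watch -n 5 cat /proc/mdstat                # refresh the recovery progress line every 5 seconds
sysctl dev.raid.speed_limit_min            # current resync speed floor in KiB/s (default 1000)
sysctl -w dev.raid.speed_limit_min=50000   # example: raise the floor so the resync isn't throttled
sysctl -w dev.raid.speed_limit_max=200000  # ceiling in KiB/s; this is already the usual default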

whiteshepherd
Posts: 48
Joined: Thu Nov 03, 2011 7:59 pm

Re: AM I defaulting to a degraded Raid5 array?

Wed Nov 20, 2019 1:44 am

So the spare device listed is normal for a 3-drive RAID 5?

trejan
Posts: 882
Joined: Tue Jul 02, 2019 2:28 pm

Re: AM I defaulting to a degraded Raid5 array?

Wed Nov 20, 2019 2:52 am

whiteshepherd wrote:
Wed Nov 20, 2019 1:44 am
So the spare device listed is normal for a 3-drive RAID 5?
It will become an active drive once it has finished rebuilding.

whiteshepherd
Posts: 48
Joined: Thu Nov 03, 2011 7:59 pm

Re: AM I defaulting to a degraded Raid5 array?

Wed Nov 20, 2019 10:34 pm

Thanks all for the info and the heads up letting me know it's doing what it should. I'm letting it build (80% so far). When that finishes I will format it as EXT4 and set up Samba directories for each family member. It will give us 12TB of storage backed up by RAID 5. The family likes to store their videos and pictures on it (LOTS). Takes a lot of space. But I wanted a redundancy backup, hence Raspberry Pi RAID server to the rescue.
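Roughly what I have in mind for that step (untested sketch; the /srv/raid mount point and the share name are just placeholders I made up):

mkfs.ext4 /dev/md0                                                 # format the finished array
mkdir -p /srv/raid
mount /dev/md0 /srv/raid
echo '/dev/md0 /srv/raid ext4 defaults,nofail 0 2' >> /etc/fstab   # mount at boot, don't hang if it's missing

# then one share per family member in /etc/samba/smb.conf, e.g.:
[alice]
   path = /srv/raid/alice
   read only = no
   valid users = alice

Each family member would also need a Samba password set with smbpasswd -a, and smbd restarted afterwards.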

Question if someone knows: in the future I may want to add more external drives (I would get the exact same model) to accommodate more needed space. Can I just add additional drives to the array without losing data, or will it need to start out from scratch again since data is spread across all the drives? If I cannot just add them I may build a second RAID, md1, and leave the first alone with its data.

trejan
Posts: 882
Joined: Tue Jul 02, 2019 2:28 pm

Re: AM I defaulting to a degraded Raid5 array?

Wed Nov 20, 2019 11:04 pm

whiteshepherd wrote:
Wed Nov 20, 2019 10:34 pm
Takes a lot of space. But I wanted a redundancy backup, hence Raspberry Pi RAID server to the rescue.
RAID isn't a backup. It just gives you protection against a single drive failure. If multiple drives fail, ransomware encrypts everything, or a mistake causes your array to be wiped, then everything is gone. You need an actual backup system as well if this is important data.

You have a much higher chance of a drive failure while your array is rebuilding, because all the drives will be busy for hours/days/weeks.

There are other options like RAID 6, which uses two parity drives so it should be able to cope with two drive failures, or ZFS, which has much more advanced features than plain RAID.
whiteshepherd wrote:
Wed Nov 20, 2019 10:34 pm
Can I just add additional drives to the array without losing data or will it need to start out from scratch again since data is spread across all the drives?
You can add drives or swap to larger drives. Be aware that it will use the smallest drive's capacity, e.g. if you've got 3x 1TB drives and add an 8TB drive then it will only use 1TB out of that 8TB drive. It will only be able to utilise the rest of that 8TB drive once you've replaced all the other 1TB drives with 8TB+ ones.
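Roughly, the grow itself looks like this (assuming a hypothetical new partition /dev/sdd1 prepared like the others, and ext4 sitting directly on /dev/md0):

mdadm /dev/md0 --add /dev/sdd1           # new disk joins as a spare first
mdadm --grow /dev/md0 --raid-devices=4   # reshape onto 4 members; existing data is preserved
cat /proc/mdstat                         # wait for the reshape to finish (it takes a long time)
resize2fs /dev/md0                       # then grow the ext4 filesystem into the new space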

rpdom
Posts: 15589
Joined: Sun May 06, 2012 5:17 am
Location: Chelmsford, Essex, UK

Re: AM I defaulting to a degraded Raid5 array?

Thu Nov 21, 2019 5:38 am

whiteshepherd wrote:
Wed Nov 20, 2019 10:34 pm
Question if someone knows: in the future I may want to add more external drives (I would get the exact same model) to accommodate more needed space. Can I just add additional drives to the array without losing data, or will it need to start out from scratch again since data is spread across all the drives? If I cannot just add them I may build a second RAID, md1, and leave the first alone with its data.
For that you might want to run LVM on top of your RAID array before creating a partition.
With LVM you could add another RAID array to your system and add it to the LVM setup. Then you can either just extend the filesystem onto it, or, if it uses larger disks, move the existing filesystem onto it and then delete the old RAID array.
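A sketch of what that looks like (the volume group and logical volume names here are just examples):

pvcreate /dev/md0                         # make the RAID array an LVM physical volume
vgcreate nas /dev/md0                     # volume group on top of it
lvcreate -l 100%FREE -n share nas         # one logical volume using all the space
mkfs.ext4 /dev/nas/share                  # filesystem goes on the LV, not on md0 directly

# later, with a second array /dev/md1:
pvcreate /dev/md1
vgextend nas /dev/md1                     # grow the volume group...
lvextend -r -l +100%FREE /dev/nas/share   # ...and the LV; -r resizes the filesystem too

# or, to migrate off the old array entirely and retire it:
pvmove /dev/md0                           # move all extents off md0 onto the other PV(s)
vgreduce nas /dev/md0                     # then drop md0 from the volume group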

swampdog
Posts: 276
Joined: Fri Dec 04, 2015 11:22 am

Re: AM I defaulting to a degraded Raid5 array?

Sat Nov 23, 2019 1:43 am

If you can afford another one, buy another disk and use RAID 6, because RAID 5 has a nasty habit of failing twice in a row. You'll have less space (twice the size of one disk in total) but you can have two disks fail. RAID 5 was never designed for disks the size of current offerings. Typically you purchase all the disks at the same time and they get used evenly so when one fails the others are eol: the unusual effort of resyncing all the sectors quite often causes one of the remaining ones to drop out.
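For comparison, the four-drive RAID 6 version of your create command would look something like this (assuming the extra drive turned up as /dev/sdd1):

mdadm --create --verbose /dev/md0 --level=6 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1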

I'd also implement LVM as rpdom suggests. I have a couple of HP servers and one was running out of space. They have eSATA ports, so what I did was plug in an eSATA caddy and, one by one, plonk a bigger disk into the caddy, add it to the array, force all the data off an internal disk, then fail the internal. Replace the internal with the caddy disk. Repeat until all the internals are done. You can cheat and just add the caddy as a hot spare and immediately fail an internal, but that leaves you with only one disk of resilience. Once all the internal disks are done you can grow the mdadm array, then distribute the space across the LVM logical volumes as desired. Take care with mdadm/LVM not to accidentally "clone" a UUID or bad things happen, e.g. when a disk has been used elsewhere and not properly wiped before being inserted into the array.
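The mdadm side of that disk-swap dance is roughly this (device names are only examples, and --replace needs a reasonably recent mdadm):

wipefs -a /dev/sdd                                      # clear any stale md/LVM metadata (and its UUIDs) first
# partition the new disk, then:
mdadm /dev/md0 --add /dev/sdd1                          # joins the array as a spare
mdadm /dev/md0 --replace /dev/sda1 --with /dev/sdd1     # copy onto it while keeping full redundancy
mdadm /dev/md0 --remove /dev/sda1                       # the old member is marked faulty once the copy finishes
# once every member has been swapped for a bigger disk:
mdadm --grow /dev/md0 --size=max                        # use the extra capacity, then grow LVM/filesystem on top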

I must be losing it because I'm sure I posted something similar a few days ago! Anyway, in an ideal rpi4 NAS world I'd have two USB hubs plugged into each Pi, with three disks on each hub. Each hub would have two of the RAID 6 disks attached plus a hot spare, so even if a hub vanished the remaining one would start rebuilding onto its hot spare. If someone were to manufacture a USB-to-SATA (4 port) adapter with enough power for four spinning-rust drives, I'd buy it.

rpdom
Posts: 15589
Joined: Sun May 06, 2012 5:17 am
Location: Chelmsford, Essex, UK

Re: AM I defaulting to a degraded Raid5 array?

Sat Nov 23, 2019 6:15 am

swampdog wrote:
Sat Nov 23, 2019 1:43 am
Typically you purchase all the disks at the same time and they get used evenly so when one fails the others are eol: the unusual effort of resyncing all the sectors quite often causes one of the remaining ones to drop out.
We had that happen a few times back in the days when I worked with big servers in a data centre. We eventually persuaded our buyers to buy disks from several different suppliers at the same time so we could mix and match them. It was quite important when running a storage unit with a couple of hundred or so disks that we didn't get more than a couple failing at any one time.

swampdog
Posts: 276
Joined: Fri Dec 04, 2015 11:22 am

Re: AM I defaulting to a degraded Raid5 array?

Wed Nov 27, 2019 2:59 am

rpdom wrote:
Sat Nov 23, 2019 6:15 am
swampdog wrote:
Sat Nov 23, 2019 1:43 am
Typically you purchase all the disks at the same time and they get used evenly so when one fails the others are eol: the unusual effort of resyncing all the sectors quite often causes one of the remaining ones to drop out.
We had that happen a few times back in the days when I worked with big servers in a data centre. We eventually persuaded our buyers to buy disks from several different suppliers at the same time so we could mix and match them. It was quite important when running a storage unit with a couple of hundred or so disks that we didn't get more than a couple failing at any one time.
Last reasonably big place I worked at, it was the "nobody ever got sacked for buying ibm" mantra. It was okay while we had spares on site, but not once management heard about the "just in time" mantra for stock as well. No amount of persuasion could make them understand that $JIT needed to be greater than zero!

rpdom
Posts: 15589
Joined: Sun May 06, 2012 5:17 am
Location: Chelmsford, Essex, UK

Re: AM I defaulting to a degraded Raid5 array?

Wed Nov 27, 2019 6:19 am

swampdog wrote:
Wed Nov 27, 2019 2:59 am
Last reasonably big place I worked at, it was the "nobody ever got sacked for buying ibm" mantra.
Deathstars?

swampdog
Posts: 276
Joined: Fri Dec 04, 2015 11:22 am

Re: AM I defaulting to a degraded Raid5 array?

Thu Nov 28, 2019 1:50 am

rpdom wrote:
Wed Nov 27, 2019 6:19 am
swampdog wrote:
Wed Nov 27, 2019 2:59 am
Last reasonably big place I worked at, it was the "nobody ever got sacked for buying ibm" mantra.
Deathstars?
I guess you could call contracting for a public body a deathstar. :-)

trejan
Posts: 882
Joined: Tue Jul 02, 2019 2:28 pm

Re: AM I defaulting to a degraded Raid5 array?

Thu Nov 28, 2019 2:19 am

swampdog wrote:
Thu Nov 28, 2019 1:50 am
rpdom wrote:
Wed Nov 27, 2019 6:19 am
swampdog wrote:
Wed Nov 27, 2019 2:59 am
Last reasonably big place I worked at, it was the "nobody ever got sacked for buying ibm" mantra.
Deathstars?
I guess you could call contracting for a public body a deathstar. :-)
rpdom is talking about the disastrous IBM Deskstar hard disks from around 2000. One particular model had an incredibly high failure rate, which is how the name "IBM Deathstar" came to be coined. If your drive died it was a disaster, as the heads would crash into the platters, so even data recovery companies couldn't get anything back.

swampdog
Posts: 276
Joined: Fri Dec 04, 2015 11:22 am

Re: AM I defaulting to a degraded Raid5 array?

Thu Nov 28, 2019 2:57 am

trejan wrote:
Thu Nov 28, 2019 2:19 am
swampdog wrote:
Thu Nov 28, 2019 1:50 am
rpdom wrote:
Wed Nov 27, 2019 6:19 am

Deathstars?
I guess you could call contracting for a public body a deathstar. :-)
rpdom is talking about the disastrous IBM Deskstar hard disks from around 2000. One particular model had an incredibly high failure rate, which is how the name "IBM Deathstar" came to be coined. If your drive died it was a disaster, as the heads would crash into the platters, so even data recovery companies couldn't get anything back.
Doh!

Fortunately we didn't have any of those.

rpdom
Posts: 15589
Joined: Sun May 06, 2012 5:17 am
Location: Chelmsford, Essex, UK

Re: AM I defaulting to a degraded Raid5 array?

Thu Nov 28, 2019 7:40 am

swampdog wrote:
Thu Nov 28, 2019 2:57 am
trejan wrote:
Thu Nov 28, 2019 2:19 am
swampdog wrote:
Thu Nov 28, 2019 1:50 am


I guess you could call contracting for a public body a deathstar. :-)
rpdom is talking about the disastrous IBM Deskstar hard disks from around 2000. One particular model had an incredibly high failure rate, which is how the name "IBM Deathstar" came to be coined. If your drive died it was a disaster, as the heads would crash into the platters, so even data recovery companies couldn't get anything back.
Doh!

Fortunately we didn't have any of those.
Yes, those were the ones. Fortunately we didn't have a large number of them. But we did have other "interesting" hardware failures.
