Tag Archives: r1soft

FakeRAID, Linux, and R1Soft

FakeRAID and Linux aren’t really friends.

What is fakeRAID?
In the last few years, a number of hardware products have come onto the market claiming to be IDE or SATA RAID controllers. These have shown up in a number of desktop/workstation motherboards and lower-end servers. Virtually none of these are true hardware RAID controllers. Instead, they are simply multi-channel disk controllers combined with special BIOS configuration options and software drivers to assist the OS in performing RAID operations. This gives the appearance of a hardware RAID, because the RAID configuration is done using a BIOS setup screen, and the operating system can be booted from the RAID. With the advent of Terabyte disk drives, FakeRAID is becoming a popular option for entry-level small business servers to simply mirror 2 1.5 TB drives, and dispense with an expensive hardware RAID 5 array.

Older Windows versions required a driver loaded during the Windows install process for these cards, but that is changing. Under Linux, which has built-in softRAID functionality that pre-dates these devices, the hardware is normally seen for what it is — multiple hard drives and a multi-channel IDE/SATA controller. Hence, fakeRAID.

Source: https://help.ubuntu.com/community/FakeRaidHowto

As described above, Linux normally doesn't see your BIOS RAID; instead you get the individual /dev/sd[abc] disks, which proves challenging.

Historically, there has been no need to use the BIOS RAID with Linux; mdadm has done a good enough job, or even a better one. That is, until a unique use case came up.

R1Soft by Idera provides file-level and block-level backups via their software. It works rather well, and has a bare-metal rescue agent to boot into when you run into a rock and need your data back.
We wanted to test a restore of a physical Windows server. Normally that's just 2 x 1TB disks in a software RAID 1 mirror, which would be easy enough: boot the rescue restore CD, point at a disk, shoot, then re-add your second disk to the RAID. In this case, however, the client had 3 x 1TB disks in RAID 5, so a single-disk restore wasn't an option.

We needed the FakeRAID to hand over a 1.8TB disk to the restore agent in order to send over the data, but Linux doesn't see our FakeRAID. An easy fix would be to slot in a RAID card, or a single 2TB disk for the sake of the restore, but Douglas said half-jobs are bad.

Paul Marrapese wrote a great article, “Arch Linux and Intel RST (‘Fake RAID’)”.
In it, he writes about creating your RAID in Linux by telling mdadm to use external metadata, and that actually works perfectly: the RAID is even detected by the RST BIOS ROM. In the Ubuntu rescue CD, however, it was totally unusable. The RAID stayed stuck in resync-pending and read-only, and nothing I could think of would get it read/write.
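For reference, that external-metadata approach looks roughly like this (a sketch only; the device names and the 3-disk RAID 5 layout are assumptions based on our setup):

```shell
# Sketch of creating a fakeRAID array with mdadm's external (IMSM) metadata.
# /dev/sda, /dev/sdb and /dev/sdc are placeholders for the member disks.

# Create an IMSM container, using the metadata format
# that the Intel RST option ROM understands:
mdadm --create /dev/md/imsm0 --metadata=imsm --raid-devices=3 \
    /dev/sda /dev/sdb /dev/sdc

# Create the actual RAID 5 volume inside that container:
mdadm --create /dev/md/vol0 --level=5 --raid-devices=3 /dev/md/imsm0

# The RST BIOS should then recognise the array on the next boot.
```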

I blame the Ubuntu version the CD is running, the modules compiled with it, or something.

Moving on: booting into a CentOS 7 live rescue, it detects and starts the RAID volume read/write, and even starts the resync without any prompting. So we're going to need to find a way to do it from here.

FakeRAID seems to work out of the box with CentOS 7's rescue environment!
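A quick way to confirm what the rescue environment has assembled (the /dev/md126 node is a typical name for an IMSM volume, but not guaranteed):

```shell
# Show what md has assembled, and the resync progress:
cat /proc/mdstat

# Detailed view of the volume; IMSM volumes often appear
# as /dev/md126, though the exact node can vary:
mdadm --detail /dev/md126

# Show the controller's Intel RST platform capabilities:
mdadm --detail-platform
```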

YUM isn’t really usable on the live ISO, so installing the R1Soft agent there isn’t really an option.
I decided to mount the restore CD to the second media port via the BMC on the server and get a chroot into the rescue CD. That worked, and my R1Soft agent could start, but a CD is read-only, so the services couldn’t start correctly. This wasn’t going to work.
How did they get the system rw on the bootable restore? Let’s go back and see:

Here we see our root volume is overlayfs. WTF is that?!

OverlayFS provides a great way to merge directories or filesystems such that one of the filesystems (called the “lower” one) never gets written to, but all changes are made to the “upper” one. This is great for systems that rely on having read-only data, but the view needs to be editable, such as Live CD’s and Docker containers/images (image is read only). This also allows you to quickly add some storage to an existing filesystem that is running out of space, without having to alter any structures. It could also be a useful component of a backup/snapshot system.

https://blog.programster.org/overlayfs

Right, that makes perfect sense! You have a read-only volume and a scratch area where all the differencing is saved. Above, we see the CD on /dev/loop0 is mounted to /rofs, read-only. Magic?

Let's get back to CentOS then and see if we can make this work. An example on the link above shows how we can get it going:
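Roughly, the overlay setup looks like this (paths are assumptions for illustration; on the CentOS live CD the read-only lower mount will differ):

```shell
# Build a writable view over a read-only filesystem with overlayfs.
# The upper and work directories must live on the same writable
# filesystem, so a tmpfs works well in a live environment:
mkdir -p /cow /merged
mount -t tmpfs tmpfs /cow
mkdir -p /cow/upper /cow/work

# Merge the read-only lower layer (e.g. the live CD mounted at /rofs)
# with the writable upper layer:
mount -t overlay overlay \
    -o lowerdir=/rofs,upperdir=/cow/upper,workdir=/cow/work \
    /merged

# /merged now looks like /rofs but is read-write; every change lands
# in /cow/upper, and /rofs itself is never modified.
```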

Mount it!

Once our filesystem is read-write, we do the usual mounting of sys, proc and dev, then we chroot into our rw filesystem, configure our IP addresses, and start the CDP agent.
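That sequence, sketched out (the /merged mount point, interface name, IP addresses, and the agent's init script name are all assumptions for illustration):

```shell
# Bind the virtual filesystems into the writable overlay root:
mount --bind /proc /merged/proc
mount --bind /sys  /merged/sys
mount --bind /dev  /merged/dev

# Switch into the now-writable root:
chroot /merged /bin/bash

# Inside the chroot: configure networking...
ip addr add 192.0.2.10/24 dev eth0   # example address from the RFC 5737 range
ip route add default via 192.0.2.1

# ...and start the CDP agent so the R1Soft server can reach it:
/etc/init.d/cdp-agent start
```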

Yay!

Now we get back to R1Soft’s web UI, point to our agent, restore partitions to the FakeRAID, and run the restore.

Job done.