October 5th

Dear Diary,

today the LSI MegaRAID SAS 9240-8i card finally completed building the RAID5 set over five 1TB Samsung SSDs.  It only took about 18 hours.

So time to do some benchmarking!  I created an ext4 file system on the volume and wrote /dev/zero to it.
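For the record, the test itself is just dd.  A runnable sketch (the path and size here are scratch stand-ins; on the real box it would be a big file on the new volume).  The conv=fdatasync flag matters: it forces the data to disk before dd reports a rate, so you measure the volume and not the page cache.

```shell
# Rough sequential-write benchmark (scratch path; on the real box, point it
# at a file on the freshly mounted ext4 volume).
# conv=fdatasync flushes to disk before dd prints its rate, so the number
# reflects the storage, not RAM.
dd if=/dev/zero of=/tmp/bench.bin bs=1M count=64 conv=fdatasync
ls -l /tmp/bench.bin
```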


Err…  40 MB/s?  40MB/s!??!  These are SATA3 6Gbps disks that should have native write speeds of over 400MB/s.  And writing RAID5 to them should be even faster.  40 MB/s is impossibly pathetic.  And the reading speed was about the same.

If I hadn’t seen it myself, I wouldn’t have believed it.

Dear Diary, I finally did what I should have done in the first place: I binged the card.  And I found oodles of people complaining about how slow it is.

This is apparently LSI’s bottom-rung RAID card.  One person speculates that it does the RAID5 XOR calculations on the host side instead of implementing them on the card itself.  That doesn’t really account for how incredibly slow it is, though.

I think LSI just put a lot of sleep() calls into the firmware so that they could have a “lower-end” card that wouldn’t compete with the cards they charge a lot more money for.

I went back to the office and reconfigured the SSDs as JBOD, and then I created an ext4 file system on one of them, and then wrote /dev/zero to it, just to see what the native write and read rates are:



Around 400 MB/s.  It’s not astoundingly good, but it’s not completely pitiful, either.  These disks should do over 500 MB/s, but…

Then I created a soft RAID5 over the disks.  How long would it take to build it?
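The setup itself is basically a one-liner.  A sketch of it (the device names are made up; you’d double-check them with lsblk before running anything like this, since --create will happily eat whatever disks you point it at, so it’s assembled as a string here rather than executed):

```shell
# Sketch of creating a 5-disk software RAID5 (device names are assumptions;
# verify with lsblk first).  Built as a string, not executed, because
# mdadm --create is destructive.
devices="/dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf"
cmd="mdadm --create /dev/md0 --level=5 --raid-devices=5 $devices"
echo "$cmd"
# Once it's running, the build progress shows up in:
#   cat /proc/mdstat
```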


80 minutes.  That’s better than 16 hours, but it seems a bit slow…

Turns out there’s a SPEED_LIMIT_MAX that needs to be tweaked.
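The tunable lives in /proc (values are in KB/s; 500000 here is just my guess at a sensible ceiling for SSDs):

```shell
# md throttles rebuilds; the stock ceiling (speed_limit_max) is 200000 KB/s.
cat /proc/sys/dev/raid/speed_limit_min
cat /proc/sys/dev/raid/speed_limit_max
# Raise the ceiling so the rebuild can run at SSD speed (needs root):
echo 500000 > /proc/sys/dev/raid/speed_limit_max
```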


With that in place, I get 300 MB/s while building the RAID.  51 minutes.



And it sustained that rate until the build was done, which happened while I was typing this.

Now to check the real performance…

Making the file system was really fast.  It peaked at 1GB/s.  Writing a huge sequential file to the ext4 file system gives me around 180 MB/s.  Which isn’t fantastic, but it’s acceptable.  Reading the same sequential file gives me 1.4 GB/s!  That’s pretty impressive.
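If you want to be sure a read test like that hits the disks and not RAM, you have to drop the page cache first.  A toy version (a scratch file stands in for the big sequential file; the drop_caches step needs root, so it’s shown as a comment):

```shell
# On the real box, before the read test, flush the page cache (as root):
#   sync && echo 3 > /proc/sys/vm/drop_caches
# Then read the big file back:
#   dd if=/mnt/raid/bigfile of=/dev/null bs=1M   # path is an assumption
# Runnable toy version against a scratch file:
dd if=/dev/zero of=/tmp/readtest.bin bs=1M count=16 conv=fdatasync
dd if=/tmp/readtest.bin of=/dev/null bs=1M
```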

It’s incredible that the LSI MegaRAID SAS 9240-8i is 6x-25x slower than Linux soft RAID.  Even if the card offloads the XOR-ing to the host CPU, that still doesn’t explain how its algorithms can be so much slower than md’s.

Anyway: Avoid this card like the plague.  It’s unusable as a RAID card if you need reasonable performance.  40 MB/s is not reasonable.

October 4th

Dear Diary,

today was the day I was going to install a new SSD RAID system for the Gmane news spool.  The old spool kinda ran full four months ago, but I kept deleting (and storing off-spool) the largest groups and waddled through.

I had one server that seemed like a good fit for the new news server: It had 8 physical disk slots.  But the motherboard only had six SATA connectors, so I bought an LSI MegaRAID SAS 9240-8i card.

Installing the 2.5″ SSDs in a 3.5″ adapter.  Five of the disks
So screwed. I mean, so many screws
I decided to add 2x 4TB spinning mechanical disks to the remaining two slots for, uhm, a search index perhaps?
Oops. I forgot to film the unboxing
Ribbed for your coldness
A tower of SSDs
All seven disks installed in their Supermicro caddies

Look at that gigantic hole, yearning to be filled.
Uhm… these captions took an odd turn back there…
I pull the server out of the rack and totally pop its top
Look! Innards!
Err… is that a photo of the RAID card? I think it might be…
All wired up
The disk backplane had six slots already hooked up, so I viciously ripped three of the connectors out
And plugged five of the connectors from the RAID card back in
And then my biggest fans were reinstalled. Thank you thank you

Now the hardware was all installed, but getting to the LSI WebBIOS configuration was impossible.


I hit Ctrl+H repeatedly while booting, but I always just got the Linux boot prompt instead.  I binged around a bit, and it turns out that if any other non-RAID disks are present in the system, the WebBIOS won’t appear at all.

So I popped the three spinning disks out of the machine, and presto:


I configured the RAID5 over the five 1TB SSDs.  This would give me about 4TB, which is twice what the current news spool has.

However, building the RAID seems to take forever:


WTF?  2% in 20 minutes?

These are SATA3 SSDs.  Read/write speed is over 500MB/s.  That means that reading or writing a single disk should take about half an hour.  Since the card can access all the disks independently, building the RAID5 shouldn’t take much more than that.

But let’s be generous.  Let’s say it has to read the disks sequentially, and write the parity disk sequentially.  That’s 2.5 hours.

Instead it’s going to take 16 hours.  WTF is up with that?  Does the LSI MegaRAID SAS 9240-8i have the slowest CPU ever or something?
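Sanity-checking my own arithmetic (assuming 500 MB/s sequential per disk): one full pass over a 1TB disk is about 33 minutes, and even the fully serial case of four data-disk reads plus one parity write comes out at 165 minutes, in the same ballpark as the generous estimate above.

```shell
# One full sequential pass over a 1 TB disk at 500 MB/s, in minutes:
pass_min=$((1000000 / 500 / 60))        # = 33 minutes
# Fully serial worst case: read four data disks, then write parity:
serial_min=$((pass_min * 5))            # = 165 minutes
echo "one pass: ${pass_min} min, fully serial: ${serial_min} min"
```

Sixteen hours is nearly six times even the fully serial estimate, so the card isn’t just skipping parallelism; it’s slow on top of that.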

That’s just unbelievably slow.

Diary, I’m going to let it continue building the RAID and then do some speed tests.  If it turns out that it’s this slow during operation, I’m going to rip it out and just do software RAID.

New Gmane SSDs

The Gmane news spool is 97% full, so I either had to delete some Gwene stuff, or buy more SSDs.

I bought more SSDs.  The current setup is 5x 512GB Samsungs in RAID5.  I bought 5x 1TB while in the US, so that gives us 2x the current size in RAID5, which should be enough for the next uhm five years? or so?

But the problem is how to do the switchover.  The last time it was pretty seamless.  I set up a new, spiffy machine with a spiffy hardware RAID controller. (See my secret diary for details.) Then synced all the articles over, and swapped some IP addresses at the end.

I really don’t want to buy another spiffy RAID controller this time.  But if I’m reusing the current hardware, there’s going to be downtime.

Here’s the plan:

1) Sync the spool over to an external SATA disk.

2) Take the server down, swap in the new SSDs, set up the RAID.

3) Rsync the articles from the external SATA disk to the RAID.

5) Profit!

2-5 will realistically take a day or so, with Gmane being totally dead while this is happening.  I think.

Hm…  or I could point the spool at the external disk while doing step 3, in read-only mode.  Then there should only be a few hours of downtime while I’m doing the RAID rebuild.  Hm.  Yes, I think that sounds doable…

So a few hours complete deadness, and a day or so in read-only mode.

But it’s a bit scary doing it this way.  If I were doing a completely separate new server, there would always be an easy way to roll back if things don’t work…


Software Sucks

The machine that runs my stereo is a seven year old Hush machine.  The last time I rebooted it, it kinda didn’t want to boot at all, except then it did anyway. 

So I’m thinking that it’s probably going off for the fjords one of these days.

To avoid some horrible music-less days I’ve bought a replacement machine.

It has the same stereo-ey form factor, which is nice.

Not a lot on the inside, but it has room for the PCI sound card, which is the important thing.  And no fans, of course.

Heat pipes!  It’s a 2.8GHz Ivy Bridge machine, so it has plenty of oomph.

It has an external PSU, but there’s so much room in the case that I just put it inside.

PCI riser will probably fit the sound card.

Anyway, since I had a new machine to play with, I connected it up to the CD ripping box to see whether I could get higher ripping speeds with this machine.

The computer says no.

So, basically, if I’m ripping straight from the SATA plugs on the main board, and I’m ripping a single CD, I get a ripping speed of 25x.

If I rip three CDs in parallel, I get 8x on each CD.

This is just pitiful.  Four years ago I got over 40x on each CD while ripping in parallel.  Something must have happened to the Linux kernel to make ripping from SATA dog slow.

Anybody know what?

Too bad I can’t just install a four-year-old kernel.  It won’t support the new chipsets etc., so it just won’t work.

Linux sucks.

Hardware Sucks

Once upon a time, I had a nice Hush machine with a PCI slot.  It had a SATA Multilane card in it, connected to an external box with three SATA CD-ROMs installed.  I used it to rip the CDs I bought so that I could listen to them.

It was perfect.  It was rock solid.  It was fast.  I got reading speeds of 50x — from all three CD-ROMs simultaneously.  So I got an aggregated speed of 150x when ripping a lot of CDs.  Life was wonderful.

Then the Hush machine died.  And I bought a new, spiffy machine.  That only had a PCI Express slot.  But it had USB3 and E-SATA. 

USB3/SATA adapter

So I first tried connecting the CD-ROMs via a mixture of USB3 adapters and E-SATA.

E-SATA backplane for the box

It kinda worked.  Ripping speeds were not impressive, but OK.  But it was unstable.  It would totally crash the machine like every twenty minutes.

So I tried going with a pure USB3 solution.  Good reading speeds, but equally unstable.  My guess would be that extracting audio from CDs via USB3 hasn’t received a lot of testing in Linux. Which is understandable, but it still sucks.

USB3/SATA PM backplane

Next, I noticed that the Addonics web site listed a SATA Port Multiplier/USB3 backplane.  It seemed to say that it would let me use just a single USB3 port and still access all the CD players individually.  Perhaps that would be stable!

Unfortunately, I didn’t read the fine print.  Accessing the drives individually was only possible when using the SATA Port Multiplier interface, and not the USB3 one.

SATA Port Multiplier card

So I bought a SATA PM-capable card, since the Intel SATA chipset doesn’t do PM.

It almost fit into the machine when I removed the bracket.  And it worked!  And was totally stable.

Unfortunately, it’s dog slow.  When ripping a single CD, I get only 20x.  When ripping CDs in parallel, the speeds vary a lot, but they end up evening out at 8x.

That’s pitiful.

Which brings us up to date.  Today I got a PCI/PCI Express adapter, which would allow me to try using the old SATA Multilane card.

PCI Express/PCI Adapter
With the Multilane card installed

Multilane plugs are big

Now, installing all this into the rather small machine took some… persuasion.

Bendy CPU coolers FTW.

I installed the Multilane backplane back into the CD-ROM box.

And…  success?


It’s stable.  It doesn’t crash.  But it’s slow!  Slow!  Ripping a single CD gives me 26x.  Ripping three in parallel gives me 10x per CD.  So it seems to top out at an aggregate bandwidth of 30x.  That’s pitiful.  My old machine gave me 150x.  Is that progress?

It just seems kinda inconceivable that the machine itself is the bottleneck here.  30x CD is (* 44100 2 2 30) => 5292000 bytes per second.  That is 5MB/s.  That is nothing.  The machine has an Intel SSD.  PCI speed is 133MB/s.  The SSD should do 250MB/s. 

But it doesn’t.

I almost feel like giving up and trying to refurbish an old Hush machine, but I’m not sure I have the stamina.

Hardware sucks.