Entering the Clown

I’ve always been the self-hosting kind of guy (i.e., old), but with recent changes I’m trying to simplify and move things around.

I’m not quite sure where I’ll end up with my main server(s) yet, and I’m testing out various things, but for my one self-hosted WordPress instance, I thought I could try something clownish.

I’ve used WordPress.com for this blog since forever, and I’m probably not going to change that, because it’s nice and easy and I don’t have to do any admin work at all. However, hosting on WordPress.com has some severe limitations, like not being able to put <iframe>s into the pages, or use my own Javascript to make things… more fun.

So earlier this year I installed WordPress on my main server, and… It was just kinda painful to try to make that even remotely secure? I mean, you have to give the web server write access to its own PHP files? I mean, that’s the number one thing you’re never ever supposed to do? That means that any tiny error in any plugin could lead to a convenient sploit on your server?

Trying to mitigate that was just a mess, so I thought that I’d at least move that blog somewhere else, so when I move my main server, I can forget about doing WordPress on the new location.

So I, at random, went with DigitalOcean, because it looked so simple: It even has a one-click way to install a new “droplet” (which is their cutesy name for a virtual server) with a complete WordPress install, with UFW (firewall) and basically everything… just there.

And it worked!

Reader, I am now in the clown with my Pacific Comics blog.

It was very painless. It took a while for it to import the media from my old self-hosted blog, but everything else worked way better than I had expected. And best of all, it’s pretty buzzword free: It’s just a virtual server that I can ssh into and do whatever, if I should so choose.

I feel so modern! Just a decade or two after everybody else!

Linux Can 4K @ 60 Haz

I tried getting 4K @ 60Hz using Intel built-in graphics, and I failed miserably.

Rather than spend more days on that project (yes, this is the year of Linux on the TV, I Mean Desktop), I bought a low-profile Nvidia card, since there are several people on the interwebs that claim to have gotten that to work.

It’s brand new, cheap and fanless, too: A Geforce GT 1030, which is Nvidia’s new budget card, launched last month, I think.

It’s finnier than Albert and takes up several PCI Express slots.

However, that’s not really a problem in this media computer: Lots and lots of space for it to spread itself out over. Just one PCI Express slot, though.

But it’s on the long side: If I had any hard disks installed in this machine, I would have had to get creative. Instead I just removed that HD tray thing.

But! There are two … capacitors down there where the PCI Express “key” thing is. Like, just a quarter millimetre too much to the right…

I bent them ever so gently over and I managed to get the card in. How utterly weird.


Anyway: Mission complete. This card has a DVI plug in addition to the HDMI, but I’m not going to use that, so I just left it with the protective rubber.

See? Lots of space. Of course, it would have been better to remove the cooler completely and hook it up via heat pipes to the chassis, but… that’s like work.

But did this solve my problems? After installing Nvidia’s proprietary drivers (apparently Nouveau doesn’t support the GT 1030 yet, since it’s a brand-new Pascal card)…

Yes! 3840×2160 @ 59.95 Hz, which is pretty close to 60Hz. Yay!

Of course, I have no 4K material on my computer, so the only thing that’s actually in 4K now is my Emacs-based movie interface. Here’s whatsername from Bewitched in 2K:

Eww! How awful! Horrible!

See! 4K! How beautiful!

(Let’s pretend that the entire difference isn’t in the different moire patterns!)


And the Futura looks much better in 4K too, right?


This was all worth it.

One Thing Leads To Another

In the previous installment, I got a new monitor for my stereo computer.

I thought everything was fine, but then I started noticing stuttering during flac playback. After some investigation, it seems that if X is running (and displaying stuff on this new, bigger monitor) and there’s network traffic, the flac123 process is starved of time slices, even though it’s running with realtime priority.


Now, my stereo machine is very, very old. As far as I can tell, it’s from 2005, and is basically a laptop mainboard strapped into a nice case:

(It’s the black thing in the middle.) But even if it’s old, its requirements hadn’t really changed since I got it: It plays music and samples music and routes music to various rooms via an RME Multiface box. So I was going to use it until it stopped working, but I obviously can’t live with stuttering music and I didn’t want to spend more time on this, so I bought a new machine from QuietPC.

There’s not a lot inside, so I put the external 12V pad into the case. Tee hee. Well, thermally that’s probably not recommended, but it doesn’t seem to be a problem.

Nice heat pipes!

Look how different the new machine is! Instead of the round, blue LED power lamp, it’s now a… white LED power lamp. And it’s about 2mm wider than the old machine, but you can’t tell unless you know, and then it annoys the fuck out of you.


Anyway, installation notes: Things basically work, but Debian still hasn’t fixed its installation CDs to work on machines with NVMe disks. When the installer fails to install grub, you have to say:

mount --bind /dev /target/dev
mount --bind /dev/pts /target/dev/pts
mount --bind /proc /target/proc
mount --bind /sys /target/sys
cp /etc/resolv.conf /target/etc
chroot /target /bin/bash
aptitude update
aptitude install grub-efi-amd64
grub-install --target=x86_64-efi /dev/nvme0n1

Fixing this should have been kinda trivial, and surely warranted, wouldn’t you think? But they haven’t, and it’s been that way for a year…

Let’s see… anything else? Oh, yeah, I had to install a kernel and X from jessie backports, because the built-in Intel graphics are too new for Debian Stale. I mean Stable. Put

deb http://ftp.uio.no/debian/ jessie-backports main contrib

into /etc/apt/sources.list and say

apt -t jessie-backports install linux-image-amd64 xserver-xorg-video-intel

although that may fail according to the phase of the moon, and I had to install linux-image-4.9.0-0.bpo.2-amd64 instead…

And the RME Multiface PCIe card said:

snd_hdsp 0000:03:00.0: Direct firmware load for multiface_firmware_rev11.bin failed with error -2

I got that to work by downloading the ALSA firmware package, compiling and installing the result as /lib/firmware/multiface_firmware_rev11.bin.

Oh, and the old machine was a 32 bit machine, so my C programs written in the late 90s had hilarious code like

(char*)((unsigned int)buf + max (write_start - block_start, 0))

that happened to work (by accident) on a 32 bit machine, but no longer does on a 64 bit one. And these programs (used for splitting vinyl albums into individual songs and the like) are ridiculously fast now. The first time I ran it I thought there must have been a mistake, because it had split the album by the time I had released the key ordering the album to be split.

That’s the difference between a brand new NVMe disk and a first generation SSD. Man, those things were slow…

And the 3.5GHz Kaby Lake CPU probably doesn’t make things worse, either.

Vroom vroom. Now I can listen to music 10x faster than before. With the new machine, the flac files play with a more agile bassline and well-proportioned vocals, with plenty of details in a surefooted rhythmic structure: Nicely layered and fairly large in scale, but not too much authority or fascism.

Also: Gold interconnects.

October 5th

Dear Diary,

today the LSI MegaRAID SAS 9240-8i card finally completed building the RAID5 set over five 1TB Samsung SSDs.  It only took about 18 hours.

So time to do some benchmarking!  I created an ext4 file system on the volume and wrote /dev/zero to it.
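The usual incantation for this kind of sequential test is a plain dd run; a sketch, with a placeholder target path:

```shell
# Write 1 GB of zeros to the freshly made filesystem.
# conv=fdatasync makes dd flush to disk before reporting, so the
# MB/s figure reflects actual disk throughput, not the page cache.
dd if=/dev/zero of=/raid/testfile bs=1M count=1024 conv=fdatasync
```

Reading it back with `dd if=/raid/testfile of=/dev/null bs=1M` gives the read side of the story.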


Err…  40 MB/s?  40MB/s!??!  These are SATA3 6Gbps disks that should have native write speeds of over 400MB/s.  And writing RAID5 to them should be even faster.  40 MB/s is impossibly pathetic.  And the reading speed was about the same.

If I hadn’t seen it myself, I wouldn’t have believed it.

Dear Diary, I finally did what I should have done in the first place: I binged the card.  And I found oodles of people complaining about how slow it is.

This is apparently LSI’s bottom-rung RAID card.  One person speculates that it does the RAID5 XOR calculations on the host side instead of having it implemented on the RAID card.  That doesn’t really account for how incredibly slow it is, though.

I think LSI just put a lot of sleep() calls into the firmware so that they could have a “lower-end” card that wouldn’t compete with the cards they charge a lot more money for.

I went back to the office and reconfigured the SSDs as JBOD, and then I created an ext4 file system on one of them, and then wrote /dev/zero to it, just to see what the native write and read rates are:



Around 400 MB/s.  It’s not astoundingly good, but it’s not completely pitiful, either.  These disks should do over 500 MB/s, but…

Then I created a soft RAID5 over the disks.  How long would it take to build it?
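The creation itself is a single mdadm command; a sketch, with assumed device names:

```shell
# Build a software RAID5 over five disks (device names are assumptions).
mdadm --create /dev/md0 --level=5 --raid-devices=5 \
      /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

# The initial build runs in the background; watch its progress here.
cat /proc/mdstat
```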


80 minutes.  That’s better than 16 hours, but it seems a bit slow…

Turns out there’s a SPEED_LIMIT_MAX that needs to be tweaked.
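That knob lives in /proc; something like this, where the value is the rebuild ceiling in KB/s:

```shell
# md throttles background rebuilds; raise the maximum rebuild speed.
sysctl -w dev.raid.speed_limit_max=500000

# Equivalently, poke the /proc file directly:
echo 500000 > /proc/sys/dev/raid/speed_limit_max
```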


With that in place, I get 300 MB/s while building the RAID.  51 minutes.



And it sustains that rate until it’s done, which it did while I was typing this.

Now to check the real performance…

Making the file system was really fast.  It peaked at 1GB/s.  Writing a huge sequential file to the ext4 file system gives me around 180 MB/s.  Which isn’t fantastic, but it’s acceptable.  Reading the same sequential file gives me 1.4 GB/s!  That’s pretty impressive.

It’s incredible that the LSI MegaRAID SAS 9240-8i is 6x-25x slower than Linux soft RAID.  Even if the card offloads the XOR-ing to the host CPU, it still doesn’t explain how its algorithms are that much slower than md’s algorithms.

Anyway: Avoid this card like the plague.  It’s unusable as a RAID card if you need reasonable performance.  40 MB/s is not reasonable.

October 4th

Dear Diary,

today was the day I was going to install a new SSD RAID system for the Gmane news spool.  The old spool kinda ran full four months ago, but I kept deleting (and storing off-spool) the largest groups and muddled through.

I had one server that seemed like a good fit for the new news server: It had 8 physical disk slots.  But the motherboard only had six SATA connectors, so I bought an LSI MegaRAID SAS 9240-8i  card.

Installing the 2.5″ SSDs in a 3.5″ adapter.  Five of the disks
So screwed. I mean, so many screws
I decided to add 2x 4TB spinning mechanical disks to the remaining two slots for, uhm, a search index perhaps?
Oops. I forgot to film the unboxing
Ribbed for your coldness
A tower of SSDs
All seven disks installed in their Supermicro caddies

Look at that gigantic hole, yearning to be filled.
Uhm… these captions took an odd turn back there…
I pull the server out of the rack and totally pop its top
Look! Innards!
Err… is that a photo of the RAID card? I think it might be…
All wired up
The disk backplane had six slots already hooked up, so I viciously ripped three of the connectors out
And plugged five of the connectors from the RAID card back in
And then my biggest fans were reinstalled. Thank you thank you

Now the hardware was all installed, but getting to the LSI WebBIOS configuration was impossible.


I hit Ctrl+H repeatedly while booting, but I always just got the Linux boot prompt instead.  I binged around a bit, and it turns out that if any other non-RAID disks are present in the system, the WebBIOS won’t appear at all.

So I popped the three spinning disks out of the machine, and presto:


I configured the RAID5 over the five 1TB SSDs.  This would give me about 4TB, which is twice what the current news spool has.

However, building the RAID seems to take forever:


WTF?  2% in 20 minutes?

These are SATA3 SSDs.  Read/write speed is over 500MB/s.  That means that reading or writing a single disk should take about half an hour.  Since the card can access all the disks independently, computing the RAID5 XOR shouldn’t take more than that.

But let’s be generous.  Let’s say it has to read the disks sequentially, and write the parity disk sequentially.  That’s 2.5 hours.
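Back-of-the-envelope, using the 500 MB/s and 1 TB figures from above:

```shell
# Seconds to stream one whole 1 TB (1,000,000 MB) disk at 500 MB/s:
echo $(( 1000000 / 500 ))         # 2000 seconds, about 33 minutes

# The "generous" serial estimate: read the disks and write parity,
# one after the other, i.e. five full-disk passes:
echo $(( 5 * 2000 / 60 ))         # 166 minutes, call it 2.5-3 hours
```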

Instead it’s going to take 16 hours.  WTF is up with that?  Does the LSI MegaRAID SAS 9240-8i have the slowest CPU ever or something?

That’s just unbelievably slow.

Diary, I’m going to let it continue building the RAID and then do some speed tests.  If it turns out that it’s this slow during operation, I’m going to rip it out and just do software RAID.

New Gmane SSDs

The Gmane news spool is 97% full, so I either had to delete some Gwene stuff, or buy more SSDs.

I bought more SSDs.  The current setup is 5x 512GB Samsungs in RAID5.  I bought 5x 1TB while in the US, so that gives us 2x the current size in RAID5, which should be enough for the next uhm five years? or so?
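The capacity arithmetic, for the record: RAID5 over n disks gives you (n − 1) disks’ worth of usable space, since one disk’s worth goes to parity:

```shell
# usable RAID5 capacity = (number of disks - 1) * disk size
echo $(( (5 - 1) * 512 ))    # old spool: 2048 GB
echo $(( (5 - 1) * 1000 ))   # new spool: 4000 GB, roughly twice as much
```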

But the problem is how to do the switchover.  The last time it was pretty seamless.  I set up a new, spiffy machine with a spiffy hardware RAID controller. (See my secret diary for details.) Then synced all the articles over, and swapped some IP addresses at the end.

I really don’t want to buy another spiffy RAID controller this time.  But if I’m reusing the current hardware, there’s going to be downtime.

Here’s the plan:

1) Sync the spool over to an external SATA disk.

2) Take the server down, swap in the new SSDs, set up the RAID.

3) Rsync the articles from the external SATA disk to the RAID.

5) Profit!
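Step 3 is essentially one rsync invocation; a sketch, with assumed paths:

```shell
# -a preserves permissions, owners and timestamps; --delete makes
# repeated runs converge on an exact mirror of the source spool.
rsync -a --delete /mnt/external/spool/ /var/spool/news/
```

Since rsync only transfers what changed, it can be re-run cheaply right before the final cutover to catch stragglers.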

2-5 will realistically take a day or so, with Gmane being totally dead while this is happening.  I think.

Hm…  or I could point the spool to the external disk while doing 3.  In read only mode.  Then there should only be a few hours downtime while I’m doing the RAID rebuild.  Hm.  Yes, I think that sounds doable…

So a few hours complete deadness, and a day or so in read-only mode.

But it’s a bit scary doing it this way.  If I were doing a completely separate new server, there would always be an easy way to roll back if things don’t work…


Software Sucks

The machine that runs my stereo is a seven year old Hush machine.  The last time I rebooted it, it kinda didn’t want to boot at all, except then it did anyway. 

So I’m thinking that it’s probably going off for the fjords one of these days.

To avoid some horrible music-less days I’ve bought a replacement machine.

It has the same stereo-ey form factor, which is nice.

Not a lot on the inside, but it has room for the PCI sound card, which is the important thing.  And no fans, of course.

Heat pipes!  It’s a 2.8GHz Ivy Bridge machine, so it has plenty of oomph.

It has an external PSU, but there’s so much room in the case that I just put it inside.

PCI riser will probably fit the sound card.

Anyway, since I had a new machine to play with, I connected it up to the CD ripping box to see whether I could get higher ripping speeds with this machine.

The computer says no.

So, basically, if I’m ripping straight from the SATA plugs on the main board, and I’m ripping a single CD, I get a ripping speed of 25x.

If I rip three CDs in parallel, I get 8x on each CD.

This is just pitiful.  Four years ago I got over 40x on each CD while ripping in parallel.  Something must have happened to the Linux kernel to make ripping from SATA dog slow.

Anybody know what?

Too bad I can’t just install a four-year-old kernel.  It won’t support the new chipsets etc, so it just won’t work.

Linux sucks.