Adding a CSS File to WordPress

Of all things in the world that are frustrating, Googling for how to do things in WordPress is the absolute worst.

I guess I’m not used to, like, searching for stuff that’s popular. Because whatever you search for related to WordPress, the top ten answers are from content farms that want to sell you something, and in addition to being sleazy, their answers are all wrong.

So you have to resort to The Dark Web (i.e., page two of the Google search results) before you get to something that seems to half-way make sense… and then it doesn’t. Not really.

OK, here’s my issue: I wanted to edit the CSS of a WordPress site with Emacs. Because editing the CSS in a text box in a browser is miserable, horrible and no good. If you Google this, you’ll find out that there’s a bunch of plugins that… allow you to edit the CSS in a text box in a browser.

I just want a file somewhere! That I can edit!

So, on the Dark Web, I found somebody with a solution: Just put the extra code in the theme! Or put the code in functions.php! But won’t those changes be overwritten when WordPress is upgraded? Sure!

*sigh*

So after going to The Even Darker Web (page three), and putting four different answers together, I now have a sustainable solution: CSS in a file, loaded in a way that won’t be overwritten when you upgrade WordPress.

Here’s how. Create the directory /var/www/html/wp-content/plugins/site-css. Put the following in the file site-css.php inside that directory:

<?php
/*
Plugin Name: Site CSS
Description: Site specific CSS
*/

function add_custom_css() {
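  // Queue up our stylesheet so WordPress emits a <link> tag for it on every page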
  wp_enqueue_style("site-css", "/wp-content/plugins/site-css/site-css.css");
}

add_action("wp_enqueue_scripts", "add_custom_css");

Go to the WordPress admin panel, and choose “Plugins”. You’ll see a new plugin there called “Site CSS”. Activate it.

That’s it! You can now edit the file called /var/www/html/wp-content/plugins/site-css/site-css.css to your heart’s delight via Tramp in Emacs and tweak those little rounded CSS corners until they look… just… right:
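(Tramp, if you haven’t used it, lets Emacs open files over SSH as if they were local, so a find-file on a path along these lines — with your own server name instead of the made-up one here —

/ssh:www.example.com:/var/www/html/wp-content/plugins/site-css/site-css.css

gets you straight at the stylesheet, and saving works like any other file.)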

Did you find this blog on page seventeen in Google? You’re welcome!

unsilence

Hidden tracks on CDs used to be a pretty common thing. Not “real” hidden tracks, where you play tricks with the track indexes and hide a song in the pregap before track 1, so that you have to skip back from 1 to get to 0.

No, the common way to do this is to pad the final track with, say, ten minutes of silence, and then the “hidden” song starts.

While the concept of hidden tracks is fun, in practice it’s really annoying, because it means that you’re sitting there listening to silence for ten minutes.

So once upon a time I wrote a little C program that would look for silences in the last track of all the CDs I have ripped, and if it found (long) silences, then splitting happened.
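These days ffmpeg’s silencedetect filter gets you most of the way to the same thing, if you don’t want to compile anything. A rough sketch, not my program — the file names, threshold and timestamp below are made-up examples:

# report silences of at least 30 seconds in the final track
ffmpeg -i 09-final-track.flac -af silencedetect=noise=-60dB:d=30 -f null - 2>&1 | grep silence_

# if that reports, say, silence_end: 614.2, split the hidden song off there
ffmpeg -ss 614.2 -i 09-final-track.flac 10-hidden-track.flac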

As CDs are getting less popular as a means of distribution (to put it mildly), the hidden track thing has all but disappeared, too, but this week I got the new Deathcrush album, and it employs this tactic.

I started looking around for the script to split the file, and then I discovered that it was in an obscure region of my home directory, not touched for ten years, and not put on Microsoft Github.

But now it is.

It’s probably a couple of decades too late to be useful to anybody, but there you go. It compiles and everything on the current Debian, which just amazes me. I mean, I wrote it in… like… 2003?

Go Linux.

(Not) HDR10 to sRGB

I’m going to be watching a bunch of 4K movies in High Dynamic Range (i.e., UHD HDR) later this year, and I’m going to be screenshotting a bit. Now, as you can see from that blog post, I’m using an HDMI splitter that sends the UHD HDR bits to the TV, and sends 2K SDR bits to the screenshotting box.

There’s a right way to do the SDR conversion and a wrong way, and the HDMI splitter does it the wrong way.

As that web page explains, if you just do the moral equivalent of

ffmpeg.exe -i input.mkv -vf "select=gte(n\,360)" -vframes 1 output.png

then you’re going to get washed-out images. HDR10 is 10-bit video in the BT.2020 colour space, and we want to end up in the 8-bit BT.709 colour space.

And as that page tells us, you can do that in a pretty sensible way, or… you can just discard some bits and end up with a washed-out low-contrast image.
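The “pretty sensible way” is to actually tone-map, if you have the HDR10 file itself rather than a lobotomised HDMI feed. Something along these lines should do it — assuming an ffmpeg built with the zscale/zimg and tonemap filters, and these aren’t necessarily the exact parameters that page recommends:

ffmpeg -i input.mkv -vf "select=gte(n\,360),zscale=t=linear:npl=100,format=gbrpf32le,zscale=p=bt709,tonemap=hable:desat=0,zscale=t=bt709:m=bt709:r=tv,format=yuv420p" -vframes 1 output.png

That converts the PQ signal to linear light, squeezes the highlights down with the Hable curve, and re-encodes the result as plain old 8-bit BT.709.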

Here’s an example of what comes out of the SDR port:

You can do something simple like:

convert -contrast-stretch 0.20x0.10% IMG_53.JPG norm.jpg

and get something that looks acceptable:

But that’s obviously not the “correct” transform.

As Wikipedia explains, the HDR10 format uses a static non-linear transform in the Rec. 2020 colour space. The formula is here: It’s a “perceptual quantizer” (PQ). “PQ is a non-linear electro-optical transfer function (EOTF).” So there.
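For reference, here’s the PQ EOTF as I read the spec (SMPTE ST 2084) — E′ is the coded value normalised to the 0–1 range, and Y is the luminance in cd/m²:

Y = 10000 \cdot \left( \frac{\max\!\left(E'^{1/m_2} - c_1,\; 0\right)}{c_2 - c_3 \, E'^{1/m_2}} \right)^{1/m_1}

with m_1 = 2610/16384, m_2 = 2523/4096 × 128 = 78.84375, c_1 = 3424/4096, c_2 = 2413/4096 × 32 and c_3 = 2392/4096 × 32. That’s the direction we want here: from coded values back to light.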

So! Here’s my question, which I also asked here:

Given that we have discarded some bits, there’s no way to get the ideal SDR version of these images. But: What has been discarded is predictable (I’m guessing the upper? lower? bits?), and all the HDR10 math stuff is static.

A helpful person on the ImageMagick forum suggests using a sigmoidal contrast stretch with lots of “auto” in the parameters, and that does give me pretty good results in all the test images, so it’s “good enough”.
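I haven’t reproduced the exact parameters from that thread here, but the basic shape of the incantation is something like this (the 5 and 50% are just plausible-looking numbers, not the suggested “auto” ones):

convert IMG_53.JPG -sigmoidal-contrast 5x50% sigmoid.jpg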

But since all the things that have been done to the signal are static, there should be a way to write a static transform from these non-HDR10 images to SDR.

So, like, first convert back to 10-bit space (with zeroes for the missing bits?).

And then just apply the PQ EOTF to get back to (roughly) linear light, and then chop bits down into SDR again, as this explains, using one of the nice algos there.

Surely this must be a fun math challenge for somebody (who isn’t me!).

Included below are a few more screenshots of movies and TV series that are HDR10 according to the TV.

For Flacs Sake

Yesterday, I bought this Black Cab EP off of Bandcamp, but when I played it today, all I got was silence.

A new form of Extreme Australian Minimalism or a bug?

My music interface is Emacs, and it uses flac123 to play FLAC files. It’s not a very er supported program, but I find it convenient since it uses the same command format as mpg123/321.

I have encountered FLAC files before that it couldn’t play, but I’ve never taken the time to try to debug the problem.

$ file /music/repository/Black\ Cab/Empire\ States\ EP/01-Empire\ States.flac 
/music/repository/Black Cab/Empire States EP/01-Empire States.flac: FLAC audio bitstream data, 24 bit, stereo, 44.1 kHz, 29760127 samples

Huh… so it’s 24 bits, while the rest of the FLAC files I’ve got are 16 bits?

Hm! Spot the problem!

Yes, if the format is anything other than 8 or 16 bits, then no samples are copied over from the FLAC decompression library to the libao player function, resulting in very hi-fi silent silence.

So this should be easy to fix, I thought: Just copy over, like, more bytes in the 24 bits per sample case, right?

Right.

But… the libao documentation is er uhm what’s the word oh yeah fucked up. It doesn’t really say whether ao_play expects three bytes per sample in the 24 bit case or four bytes. I tried all kinds of weird and awkward byte order manipulation, and got various forms of quite interesting noises squeaking out of the stereo, but nothing really musical.

So I wondered whether libao just doesn’t support 24 bits “natively”, and I added some “if 24 bits, then open as 32 bits” logic and presto!

Beautiful music! With so many bits!

I’ve pushed the resulting code to my fork on Microsoft Github.

The four people out there in the world playing 24 bit FLAC files on Linux from the command line or in Emacs: You’re welcome.

Z-Wave and Emacs

I’ve had a 433-MHz-based “home automation” system (i.e., light switches) for quite some time. It works kinda OK. That is, I’m able to switch the lights on and off, which is the main point.

But, man, the range of 433MHz devices sucks, including all Telldus models. I’ve been able to overcome the problems by having transmitters all over the apartment, but getting wall-mounted light switches to work with any kind of reliability has proven impossible.

The problem is that the protocol is just inherently unreliable: It just sends commands out into the ether, and doesn’t have any retransmission logic or ACKs going on.

But there’s newer (but also old) tech available, and 433MHz devices are disappearing from the stores, and the winning protocol is Z-Wave.

So I got a Z-Stick:

It’s a nice device: You plug it in and it shows up as either /dev/ttyUSBx or /dev/ttyACMx (depending on the model, but it makes no difference), and you talk to it by squirting some bytes at it.

I had expected the protocol to be really well-defined and open, but it’s a proprietary protocol that people have been reverse-engineering for years, which led me to believe that there would surely be a nice repository somewhere that describes the protocol in detail, and has, say, an XML file that describes all the different network packets.

Nope.

But after some googling I found this gist that at least let me check whether I can talk to the device…

… and it works!

It turns out that the Z-Wave protocol is kinda nice. Each packet has a checksum, and devices retransmit commands a few times unless they get an ACK, and Z-Wave plugged-in devices (like outlets) work as repeaters, so the Z-Wave network works as a mesh. It’s kinda cool.
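For the curious, the framing (as I understand it from the reverse-engineered docs — this is not the gist) goes: SOF (0x01), a length byte, a type byte, a function ID, any payload, and then a checksum that’s 0xFF XORed with every byte after the SOF (excluding the checksum itself). So a smoke test that asks the stick for its version looks something like this, assuming it showed up as /dev/ttyACM0:

stty -F /dev/ttyACM0 115200 raw -echo
# 01 = SOF, 03 = length, 00 = request, 15 = FUNC_ID_ZW_GET_VERSION,
# e9 = 0xff ^ 0x03 ^ 0x00 ^ 0x15
printf '\x01\x03\x00\x15\xe9' > /dev/ttyACM0
# expect an ACK (0x06) followed by a response frame containing the version string
timeout 2 cat /dev/ttyACM0 | xxd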

There’s a ton of software to control these devices, but using something like openHAB is just so… end-user-ish.

Instead I wanted to just plug it into my existing Emacs-based system so that I don’t have to, like, use software. Software sucks.

If no machine-parseable spec is available, at least there must be some other sensible software out there that I can just crib implementation details from, right? So I binged “z-wave python”.

And just found Python OpenZwave, which turns out to be nothing but a wrapper around OpenZWave, which is a C++ library.

Whyyy.

It’s a simple protocol, really. You just read from a serial device and then squirt some bytes at the device. It’s not like you need to do real-time Fourier transforms on a vast byte stream or anything.

But who am I to criticise people for choosing odd programming languages to implement their free software? Since it’s C++, they’ve probably at least created some kind of over-engineered monster where XML files define the protocol, and then they create objects from the stream and use a lot of polymorphism that’ll make the control flow impossible to follow (according to the “everything happens somewhere else” object-oriented methodology). But that’s good, because I can just use the protocol definition files and ignore the rest.

Right?

Right?!?

No, the main workflow is based on nested if statements with lots of “switch( _data[1] )”. But surely after that they’ll parse the protocol packets into something sensible?

*sigh*

Well, at least they used variable names that are understandable. But what are data[4] and data[6]?

*sigh*

I’M SORRY! THIS IS…

OK, again, I have no business giving a code critique of this library, written by, I’m sure, very nice people and put on the intertubes for the world to peruse and use. And despite being written in the “least information density per pixel displayed” style, it’s clear and easy to follow, and has an impressive amount of comments. It’s still all kinds of wrong.

Perhaps it’s just coming from a different culture? It’s Windows-style programming?

I don’t know, but anyway, with the guidance from this excellent piece of, er, software, I was able to make Emacs parse and execute commands when I touch a wall switch.

And Z-Wave works! Where a couple of my light switches were a bit hit and miss before, they’ve now worked with 100% reliability over the last week.

I don’t have any Z-Wave outlets yet, so I haven’t bothered to implement sending commands to devices, but I’m sure I’ll have to implement that at some point. But as far as I can tell, that should be pretty straightforward. I foresee a lot of “but what’s _data[7]?” in my future.

Somebody should still create a Z-Wave repo with protocol definitions, especially since it’s now an open-ish standard. (An open standard that’s published only as PDFs, of course.)

[Edit: I should have googled a bit more, because it pretty much looks like everything I wondered about is in the openHAB distribution, which is in Java and has more XML protocol definition files than you can shake a stick at. Well done, Java peeps.]

Twiddling Youtube; or, I mean, Innovations in Machine Learning

I mean, we’ve all been annoyed when we set up our USB monitor in our hallway that displays weather data, and then decided to show videos from Youtube that somehow relate to the music that’s playing in our apartment; we’ve dreamed of having something like the following catch our eyes when passing by on the way to the kitchen.

Oh, what a marvellous dream we all had, but then it turned out that most of the videos that vaguely matched the song titles were just still videos.

So many still photo videos. So very many.

I mean, this is a common problem, right? Something we all have?

Right?

Finally I’m writing about something we all can relate to!

So only about five years after first getting annoyed by this huge problem, I sat down this weekend and implemented something.

First I thought about using the video bandwidth of the streaming video as a proxy for how much liveliness there is in a video. But that seems error-prone (somebody may be uploading still videos in very HD and with only I-frames, and I don’t know how good Youtube is at optimising that stuff), and why not go overboard if we’re dipping our feet into the water, to keep the metaphor moist.

So I thought: Play five seconds of a video, take a screenshot every second, and then compare the snapshots with ImageMagick “compare” to get a more solid metric; then I can check whether bandwidth is a good proxy after all.

The “compare” incantation I’m using is:

compare -metric NCC "$tmp/flutter1.jpg" "$tmp/flutter2.jpg" null:

I have no idea what all the different metrics mean, but one’s perhaps as good as another when all I want to do is detect still images?

So after hacking for a bit in Perl and Bash and making a complete mess of things (asynchronous handling of all the various error conditions and loops and stuff is hard, boo hoo, and I want to rewrite the thing in Lisp and use a state machine instead, but whatevs), I now have a result.
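For the record, the core of the approach, re-sketched here in a tidier form than the actual script (the video ID, file names and the five-second window are just illustrative):

# grab the first five seconds of one of the Youtube hits
youtube-dl -f mp4 -o video.mp4 "https://www.youtube.com/watch?v=VIDEO_ID"
ffmpeg -t 5 -i video.mp4 -vf fps=1 frame-%d.jpg

# compare consecutive frames and average the NCC metric
total=0
for i in 1 2 3 4; do
  m=$(compare -metric NCC frame-$i.jpg frame-$((i+1)).jpg null: 2>&1)
  total=$(echo "$total + $m" | bc -l)
done
echo "stillness: $(echo "$total / 4" | bc -l)"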

Behold! Below I’m playing a song by Oneohtrix Point Never, who has a ton of mad Youtube uploaders, and watch it cycle through the various hits until it finds something that’s alive.

Err… What a magnificent success! Such relevance!

Oh, shut up!

*mumble*

But let’s have a look at the data (I’m storing it using sqlite3 for convenience) and see whether videos are classified correctly.

I’m saying that anything “compare” rates above 0.95 is a “still image video”. (In the listings below, the columns are the compare metric, the Youtube video ID and the bitrate in bits per second.) So first of all we have a buttload of videos with a metric of 0.9999, which is very still indeed.

0.9999 yAZrDkz_7aY 36170
0.9999 yCNZVvP7cAE 150241
0.9999 yai4bier1oM 128630
0.9999 yt1qj-ja5yA 476736
0.9999 yxWzoYQb5gU 244076
0.9999 z1YKfu5sD24 723392
0.9999 z28HTTtJJEE 372014
0.9999 zOirMAHQ20g 574614
0.9999 zWxiVHOJVGU 70909

But the bitrates vary from 36kbps to 723kbps, which is a wide range. So let’s look at the ones with very low metrics:

0.067 slzSNsE7CKw 1359008
0.1068 m_jA8-Gf1M0 2027565
0.1208 7PCkvCPvDXk 1702924
0.1292 zuDtACzKGRs 3969219
0.1336 VHKqn0Ld8zs 1607430
0.1603 Tgbi3E316aU 1877994
0.2153 ltNGaVp8PHI 506771
0.2192 j14r_0qotns 683650
0.2224 dhf3X6rBT-I 1715754
0.2391 WV4CQFD5eY0 416458
0.2444 NdUZI4snzk8 2073374

Very lively!

These definitely have higher mean bitrates, but a third of them have lower bitrates than the highest bitrated (that’s a word) still videos, so my guess was right, I guess. I guess? I mean, my hypothesis has proven to be scientifically sound: Bitrates aren’t a good metric for stillness.

And finally, let’s have a peek at the videos that are around my cutoff point of 0.95 (which is a number I just pulled out of, er, the air, yeah, that’s the expression):

0.9384 t5jw3T3Jy70 802643
0.9454 5Neh0fRZBU4 1227196
0.9475 ygnn_PTPQI0 1907749
0.949 XYa2ye4GPY8 84848
0.9501 myxZM9cCtiE 1202315
0.9503 lkA9BRDWKco 297490
0.9507 mz91Z2aRJfs 203855
0.9512 IDMuu6DnXN8 358156
0.9513 bsFRMTbhOn0 198332
0.9513 v6CKHqhbos8 1686790
0.9514 3Y1yda0YfQs 1012911

Yeah, perhaps I could lower the cutoff to 0.90 or something to also weed out the semi-static videos, but then I’d also lose videos that just have large black areas on the screen.

Hm… and there’s also a bunch of videos that it wasn’t able to get a metric on… I wonder what’s up with those.

1 pIBEwmyIwLA 349057
1 pzSz8ks1rPA 108422
1 qmlJveN9IkI 83383
1 srBhVq3i2Zs 1651041
1 tPgf_btTFlc 111953
1 uxpDa-c-4Mc 691684
1 uyI3MBpWLuQ 45383

And some it wasn’t able to play at all?

0 3zJkTILvayA 0
0 5sR2sCIjptY 0
0 E44bbh32LTY 4774360
0 FDjJpmt-wzg 0
0 U1GDpOyCXcQ 0
0 XorPyqPYOl4

Might just be leftovers from when I was testing the code, though, and those rows are still in the database. Well, no biggie.

You can find the code on Microsoft Github, but avert your eyes: This is bad, bad code.

Anyway, the fun thing (for me) is that the video monitor will get better over time. Since it stores these ratings in the sqlite3 database and skips all videos with high metrics, I’ll wind up with all action all the time on the monitor, and the player doesn’t have to cycle through all the still-video guesses first.

See? The machine learns, so this is definitely a machine learning breakthrough.

I can haz mp4?

Let’s Encrypt was so ridiculously easy to install on my private web server that I wondered whether I could switch to mp4s for gifs. I mean, video snippets. I can’t do those directly on wordpress.com, because wordpress.com does not support controlling where mp4 videos appear in email posts.

So let’s try!

Did it work? Huh? Huh?

[Edit: It does seem to kinda work, but not in Chromium? And the snippets don’t autoplay, even if the WordPress page says that’s supported. So once again, like for the nine thousandth time, I’m struggling with wordpress.com. I should move my blog off of this site and host it myself… *sigh*]