Z-Wave and Emacs

I’ve had a 433-MHz-based “home automation” system (i.e., light switches) for quite some time. It works kinda OK. That is, I’m able to switch the lights on and off, which is the main point.

But, man, the range of 433MHz devices sucks, including all Telldus models. I’ve been able to overcome the problems by having transmitters all over the apt, but getting wall-mounted light switches to work with any kind of reliability has proven impossible.

The problem is that the protocol is inherently unreliable: It just sends commands out into the ether, with no retries and no ACKs.

But there’s newer (but also old) tech available, and 433MHz devices are disappearing from the stores, and the winning protocol is Z-Wave.

So I got a Z-Stick:

It’s a nice device: You plug it in and it shows up as either /dev/ttyUSBx or /dev/ttyACMx (depending on the model, but it makes no difference), and you talk to it by squirting some bytes at it.

I had expected the protocol to be really well-defined and open, but it’s a proprietary protocol that people have been reverse-engineering for years. That led me to believe that there would surely be a nice repository somewhere that describes the protocol in detail and has, say, an XML file defining all the different packets.

Nope.

But after some googling I found this gist that at least let me check whether I could talk to the device…

… and it works!

It turns out that the Z-Wave protocol is kinda nice. Each packet has a checksum, devices retransmit commands a few times unless they get an ACK, and mains-powered Z-Wave devices (like outlets) work as repeaters, so the Z-Wave network works as a mesh. It’s kinda cool.
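
The framing itself is simple enough to sketch in a few lines. Here’s roughly what talking to the stick looks like in Python with pyserial — this is my own reading of the Serial API framing (not anything official, and not the code I actually run), so treat the details as assumptions:

# A sketch, not the real code: build a Z-Wave Serial API frame, compute the
# XOR checksum, send a "get version" request (function ID 0x15), and check
# that the stick answers with an ACK (0x06).
import serial  # pyserial

SOF, ACK = 0x01, 0x06

def checksum(body):
    # XOR of length, type, function ID and parameters, seeded with 0xFF.
    acc = 0xFF
    for b in body:
        acc ^= b
    return acc

def frame(func_id, params=b''):
    body = bytes([len(params) + 3, 0x00, func_id]) + params  # 0x00 = request
    return bytes([SOF]) + body + bytes([checksum(body)])

port = serial.Serial('/dev/ttyACM0', 115200, timeout=1)  # or /dev/ttyUSB0
port.write(frame(0x15))                    # "get version"
print(port.read(1) == bytes([ACK]))        # True if the stick ACKed us
print(port.read(64))                       # the raw version response frame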

There’s a ton of software to control these devices, but using something like OpenHAB is just so… end-userish.

Instead I wanted to just plug it into my existing Emacs-based system so that I don’t have to, like, use software. Software sucks.

If no machine-parseable spec is available, at least there must be some other sensible software out there that I can just crib implementation details from, right? So I binged “z-wave python”.

And all I found was python-openzwave, which turns out to be nothing but a wrapper around OpenZWave, which is a C++ library.

Whyyy.

It’s a simple protocol, really. You just read from a serial device and then squirt some bytes back at it. It’s not like you need to do realtime Fourier transforms on a vast byte stream or anything.

But who am I to criticise people for choosing odd programming languages to implement their free software? Since it’s C++, they’ve probably at least created some kind of over-engineered monster where XML files define the protocol, objects are created from the stream, and lots of polymorphism makes the control flow impossible to follow (per the “everything happens somewhere else” object-oriented methodology). But that’s good, because then I can just use the protocol definition files and ignore the rest.

Right?

Right?!?

No, the main workflow is based on nested if statements with lots of “switch( _data[1] )”. But surely after that they’ll parse the protocol packets into something sensible?

*sigh*

Well, at least they used variable names that are understandable. But what are data[4] and data[6]?

*sigh*

I’M SORRY! THIS IS…

OK, again: I have no business giving a code critique of this library, written by, I’m sure, very nice people and put on the intertubes for the world to peruse and use. And despite being written in the “least information density per pixel displayed” style, it’s clear and easy to follow, and has an impressive number of comments. It’s still all kinds of wrong, though.

Perhaps it’s just coming from a different culture? It’s Windows-style programming?

I don’t know, but anyway: with guidance from this excellent piece of, er, software, I was able to make Emacs parse and act on the commands that arrive when I touch a wall switch.
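
The receiving side boils down to: read frames off the serial port, ACK them, and dispatch on the function ID and command class. My version is Emacs Lisp, but here’s a sketch of the idea in Python — the byte offsets and constants are my reading of the protocol, so don’t take them as gospel:

# Sketch of the receiving side: read frames, ACK each one, and react when a
# node sends a BASIC command -- which is what happens when a wall switch is
# touched.  Offsets and constants are assumptions; checksum checking omitted.
import serial

SOF, ACK = 0x01, 0x06

def read_frame(port):
    while port.read(1) != bytes([SOF]):
        pass                                  # sync on start-of-frame
    length = port.read(1)[0]
    rest = port.read(length)                  # type, function ID, params, checksum
    port.write(bytes([ACK]))                  # ACK it, or the sender retransmits
    return bytes([SOF, length]) + rest

port = serial.Serial('/dev/ttyACM0', 115200)
while True:
    f = read_frame(port)
    # 0x04 = application command handler, 0x20 = COMMAND_CLASS_BASIC
    if len(f) > 9 and f[3] == 0x04 and f[7] == 0x20:
        node, value = f[5], f[9]
        print(f"node {node} says {'on' if value else 'off'}")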

And Z-Wave works! Where a couple of my light switches were a bit hit-and-miss before, they have now worked with 100% reliability over the last week.

I don’t have any Z-Wave outlets yet, so I haven’t bothered with sending commands to devices, but I’m sure I’ll have to implement that at some point. As far as I can tell, it should be pretty straightforward. I foresee a lot of “but what’s _data[7]?” in my future.

Somebody should still create a Z-Wave repo with protocol definitions, especially since it’s now an open-ish standard. (Published only as PDFs, of course.)

[Edit: I should have googled a bit more, because it pretty much looks like everything I wondered about is in the OpenHAB distribution, which is in Java and has more XML protocol definition files than you can shake a stick at. Well done, Java peeps.]

Twiddling Youtube; or, I mean, Innovations in Machine Learning

I mean, we’ve all been annoyed when we’ve set up our USB monitor in our hallway that displays weather data, and then decided to show videos from Youtube that somehow relate to the music that’s playing in our apartment; we’ve all dreamed of having something like the following catch our eyes when passing by on the way to the kitchen.

Oh, what a marvellous dream we all had, but then it turned out that most of the videos that vaguely match the song titles are still videos.

So many still photo videos. So very many.

I mean, this is a common problem, right? Something we all have?

Right?

Finally I’m writing about something we all can relate to!

So only about five years after first getting annoyed by this huge problem, I sat down this weekend and implemented something.

First I thought about using the video bandwidth of the streaming video as a proxy for how much liveliness there is in a video. But that seems error-prone (somebody may be uploading still videos in very HD and with only I-frames, and I don’t know how good Youtube is at optimising that stuff), and why not go overboard if we’re dipping our feet into the water, to keep the metaphor moist.

So I thought: Play five seconds of a video, taking a screenshot every second, then compare the snapshots with ImageMagick “compare” to get a more solid metric, and then I can check whether bandwidth is a good proxy after all.

The “compare” incantation I’m using is:

compare -metric NCC "$tmp/flutter1.jpg" "$tmp/flutter2.jpg" null:

I have no idea what all the different metrics mean, but one’s perhaps as good as another when all I want to do is detect still images?
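
So the measurement boils down to something like this sketch (the real thing is the Perl/Bash tangle described just below; using ImageMagick’s import to grab the screen is an assumption about how the screenshots get taken):

# A sketch of the measurement: grab a snapshot once a second while the video
# plays, run ImageMagick's compare on consecutive pairs, and average the NCC
# metric.  Values near 1.0 mean "nothing moved", i.e. a still-image video.
import subprocess, time

def snapshot(path):
    subprocess.run(['import', '-window', 'root', path], check=True)

def similarity(a, b):
    # compare prints the metric on stderr; a non-zero exit just means "different".
    p = subprocess.run(['compare', '-metric', 'NCC', a, b, 'null:'],
                       capture_output=True, text=True)
    return float(p.stderr.split()[0])

shots = []
for i in range(5):
    path = f'/tmp/flutter{i}.jpg'
    snapshot(path)
    shots.append(path)
    time.sleep(1)

metrics = [similarity(a, b) for a, b in zip(shots, shots[1:])]
print(sum(metrics) / len(metrics))   # > 0.95?  Probably a still-image video.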

So after hacking for a bit in Perl and Bash and making a complete mess of things (asynchronous handling of all the various error conditions and loops and stuff is hard, boo hoo, and I want to rewrite the thing in Lisp and use a state machine instead, but whatevs), I now have a result.

Behold! Below I’m playing a song by Oneohtrix Point Never, who has a ton of mad Youtube uploaders; watch it cycle through the various hits until it finds something that’s alive.

Err… What a magnificent success! Such relevance!

Oh, shut up!

*mumble*

But let’s have a look at the data (I’m storing it using sqlite3 for convenience) and see whether videos are classified correctly.
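
The listings below are basically the output of a query along these lines (the table and column names are guesses, since I’m not showing the schema here): the compare metric, the Youtube video ID, and the bitrate in bits per second.

# Roughly how the listings below are produced (table/column names are guesses).
import sqlite3

db = sqlite3.connect('youtube-stillness.sqlite3')
for metric, video_id, bitrate in db.execute(
        'SELECT metric, video_id, bitrate FROM videos ORDER BY metric DESC'):
    print(metric, video_id, bitrate)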

I’m saying that everything with a “compare” rating of more than 0.95 is a “still image video”. So first of all we have a buttload of videos with a metric of 0.9999, which is very still indeed.

0.9999 yAZrDkz_7aY 36170
0.9999 yCNZVvP7cAE 150241
0.9999 yai4bier1oM 128630
0.9999 yt1qj-ja5yA 476736
0.9999 yxWzoYQb5gU 244076
0.9999 z1YKfu5sD24 723392
0.9999 z28HTTtJJEE 372014
0.9999 zOirMAHQ20g 574614
0.9999 zWxiVHOJVGU 70909

But the bitrates vary from 36kbps to 723kbps, which is a wide range. So let’s look at the ones with very low metrics:

0.067 slzSNsE7CKw 1359008
0.1068 m_jA8-Gf1M0 2027565
0.1208 7PCkvCPvDXk 1702924
0.1292 zuDtACzKGRs 3969219
0.1336 VHKqn0Ld8zs 1607430
0.1603 Tgbi3E316aU 1877994
0.2153 ltNGaVp8PHI 506771
0.2192 j14r_0qotns 683650
0.2224 dhf3X6rBT-I 1715754
0.2391 WV4CQFD5eY0 416458
0.2444 NdUZI4snzk8 2073374

Very lively!

These definitely have higher mean bitrates, but a third of them have lower bitrates than the highest bitrated (that’s a word) still videos, so my guess was right, I guess. I guess? I mean, my hypothesis has proven to be scientifically sound: Bitrates aren’t a good metric for stillness.

And finally, let’s have a peek at the videos that are around my cutoff point of 0.95 (which is a number I just pulled out of, er, the air, yeah, that’s the expression):

0.9384 t5jw3T3Jy70 802643
0.9454 5Neh0fRZBU4 1227196
0.9475 ygnn_PTPQI0 1907749
0.949 XYa2ye4GPY8 84848
0.9501 myxZM9cCtiE 1202315
0.9503 lkA9BRDWKco 297490
0.9507 mz91Z2aRJfs 203855
0.9512 IDMuu6DnXN8 358156
0.9513 bsFRMTbhOn0 198332
0.9513 v6CKHqhbos8 1686790
0.9514 3Y1yda0YfQs 1012911

Yeah, perhaps I could lower the cutoff to 0.90 or something to skip the semi-static videos, too, but then I’d also skip videos that just have large black areas on the screen.

Hm… and there’s also a bunch of videos that it wasn’t able to get a metric on… I wonder what’s up with those.

1 pIBEwmyIwLA 349057
1 pzSz8ks1rPA 108422
1 qmlJveN9IkI 83383
1 srBhVq3i2Zs 1651041
1 tPgf_btTFlc 111953
1 uxpDa-c-4Mc 691684
1 uyI3MBpWLuQ 45383

And some it wasn’t able to play at all?

0 3zJkTILvayA 0
0 5sR2sCIjptY 0
0 E44bbh32LTY 4774360
0 FDjJpmt-wzg 0
0 U1GDpOyCXcQ 0
0 XorPyqPYOl4

Might just be bugs from when I was testing the code, though, and those are still in the database. Well, no biggie.

You can find the code on Microsoft Github, but avert your eyes: This is bad, bad code.

Anyway, the fun thing (for me) is that the video monitor will get better over time. Since it stores these ratings in the sqlite3 database and skips all videos with high metrics, I’ll wind up with all action all the time on the monitor, and the player doesn’t have to cycle through all the still-video guesses first.

See? The machine learns, so this is definitely a machine learning breakthrough.

I can haz mp4?

Let’s Encrypt was so ridiculously easy to install on my private web server that I wondered whether I could use it to serve mp4s instead of gifs. I mean, video snippets. I can’t do those directly on wordpress.com, because wordpress.com doesn’t support controlling where mp4 videos appear in email posts.

So let’s try!

Did it work? Huh? Huh?

[Edit: It does seem to kinda work, but not in Chromium? And the snippets don’t autoplay, even if the WordPress page says that’s supported. So once again, like for the nine thousandth time, I’m struggling with wordpress.com. I should move my blog off of this site and host it myself… *sigh*]

Innovations in Web Scraping

I added event descriptions to my Concerts in Oslo site a few months back. It mostly worked kinda OK, but it uses heuristics to find out what “the text” is, so it sometimes includes less-than-useful information.

In particular, those fucking “YES, I KNOW IT’S A FUCKING COOKIE!” texts that all fucking web sites slather their pages with now fucking get in the way, because those texts are often a significant portion of the text on any random page. (Fucking.)

But I added filtering for those bits, and things looked fine.

Yesterday I was told that all the Facebook events basically end up as nothing but that cookie warning (in Norwegian), and that’s because the Facebook event pages now contain nothing but that text, plus some scaffolding to load the rest as JSON:

To build the Facebook event page, about 85 HTTP calls are made and 6MB worth of data is loaded.

I contemplated reverse-engineering the GraphQL calls to get the event descriptions (since Facebook has closed all access to public events via their API), but then it struck me: The browser is showing me all this data, so perhaps I could just point a headless browser at the site, ask it to dump its DOM, and then parse that?

Which I’ve now done.

I know, it’s probably a common technique, but I’d just not considered it at all. A mental block of some kind, I guess. I’m so embarrassed. Of course, it now takes 1000x longer to scrape a Facebook event than a site that just puts the event descriptions in the HTML, but whatevs. That’s what you have caches for.
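
The trick itself is only a handful of lines. Here’s a sketch of it in Python via Selenium’s (since deprecated) PhantomJS driver — which isn’t necessarily how my scraper drives PhantomJS, and the event URL is made up:

# A sketch of the "ask a headless browser for the DOM" trick.
from selenium import webdriver

driver = webdriver.PhantomJS()
driver.get('https://www.facebook.com/events/1234567890/')
# May need an explicit wait here so all the JSON-loaded content has arrived.
html = driver.page_source
driver.quit()

# `html` is the rendered DOM, ready for the usual "find the text" heuristics.
print(len(html))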

I’m using PhantomJS, and it seems to work well (even if development has been discontinued). PhantomJS is so easy and pleasant to work with that I think I’ll try to stick with it until it disappears completely. Is there another headless browser that’s as good? All the other ones I’ve seen are more… enterprisey.

Let It Snow

I wanted to make the Carpenter series of posts look ridiculously romantic, so I got the swashiest font I could find. But that wasn’t enough: I wanted to make it snow, too.

Now, this blog is on WordPress.com, which adds limitations to what is easy or even possible to do. I wanted a CSS-only snowing solution that didn’t involve adding any new HTML elements, and that turns out not to be the common thing to do? There are approximately five hundred thousand blog articles out there about making web pages snow, but they either make it snow in the background, or are a mess of <div> <div> <div>s to make it snow in front of an image.

Looking at this pretty snowing effect, I wanted to do the same, but by just adding a class="snow" to the <a> element already surrounding the <img>s on this blog.

Presto:

This, as anybody who’s done CSS knows, took way too long, and I went down many blind alleys before I got it to work properly. (So it’s a good thing I did it while I was on holiday.) Here’s the CSS I ended up with:

.snow {
  display: inline-block;
  position: relative;
}
.snow::after {
  content: '';
  position: absolute;
  display: block;
  top: 0; left: 0;
  width: 100%;
  height: 100%;
  background-image: url('https://larsmagne23.files.wordpress.com/2017/11/pl2.png'), url('https://larsmagne23.files.wordpress.com/2017/11/pm2.png'), url('https://larsmagne23.files.wordpress.com/2017/11/pm2.png'), url('https://larsmagne23.files.wordpress.com/2017/11/ps2.png'), url('https://larsmagne23.files.wordpress.com/2017/11/ps2.png');
  animation: snow-fall 5s linear infinite;
}
@keyframes snow-fall {
  0% { background-position: 0 0, 0 0, 30px 40px, 0 0, 10px 0;}
  33% { background-position: 0 137px, 0 70px, 45px 100px, 0 30px, 20px 30px; }
  66% { background-position: 0 274px, 0 140px, 35px 200px, 0 60px, 20px 60px; }
  100% { background-position: 0 413px, 0 200px, 30px 240px, 0 100px, 10px 100px; }
}

So: What’s going on here is that I’m adding an ::after element to the <a> element, and that element has empty content, but several background images. (Five of them, to be precise.) These images are mostly transparent, but have snowflakes of various sizes and blurrinesses. In the original version, all these “layers” are the same size but animated at different speeds so that the closest layer is fastest. That’s not possible with the ::after approach, because there can only be one ::after, and therefore only one animation speed.

So instead I chopped the different layer images into different sizes, and then I can animate one from 0px to 100px while I animate a closer, faster layer from 0px to 200px, and so on. The important thing is that the animation distances have to match the image sizes, because otherwise you’ll get a shuddering effect when the animation restarts.

And then you can do any number of things, like adding some slight wobble and windiness to the scene.

The more layers you add, the more CPU-intensive the result will be, depending on whether the user’s browser hardware-accelerates the CSS animation, of course. But think of it this way: It makes the computer nice and toasty warm: Perfect for winter.

(One complication to getting this to work on this blog is that I had called the animation “snow”, so the CSS read “animation: snow 5s linear infinite”.  WordPress.com helpfully auto-translated this to “animation: #fffafa 5s linear infinite”.  Presumably because “snow” is a colour name.  And that doesn’t work.  Thank you, WordPress.com.)

So when I grow up, my job is definitely going to be for restaurants that specialise in weddings in December.  I’ve already got the CSS and the font!

Responsive Comics

The other month I was staring at the Diamond Previews interface that I hacked up last year. Its main purpose is to allow me (and anybody else) to go through the monthly listings rapidly, without all that clicking and stuff.

I was wondering: Has CSS Flexbox technology progressed to the point where the interface could be transformed “responsively” (i.e., via CSS media queries) from the wide design above to something that fits on a cell phone? It would require a completely different layout: shifting from the three-column layout with many sub-boxes into a single column where some of the boxes would move up and some down and some become a line of buttons here and and and…

The answer seems to be: Nope. While googling for this stuff, all I found was people saying “just add a div outside the other div and then div it up and then you can sort of move some bits around. If that div is placed before that div”.

CSS still, after 20 years, sucks at layout.

*sigh*

But once I had started tinkering with this, I couldn’t just give up, so I just wrote a bunch of JS to transform the layout, and presto:

So purdy! So UX!

And so I started wondering whether this might make sense as an app, so I wrapped it up in Cordova and shipped it over to Google…

Who rejected it outright because of copyright violations. “But,” I said, “this is like a sales catalogue and isn’t it fair use to show covers in a sales catalogue, man? Man?” And they said “nope; go away”.

While waiting for them to reject the app, I started thinking about… sharing… “Wouldn’t it be nice to make it possible for people to ‘curate’ lists and share these with others?”, so I read up on Firebase and presto: “Curate” button.

Firebase is surprisingly nice, and has a lot of documentation. The main problem is that Firebase covers so many, many use cases that trying to find the correct approach for Goshenite entailed scratching my head for a few hours.

But when the app rejection arrived I just thought “eh, whatevs”, so it’s a bit lacking in features, and those features are probably never going to be implemented. It’s more of a toy than anything.

The source code can be found on Github, as usual.

“Concerts in Oslo” App Updated

I took a short holiday to sit in the garden and update the Concerts in Oslo app. I mainly wanted to make navigation more intuitive by having the “back” button do what you’d expect it to do, but I also wanted to play with the Google Maps API and see whether that’s any fun.

And it is. Results to the right. I’ve also added a way to list concerts in order of proximity, nearest first. You know. For those days when you’re thinking “I want to go to a concert; I don’t care which one, but it has to be close. Because I’m too tired to walk far.”

THIS MAKES SENSE!

The Android version is out now; the iOS version will follow once I’ve tested it on the phone I forgot to bring with me. So a couple of days plus the nine weeks Apple will use to approve the update.

But one can’t post a blog post like this without bitching about Google, can one? I don’t think so. First of all, the Google Play Console defaults to the dominant language of the IP address you’re connecting from, which relegates all developers from non-English-speaking countries to third-class status: We’re presented with awkwardly translated tech speak that barely made sense in English in the first place. And it’s impossible to google any of the messages and errors you’re inevitably presented with to find out what they mean, because all those questions and answers are in English.

And there’s no way to switch to English… until you notice that the URL itself has a parameter that says “hl=no”, and you can edit that to “hl=en”, and then the interface will behave and become marginally more understandable.

Not very, though: I seem to have pushed a minimum API level of 23, which excludes everybody running anything older than Android 6 from using the app. And there seems to be no way to go back to a minimum API level of 14 (with SDK 23), which is what I was using. The Play Console gives me errors, at least, when I try.

*sigh*

I’ll just leave you all with this unrelated screen that Android displays when I plug my phone into the laptop:

If you press “Cancel” here… is it going to cancel the charging? Or not? I’ll leave that as an exercise for the class.