I maintain a site that lists concerts in Oslo.
In Facebook’s continuing war on its users, the events API was discontinued without warning a month ago. (That is, they may allow access to some apps after doing an individual review, but somehow I suspect that allowing access to a service that tries to drive foot traffic to venues that use Facebook to host their calendars won’t be one of those special apps, because Facebook never wants anybody to leave Facebook ever, I think?)
About a quarter of the venues have their event listings on Facebook only, so that’s a rather big blow to having a useful concert listing site.
So I spent an evening reimplementing the scraping of Facebook event web pages, and while doing that I started thinking about whether I should fancify my Concerts in Oslo web site. Scraping an image and a summary from the event pages didn’t seem insurmountable… Just find the largest image and the most coherent textual part of the HTML and there you are. (You do have to filter out the “COOKIES EXIST! DID YOU KNOW THAT!” texts on most pages, though, because they’re often the longest texts.)
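Roughly this sort of thing, as a sketch (not the actual scraper: the regexes, the “largest image” heuristic and the cookie-word filter are just stand-ins):

```typescript
// Hypothetical sketch of the scraping heuristic: grab the biggest <img>
// and the longest tag-free chunk of text that isn't a cookie notice.
async function scrapeEventPage(url: string): Promise<{ image?: string; summary?: string }> {
  const html = await (await fetch(url)).text();

  // "Largest image": naively compare width/height attributes; real pages
  // may need og:image or actually downloading the images to measure them.
  let image: string | undefined;
  let bestArea = 0;
  for (const m of html.matchAll(/<img[^>]*>/gi)) {
    const tag = m[0];
    const src = /src="([^"]+)"/i.exec(tag)?.[1];
    const w = Number(/width="(\d+)"/i.exec(tag)?.[1] ?? 0);
    const h = Number(/height="(\d+)"/i.exec(tag)?.[1] ?? 0);
    if (src && w * h > bestArea) {
      bestArea = w * h;
      image = src;
    }
  }

  // "Most coherent text": the longest chunk between tags that doesn't
  // look like a cookie banner.
  const cookieWords = /cookie|consent|personvern/i; // placeholder filter
  let summary: string | undefined;
  for (const chunk of html.split(/<[^>]+>/)) {
    const text = chunk.trim();
    if (text.length > (summary?.length ?? 0) && !cookieWords.test(text)) {
      summary = text;
    }
  }

  return { image, summary };
}
```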
What took the most work was figuring out how this data should be loaded. In total, all the extra data comes to about 45MB, so just putting it all in that initial table doesn’t really work. And I wanted to keep the data structures the same, so that the apps would also continue to work.
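One way to square that circle (a sketch, not the site’s actual code; the /event-details/ URL scheme and the EventDetails shape are invented for the example) is to leave the original table payload alone and only fetch a small per-event blob when it’s needed:

```typescript
// Assumed shape of the extra per-event data.
interface EventDetails {
  image?: string;
  summary?: string;
}

// Cache the promises so repeated lookups don't re-fetch.
const detailsCache = new Map<string, Promise<EventDetails>>();

function loadDetails(eventId: string): Promise<EventDetails> {
  let cached = detailsCache.get(eventId);
  if (!cached) {
    // Hypothetical endpoint: one small JSON file per event instead of
    // 45MB up front in the initial table.
    cached = fetch(`/event-details/${eventId}.json`).then((r) => r.json());
    detailsCache.set(eventId, cached);
  }
  return cached;
}
```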
I first tried displaying the event summaries on hover, but that was insanely annoying. Then I tried expanding the table rows as they scrolled into view, and that was even more annoying, because things would move around a lot and you’d get confused.
UX is hard!
So I settled on pre-expanding the bottom border of each table line and then putting the event info in an absolutely-positioned div relative to the line. It’s a crime against CSS! But it works!
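Here’s one way to read that trick, sketched as a bit of TypeScript (assuming one table row per concert with an .event-info div inside its last cell; the 5em is arbitrary):

```typescript
// Reserve space under each row with a fat transparent border, then let an
// absolutely-positioned div overflow into that space.
function expandRows(table: HTMLTableElement): void {
  for (const row of Array.from(table.rows)) {
    const cell = row.cells[row.cells.length - 1];
    if (!cell) continue;

    // "Pre-expand the bottom border": the transparent border makes the
    // row taller without showing anything in the reserved space.
    cell.style.borderBottom = "5em solid transparent";

    // Position the event info relative to the cell (position: relative on
    // a <tr> is unreliable, hence the crime-against-CSS feeling).
    cell.style.position = "relative";
    const info = cell.querySelector<HTMLElement>(".event-info");
    if (info) {
      info.style.position = "absolute";
      info.style.top = "100%"; // starts just below the cell's content...
      info.style.left = "0";   // ...and overflows into the border space
    }
  }
}
```

The transparent border is what keeps the rows from jumping around: the space is already there, and the event info just gets painted into it.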
And now I don’t have to do any work on the site… until Facebook changes their HTML again.