design: a Black Lives Matter poster

It is the least I could do; downloading this PDF and printing out the first three pages is the least you can do.

I made it in the free and open source LibreOffice program, using the fonts Cantarell Extra Bold and Dobkin Script, both free to install. I filled in the ‘v’ in “Lives” using the free and open source Inkscape program.

Posted in design | Leave a comment

Google Play Music: 18 million songs and no respect

I signed up for Google Play Music All Access (Google marketing managers are incompetent at naming) the week it was announced, back in the good old days when Google’s motto was “Don’t be evil” and every month they brought exciting advances in the power of the web. For the $7.99 introductory offer you could listen to 18 million songs! Access to nearly every song changes a music fan’s life; hear something you like, identify it with Shazam, then dig as deep as you care. When Google introduced its cute Chromecast Audio puck and I could play all those songs in pretty high quality on audio equipment, the experience got even better.

When Google repeatedly extended YouTube with Red/Plus/Music/blahblah alternatives, I mostly ignored its half-assed attempts to turn music listening into random video playlist watching, but I got the premium version for free with the fantastic benefit of no YouTube commercials ever! All in all, GPMAA is the greatest $107.88 a year I spend.

But 18 million done badly is not everything

Except… it isn’t access to everything. I knew Prince aka The Artist Formerly Known as Prince had a love/hate relationship with digital music and streaming, so I expected his catalog might be less available, along with other streaming holdouts like Bob Seger. But the random undocumented omissions in Google Play Music All Access are intermittently infuriating.

example: Unforgettable, but album amnesia

I first realized how bad it is when I looked for Nat King Cole, found most of his albums unavailable, then searched for his time-travelling duet with daughter Natalie. Her album Unforgettable… with Love is available, but not the eponymous track where she duets with Dad! Fine, whatever dispute Google has over Nat King Cole’s catalog extends to this duet. But the song simply doesn’t appear in Google Play Music’s track listing for the album! Don’t f***ing lie to me about which tracks are on an album!

Here’s another example, the immortal Blues Brothers Original Soundtrack Recording. According to GPM, these 7 tracks are the entire record. There’s a hint of the problem with missing track 6 (the gospel choir singing “The Old Landmark”), but all the songs from the ending concert are gone! No Cab Calloway singing “Minnie the Moocher,” no “Sweet Home Chicago,” no “Jailhouse Rock.” It’s an 11-track album. What the hell?!

This is not a track listing of the album!

example: Andy Summers’ creativity castration

After listening all the way through the Police’s oeuvre (four exceptionally good albums, one short of the 5-album cutoff for “immortal run” eligibility), I wanted to continue with their solo careers, starting with guitarist Andy Summers (a better Edge than the Edge). I remember reading a favorable review of his album titled The Golden Wire or something, but at the time I never heard it on the radio and wasn’t about to buy it unheard (kids of today, we had it so hard before the Internet). So I go to Google Play Music, search for Andy Summers, view All albums… no indication of such an album. I read his Wikipedia article; there it is, in 1989. It’s not obscure, it’s a central part of his artistic output. Don’t f***ing lie to me with a list of All albums of an artist that isn’t all albums!

It is awful that Google Play Music silently omits the songs and albums that it doesn’t have rights to sell or stream. “Our company mission is to organize the world’s information and make it universally accessible and useful.” So do it, you lazy f***ers!

(from How Google Search Works | Our Mission)

Similarly, Andy Summers’ collaboration I Advance Masked with Robert Fripp on A&M is unavailable and unmentioned. If I know the album title and search for it, GPM shows links to YouTube videos that are probably illegal uploads by well-meaning fans, but I want to know that they collaborated and released an album. GPM’s presentation of music information is insultingly incomplete.

But no respect

When I search for a song by an artist, I expect the first result to be the song from the album on which it was released. That’s where it all began, that’s what I care about, that’s where Google provides some useful information (often it’s the opening section of the album’s Wikipedia article). Instead GPM will randomly show me the song on garbage “Best of the NNN0s” compilations, movie soundtracks, sad live bootlegs, all the artist’s greatest hits albums, karaoke versions, and cover bands. Everything but the original album! I wind up having to search Wikipedia or Discogs to find the album title, then search for that, then click the album, then find the song.

not one of these is a studio album by The Spinners! (and the original album is not in the “95 more”!)

Metadata wrong all over

Mayer Hawthorne 'Man About Town' album in GPM with conflicting year of release
Wikipedia editors know it’s a 2016 album, Google Play Music is confused.

Google frequently gets release dates wrong. Supposedly it gets this info from the record companies, so arguably it’s not Google’s fault, but music web sites manage to get this info right. Google is happy to reuse Wikipedia content about artists and albums, but it can’t be bothered to integrate more deeply with sites that know more about albums.

“OK Google, what’s a botched remastering?”

Google Play Music doesn’t even pretend to care about different remasterings of albums. When you find an album, Google’s preference is to show the latest remaster it can lay its hands on, despite the disaster of the loudness war: albums remastered and remixed to sound punchier on the radio.

When there are multiple versions of an album, GPM’s presentation is poor. Often it presents two or more identical thumbnails of an album alongside the deluxe version or the 25th anniversary re-release, but you can’t tell which is which without visiting each album in turn. Sometimes two albums are indistinguishable even then.

Google Play Music is dying anyway

I’ve been meaning to moan about Google Play Music All Access misfeatures for years. I’m finally doing so as Google announces it’s killing the product. Already you can’t buy digital songs on it any more. Google will force everyone to YouTube Music, and the lamentations are disheartening. Unlike some subscribers, I think I have local copies of all the digital music files I uploaded to GPM, mostly from the 2000s when I would buy “singles” on GPM and Amazon, and artists’ web sites would offer MP3 downloads of obscure tracks. But why put up with Google’s shenanigans if there are better alternatives?

Now would be a perfect opportunity to jump ship to a better music streaming service that respects musical artistry and, I hope, pays more than a pittance for each song I listen to. Qobuz is an obscure music streaming service that offers higher-resolution tracks (more important as an indicator of careful mastering than for any audible increase in fidelity), and it integrates with Roon’s music playing software (another darn blog article I should write). However, it will hurt to give up ad-free YouTube video watching. Even more monthly subscription fees are in my future…

Posted in music, software

music: wondering at Stevie Wonder

Play Stevie Wonder’s immortal run of albums – Music of My Mind, Talking Book, Innervisions, Fulfillingness’ First Finale, Songs in the Key of Life – and you will be repeatedly floored by his artistry and talent. What brings tears of joy are the elements I’d forgotten amongst the greats: the perfect rainy-day funk of “Tuesday Heartbreak”, the burning vocals in “It Ain’t No Use,” the zOMG what did he just do chord changes in the B melody of “Please Don’t Go,” the Nokia ringtone teleported into “All Day Sucker,” …

This Slate article is emphatic: “arguably the greatest sustained run of creativity in the history of popular music.” Is it “greater” than Joni Mitchell’s run, or Elvis Costello’s first five albums, or the Beatles’ lighting the rocket engines around the release of Rubber Soul? The obvious answer is they’re incomparable in both senses of the word.

But I’ll give it a go. Stevie Wonder’s lyrics can’t compete with Joni’s or Elvis’s; they’re at best direct expressions of emotions, and often convoluted without strong wordplay. Co-producers Robert Margouleff and Malcolm Cecil on the first four albums are deservedly famous for advancing synthesizers with their T.O.N.T.O. system, and for their use of synthesizers for bass, strings, harmonies – everything but drums.

Rhythm, not drums

Stevie Wonder is obviously outrageously talented on keyboards and harmonica, and as a singer. It’s easy to overlook his drumming; he’s not deeply in the pocket, or super-heavy, or flashy. He can ride the hi-hat like a disco drummer, but his drumming doesn’t propel the song; it’s another rhythmic element subservient to musical ideas. Stevie gets to play drums and Moog bass and percussive keyboards, so no one instrument has to drive.

Songs in the Key of money

I bought Innervisions and Fulfillingness’ First Finale when they came out. Re-listening, I forgot how bleak Innervisions is; Stevie Wonder moved away from love songs and heartache songs to look around, and he was distressed by what he saw under the presidency of Richard Nixon.

When Songs in the Key of Life came out as a double album initially bundled with a bonus 7-inch EP, I balked. $13.98 was a lot of money! Also, some low-talent British singer re-made “Isn’t She Lovely” as his own mawkish single when Stevie Wonder was unwilling to shorten the song, and BBC Radio 1 stupidly played this over and over instead of the far superior original album track. Over time I grew familiar with the towering songs, including “As” and “If It’s Magic”, because friends had the double album. Listening to it on a streaming service, the additional tracks from the bonus EP are a revelation. “All Day Sucker” is unlike anything else Stevie Wonder did, and “Saturn” is trippy. And the amount of time and care lavished on the record is incredible:

Nonstop sessions stretched across two-and-a-half years, two coasts, and four studios: Crystal Sound in Hollywood, New York City’s Hit Factory, and the Record Plant outposts in Los Angeles and Sausalito. More often than not, he could be found in one of those spaces, sometimes for 48 hours at a time, chasing his muse with a rotating crew of engineers and support musicians. Over 130 people were involved in the recording, including Herbie Hancock, George Benson, “Sneaky Pete” Kleinow and Minnie Riperton. “If my flow is goin’, I keep on until I peak” became Wonder’s mantra.

Inside Stevie Wonder’s Epic ‘Songs in the Key of Life’

Although there are exceptionally talented songwriters and musicians today, same as it ever was… we shall never see its like again.

Posted in music

music: Trevor Horn and the Buggles in 1979

I revere producers as much as musicians and songwriters. I was dimly aware of producers, starting with the mysterious “produced by Bones Howe” in big letters on the back of some record… I thought it was the Carpenters but now I can’t find it. What really piqued my interest was Chic’s in-your-face credit on most of their early albums:

Composed, produced, arranged, conducted, and performed by Nile Rodgers and Bernard Edwards for the CHIC Organization, Ltd.

and then following all the records Chic and Nile and ‘Nard produced. It’s a joy to revisit a classic song, check the credits, and realize “Wait, that’s yet another great song produced by…” such as unheralded Alan Tarney (who liked his own song “Once in a While” so much he produced it on two different records) or the almighty Arif Mardin. Then you can lose yourself in Wikipedia and Discogs finding all their production credits.

And so to Trevor Horn: the bass player, singer, video (killed the radio) star, and maximalist producer. As a producer he’s probably most famous for his work with Frankie Goes to Hollywood (1984 strikes again!), and, my favorite, his spectacular production for ABC’s The Lexicon of Love (when a producer full of ideas meets a hungry band really going for it, magic happens). The guitars on the latter sound 10 feet tall on a great stereo.

But there’s a lot of prehistory to Trevor Horn. Listening to the deluxe reissue of the legendary Dusty in Memphis by the great British pop-soul singer Dusty Springfield reminded me of her unexpectedly solid dance single that I bought on vinyl years ago… turns out it was written by Trevor Horn, his partner in Buggles Geoff Downes, and Bruce Woolley.

1979 was certainly Buggles’ annus mirabilis. They also made the “Caribbean Air Control” single as Big A, made the impressively weird space disco-pop “Star to Star” album as Chrome, then released “Video Killed the Radio Star,” first by Bruce Woolley & the Camera Club and then breaking big as Buggles. Then, only 10 days into 1980, Buggles released “The Age of Plastic” album. That’s quite something; it’s about as productive as Nile Rodgers & Bernard Edwards of Chic were at the same time.

Posted in music

computers generating art

First came text

GPT-2 and other AI (Artificial Intelligence) Transformer models are impressive. After the AI has ingested an enormous corpus of text, you give it some text and it continues, generating word after word of recognizable, even creative, language that maintains your style. Here’s how an online GPT-2 system continued when I fed it the words in bold:

“The AI ruined my life” muttered Denise.

“You are the target of someone else’s revenge” replied Cale.

“And you did nothing to prevent it”

“Your words…”

“Yes, that is what I say” smiled Cale.

“So the boys are more than just a valuable asset”


“But I have things to tell you guys now”


Cale looked at it. “If we were to adopt our normal lives, we wouldn’t be able to afford everything we want in this world. So for us, we must keep working. It’s this effect of working hard that we were able to obtain everything, even the rare

It’s not simply regurgitating matching phrases that it has ingested. In compressing down the 40 gigabytes of text it was fed into some kind of internal representation in its neural network of only 1.5 billion parameters, it has noticed certain features of what you give it and they trigger… something… in its neural network that riffs on what you provided. Give it the same starting text again and it goes off in a different direction.

“The AI ruined my life” muttered Denise.

“Ha! You’re quite an interesting girl. Like a puppet.”

A sorcerer suddenly appeared next to the table and pointed his wand at Denise, saying “Use your magic!”

A short spell field with dazzling golden lights appeared above Denise’s head and soon a single tablet appeared on the table and began to write. Then the tablet turned around and charged towards the sky with a radiant light.

The tablet charged forwards quickly!

The intensity of the light was frightening. Once it passed the boundary of the spell field, it became all-encompassing. The tablet was so bright that the color seemed to change from a golden to a blue, and not only did it look beautiful,

And again:

“The AI ruined my life” muttered Denise. “I looked over at Bobby, who was sitting with his back against the wall and moving his head side to side. He said nothing. His mouth was hanging open. I then turned back to the door and said, “What the hell is going on?” The door opened and I was met by my father, who had been in hiding for about three weeks and was waiting for me at the door. Bobby was still sitting there, looking over at his father. But in this instance the looks were no longer new. Instead, they had become familiar, like a sick man watching a recently emerged corpse of his loved one. His face turned beet red and he said, “You didn’t make my greatest
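Why does the same prompt go somewhere different every time? Because the model assigns a probability to every possible next token, and the generator samples from that distribution instead of always taking the top pick. Here's a toy sketch in Python – the vocabulary and scores are invented for illustration, nothing like GPT-2's actual vocabulary of roughly 50,000 tokens:

```python
import math
import random

def softmax(scores):
    """Convert raw model scores (logits) into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def sample(tokens, probs, rng):
    """Pick one token at random, weighted by its probability."""
    r = rng.random()
    cumulative = 0.0
    for token, p in zip(tokens, probs):
        cumulative += p
        if r < cumulative:
            return token
    return tokens[-1]

# Hypothetical next-token scores after a prompt like "The AI ruined my".
tokens = ["life", "day", "career", "toaster"]
logits = [3.0, 1.5, 1.0, -2.0]
probs = softmax(logits)

rng = random.Random(0)  # seeded so the sketch is reproducible
continuations = {sample(tokens, probs, rng) for _ in range(200)}
print(continuations)  # over many samples, more than one continuation shows up
```

Push the scores further apart and the output gets repetitive; flatten them and it gets incoherent – the same trade-off these systems expose as a sampling "temperature."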

Transformer models don’t just continue writing in almost any style. With additional fine-tuning they can recognize a question and answer format, or a simple math problem, or a multiple choice quiz, or a request to summarize, … and continue with the answer to the problem better than most humans. And the newest GPT-3 (eleventy billion parameters in the model! fed a trillion words! gargantuan PDF paper!) can do all these without any fine-tuning! It’s ingested so much text that if you give it one or a few examples of what you want it will figure out what you’re asking for, just as a kid can participate in a made-up game without having to go to classes in that game. The following interaction, getting it to use a made-up word, is amazing to me:

To do a “farduddle” means to jump up and down really fast. An example of a sentence that uses the word farduddle is:
One day when I was playing tag with my little sister, she got really excited and she started doing these crazy farduddles.

It’s “merely” responding to input, but be honest, that’s all you’re doing when someone asks “How are you?” or “What day is it?”

It’s been my hope for decades (my thoughts in 2006, 2010) that some AI would gain enough smarts to understand language, then overnight it would ingest every document on the Internet and be the smartest thing in the world. Instead, AI researchers force-feed a huge subset of the Internet into a language model and it “does language” extremely well without understanding what it’s doing or what it all means.

OK so music…

You can apply a similar approach to music. Train a transformer on the musical note instructions in MIDI files, and then give it some starting parameters, and it can generate further musical instructions. OpenAI built such a system, called MuseNet. Here is what transpired when world-unfamous producer skierpage told MuseNet to improvise in the style of Disney starting from Beethoven’s Für Elise. The piano continues well enough, but then the meth-addled drummer comes in from another planet and goes nuts, and then it ends with a piano flourish. I can’t imagine a human being coming up with this.
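The MIDI trick works because a performance reduces to a sequence of discrete events, which a transformer can treat exactly like words. Here is a toy sketch of that tokenization step – the event format and token names are my invention, not MuseNet's actual encoding:

```python
# Each MIDI-ish event: (type, pitch, ticks-since-previous-event).
events = [
    ("note_on", 76, 0), ("note_off", 76, 120),   # opening of Für Elise: E5...
    ("note_on", 75, 0), ("note_off", 75, 120),   # ...alternating with D#5
    ("note_on", 76, 0), ("note_off", 76, 120),
]

def tokenize(events):
    """Flatten note events into string tokens a language model could predict."""
    tokens = []
    for kind, pitch, delta in events:
        if delta:
            tokens.append(f"WAIT_{delta}")
        tokens.append(f"{kind.upper()}_{pitch}")
    return tokens

tokens = tokenize(events)
print(tokens)
# A transformer trained on millions of such sequences then "continues the
# text" token by token, and the tokens decode back into playable notes.
```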

OpenAI has now moved on to generating actual waveforms of music with its new system, called Jukebox. I think the main motivation is that it can generate someone singing lyrics as well as the instrumental performances. This is crazy. It learns to compress digital music files at 44,100 samples a second down to a much smaller compressed representation that only it understands, and then if you ask for music in some style it creates music in that compressed representation and “blows it back up” into millions of samples making up a musical waveform.
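The compress-then-generate idea can be caricatured in a few lines: map each audio sample to the nearest entry in a small "codebook," store only the entry indices, then rebuild an approximate waveform from the indices. Jukebox's real VQ-VAE learns its codebooks and works on chunks of audio, not single samples; everything below is made-up toy data:

```python
def quantize(samples, codebook):
    """Compress: replace each sample with the index of the nearest codebook value."""
    return [min(range(len(codebook)), key=lambda i: abs(codebook[i] - s))
            for s in samples]

def reconstruct(codes, codebook):
    """Decompress: map indices back to codebook values."""
    return [codebook[i] for i in codes]

codebook = [-0.8, -0.3, 0.0, 0.3, 0.8]        # tiny made-up "vocabulary"
samples  = [0.05, 0.31, 0.79, -0.25, -0.9]    # a few waveform samples

codes = quantize(samples, codebook)
approx = reconstruct(codes, codebook)
print(codes)    # → [2, 3, 4, 1, 0]  one small integer per sample
print(approx)   # a lossy reconstruction, close to the original
```

Generating in the compact code space and then decoding is what makes minutes-long audio tractable at all.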

Here’s “Rock, in the style of Elvis Presley.”

It’s weird, like a broken radio tuning into a performance by a rock and roll garage band in love with Elvis but they only heard his songs on their own broken radio. And the AI has learned that Elvis was frequently interrupted by crowd noises and cheering, so after a while it throws that in too.

The lyrics on this are confusing; OpenAI says “All the lyrics below have been co-written by a language model and OpenAI researchers.” But if you want crazy lyrics, someone found Jukebox’s continuations of Rick Astley’s legendary meme song. Jukebox mostly trundles along in that inimitable 80s Stock-Aitken-Waterman style, sometimes adding some novel production ideas or a keyboard solo, just like the original producers would mete out new ideas while sticking to the format. But its muffled lyrics include at 1:37 “you wouldn’t get this spaghetti on a guy… Stretch my 🍆😂.” Later Rickbot goes bleak: 1:53 “Kiss the boat Denny I’m Satan’s pirate arrr,” and 3:53 “You know the rules and so you have to die.”

To stress the same point as the text generator, the AI isn’t simply pasting in bits of music that it has stored matching the starting music. Instead it is calling on the… vague memories/regularities/something… that it has gleaned from ingesting “1.2 million songs (600,000 of which are in English), paired with the corresponding lyrics and metadata from LyricWiki” to produce something new yet familiar. Go browse, for example they asked it for Frank Sinatra and Ella Fitzgerald in front of a small orchestra.

… and why not images

I’ve tried to write this blog post a few times, only to have OpenAI apply transformer AI to a new area. Just today, OpenAI announced a new paper wherein it gets another transformer AI to complete an image. Same idea: give the AI millions of images, don’t tell it anything, then give it the top half of an image and it will produce one pixel value after another that continue the image. Look at it get the cat joke just from a sliver of paper visible in the input (the left column is its input, the right column is the original complete image, the middle four columns are its continuations).
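The mechanics of "one pixel value after another" can be sketched with a deliberately dumb predictor. A real model like the one in OpenAI's paper learns its predictor from millions of images; here the predictor is a hand-written rule that just continues each column's trend, so every value of the bottom half is generated from pixels produced before it:

```python
def complete(top_rows, total_rows):
    """Autoregressively extend an image: emit new rows one at a time,
    each pixel "predicted" from pixels already generated above it."""
    rows = [row[:] for row in top_rows]
    width = len(top_rows[0])
    while len(rows) < total_rows:
        prev, prev2 = rows[-1], rows[-2]
        # toy predictor: linearly extrapolate each column, clamped to 0..255
        new_row = [max(0, min(255, 2 * prev[x] - prev2[x])) for x in range(width)]
        rows.append(new_row)
    return rows

top = [[10, 10, 10, 10],
       [30, 30, 30, 30]]        # "top half": a brightening vertical gradient
print(complete(top, 4))
# → [[10, 10, 10, 10], [30, 30, 30, 30], [50, 50, 50, 50], [70, 70, 70, 70]]
```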

So what does it mean?

These things are crazily impressive. It is rank speciesism to say “That’s not intelligent! It’s just doing something it’s been programmed/taught/trained/fed so much data it recognizes what it should do,” especially when the format of its output is far beyond human capacity – you’ve been trained for years on Real Life but you can’t generate the sound of a band and Elvis Presley singing, or the pixels of a photorealistic image. These AIs are intelligent! And yet… they can’t maintain the plot or a musical idea over the entirety of a short story or a song. So what is it that we do when we create? Somehow we have an outline for the overall structure of the artwork, and we fill in along its lines. I’m no expert, but it seems that creativity may be easier to implement than a general intelligence which can deal in concepts and know what words mean. None of these AIs can talk about their work. We can’t ask “What do you find hard? What do you enjoy? What were you aiming for when you went off on that tangent?” They’re sui generis, but the closest analogy seems to be idiots savants.

Posted in art, music, software

house heating without natural gas

Some California cities are moving to ban natural gas connections in new construction! Burning gas creates CO2 that causes global warming, so just avoid burning fossil fuel by eliminating the gas connection. (Methane recovered from manure and garbage dumps could only provide at most 9% of California’s gas consumption.)

I’ve been near this cutting edge for over a decade. When we remodeled our house we avoided using natural gas to cheaply heat our house and domestic hot water, so we only use a small amount of natural gas for a cooktop and a clothes dryer.

The problem is most building contractors in California seem woefully unready for this legislation and trend. I’ve been on fancy modern house tours and even the super eco LEED-certified green mansions used natural gas for space heating!

Ductless mini-split heat pumps (heat condenser outside, thin liquid pipe to box on the wall that pours out hot or cold air) are getting more popular in California and you can buy cheap Chinese units at DIY stores. But heat pumps for radiant heating are still rare and unfamiliar. When we remodeled 13 years ago mini-splits weren’t commonly available, our architects preferred hydronic radiant floor heating (warm liquid flowing through tubes in the floor), and the wild and crazy heating subcontractor installed a 60,000 Btu/hr Unico Unichiller heat pump (and solar thermal tubes, and two heat exchangers, and more insanity I’ll cover in a separate post some day.)

CO2-free heating and cooling

This air-water heat pump was the only Unichiller model in northern California, according to the service person. It needed expensive repair every winter, and when the bills became excessive and we wanted to replace it, there weren’t any better options available for a whole house (two stories, fairly well insulated); the same small companies making air-water heat pumps in 2006 are much the same today (Aermec, Aqua Products, Chiltrix, SpacePak). And even though you can run a heat pump backwards to cool your house, few of the control systems for radiant heating know how to work in cooling mode, and no contractor wants to be liable for possible moisture and mold build-up in your floors.

Over the years a dozen heating and plumbing contractors have looked at our space and domestic hot water heating system and run away in terror. This last winter when we couldn’t face trying to heat a cold house with a bunch of electric space heaters any longer, I finally found a plumber who wasn’t intimidated by the complexity. He tore out most of the 14 (!) pumps and storage tanks and controllers and literally hundreds of feet of copper piping, to end up sending domestic hot water through a heat exchanger to provide heat for the radiant flooring. It works okay, but no heat pump hot water heater is rated to provide heat to an entire house as well (the Sanden CO2 hot water heater can do it for very low heating loads with a lot of provisos). So we’re heating our house with electrical resistance elements in a 4500 Watt 50 gallon domestic hot water tank with a Coefficient of Performance (how much heating you get from each unit of energy supplied) of… 1! This is inefficient and expensive, but at least I pay for “100% green” electricity beyond what my solar panels provide. What’s galling is the California energy guidelines for contractors promote a combined heat pump for domestic hot water and space heating, even though nothing much is actually available.
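To put a number on what COP 1 costs, here is some back-of-the-envelope Python – the heating load, electricity price, and heat pump COP are my assumptions for illustration, not measurements of our house:

```python
# Rough seasonal cost comparison: resistance heating (COP 1) vs. a heat pump.
heating_load_kwh = 5000       # assumed heat delivered over a winter, in kWh
price_per_kwh = 0.25          # assumed electricity price, $/kWh

def seasonal_cost(load_kwh, cop, price):
    """Electricity purchased = heat delivered / COP, times the price."""
    return load_kwh / cop * price

resistance = seasonal_cost(heating_load_kwh, 1.0, price_per_kwh)
heat_pump = seasonal_cost(heating_load_kwh, 3.0, price_per_kwh)  # typical air-source COP
print(f"resistance: ${resistance:.0f}, heat pump: ${heat_pump:.0f}")
# → resistance: $1250, heat pump: $417
```

Whatever the exact inputs, the ratio is the point: a COP 3 heat pump buys the same heat for a third of the electricity.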

TL;DR : unless you have a tiny or super-insulated passive house, use ductless mini-split heat pumps for space heating and cooling, and a separate heat pump hot water tank. Anything else is an experiment for DIYers.

Posted in eco, Uncategorized

social web: not so friendly

I’m incredulous that people have more than 100 “friends” on Facebook, let alone 1000+.

I used to regularly unfriend people, prompted by Jimmy Kimmel Live’s “National Unfriend Day”: simply people I didn’t know well, didn’t interact with any more, or whose posts I didn’t find interesting. I never shared my contact list with Facebook (or Instagram, or WhatsApp), nor did I bulk-friend schoolmates and workmates, so I was never above 70 “friends”! I care about words, and Facebook’s use of “Friend” is an appalling perversion. RIP Google+ and its better “Circles” semantics.

When I’m feeling unloved I view my 55 pending Facebook Friend Requests 😉. It’s nothing personal! Like Groucho Marx I’m dubious of anyone who would want to be a member of my unexclusive club. You can always add my “blog” to your iGoogle home page or other “RSS reader” to keep up with my ideas like it’s 1999 and we haven’t ceded control over the infogruel we thoughtlessly consume to awful corporations who have zero interest in our well-being.

Posted in web

web: book reviews again

I have a substantial pile of books I’ve read that will injure me in an earthquake. I ought to write perspicacious pithy reviews of them. I could write them on Amazon, but why should Amazon own and profit from my words? I could write them on some “free, open and not-for-profit platform for reviewing absolutely anything, in any language,” but that seems a bit moribund. Instead I have this web site! Putting book reviews here will ensure they live forever in complete obscurity.

Oh no, not the semantic web again!

A long time ago I simply wrote a definition list in HTML in Blogger with each book title followed by a paragraph underneath. Then the idea of a semantic web came along: the web page should unambiguously tell machines that a chunk of writing is a review of a particular book rather than me advertising some books for sale, or writing about the author. And it should tell the machines it’s a review by skierpage, of a book with a particular title and ISBN, who gives it a rating of 3 out of 5 stars, etc.

Why bother?

Disclaimer: all the semantic web work below is probably irrelevant. If your web page is important according to Google’s PageRank algorithm, then Google will devote AI to figuring out what it says, even if it has no, or incorrect, semantic markup. So most of those making the effort to do this semantic markup are shady SEO (search engine optimization) sites, trying to convince you that if you jump through all these hoops or pay them to do it, then your site on topic X will somehow rise in search results from utter obscurity on the 20th page of results to mostly ignored on the 4th page.

hReview microformat

Back in 2011 the leading implementation of this idea for plain web pages was microformats: you probably already have the relevant pieces of text in your human-readable book review, so you put additional markup (the ‘M’ in Hypertext Markup Language) around them identifying the bit that’s the rating, the summary, etc., using invisible HTML attributes like class="reviewer", class="rating", class="summary", and so on. So I wrote a few reviews using an online tool to generate the necessary HTML, which I pasted into WordPress.

So many schemas

The hReview microformat is still going and supposedly Google still parses it when it crawls web pages. Meanwhile some big guns of Web 2.0 (Google, Microsoft, Yahoo, and Yandex) came up with their own similar-but-different standard for structured data at the poorly named, “a collaborative, community activity with a mission to create, maintain, and promote schemas for structured data on the Internet, on web pages, in email messages, and beyond.” This got more detailed and complicated than microformats: there are separate related schemas for a review by the person skierpage about a book authored by another person. And there are three ways you can put the machine-readable information into your web pages (two too many!).

Google provides a structured data markup helper to guide me in creating this markup, and then its structured data testing tool to see if I got it right. (There was another schema generator, now defunct, and various other checkers.) If you choose to put invisible markup in the page surrounding the text of your review ( calls this “microdata,” different from “microformat”), the HTML looks something like:

<!-- Microdata markup added by Google Structured Data Markup Helper. -->
  <div itemscope itemtype="" id="hreview-Sprawling,-very-good!">
  <meta itemprop="isbn" content="03-5091234-034">
  <meta itemprop="genre" content="Science Fiction">
  <meta itemprop="datePublished" content="2017-06-04">
  <h3>Sprawling, very good!</h3>
    <img itemprop="image" class="photo" src="" width="167" height="250" alt="cover of 'River of Gods'" align="left" style="margin-right: 1em"/>

  <div class="item">
    <a title="paperback at Amazon" href="" class="fn url">
      <span itemprop="name">River of Gods</span>
    </a>
    <a href="">
      <span itemprop="author" itemscope itemtype="">
        <span itemprop="name">Ian McDonald</span>
      </span>
    </a>
  </div>
  <p itemprop="review" itemscope itemtype="" class="description">
    <abbr itemprop="reviewRating" itemscope itemtype="" class="rating" title="4">
      <span itemprop="ratingValue">4</span>
    </abbr>
    <span itemprop="reviewBody">This does a fantastic job of presenting the foreign culture of ... !</span>
    <meta itemprop="datePublished" content="2007-08-01">
    <span itemprop="author" itemscope itemtype="">
      <meta itemprop="name" content="skierpage">
      <meta itemprop="sameAs" content="">
    </span>
  </p>
  </div>

The problem is, if I copy and paste this complicated HTML into WordPress’s post editor, it throws away much of the markup – for example all the <meta> tags carrying information I don’t want to display, like <meta itemprop="datePublished" content="2007-08-01">. There are any number of dubious WordPress plug-ins that support parts of schemas and want money for a “professional” version, preying on desperate non-technical web site owners who see their traffic dropping and will clutch at straws hoping to appear higher in Google search results; I don’t understand what these plug-ins do or don’t do.

Another representation for this structured data is JSON-LD, a completely separate copy of the semantic information that you stick in your web page; the reader never sees it (see “A Guide to JSON-LD for Beginners”). So maybe just sticking in a block of JSON-LD will work better (a guide to supporting it in WordPress is in the section “Implementing Structured Data Using JSON-LD” in a schema article). Hmmm… instead of copying and pasting twice, can I put this inside WordPress myself? Maybe try the Markup (JSON-LD) structured in schema.org plug-in for WordPress? A wpengine article lists JSON-LD generators, but they’re not much good.

Tracking data

The problem with JSON-LD is that I have to put the same information into the web page twice: first as HTML to display to human readers, and then again in this invisible data format. Or maybe use Handlebars or something to spit out both the block of JSON and the HTML. A spreadsheet may be the best way to track most of this information. It sucks for entering formatted text, but it’s probably OK for a pithy two-sentence review.

Generated HTML

Each book review in the spreadsheet should generate both the JSON-LD that web crawlers should read, and a human-readable book review. In the latter, I want things to link to something useful.

The ISBN should probably link to Wikipedia’s Special:BookSources/{ISBN} page. Or I could accept that Jeff Bezos owns us and have it link to Amazon via the ASIN? Wikipedia’s Special:BookSources creates a query; note that the dashes must be removed from the ISBN, otherwise it doesn’t work. A spam-filled site says you can use a 10-digit ISBN in place of the ASIN, but again you have to remove the dashes.
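As a sketch of that linking rule (the function name is mine, not from the post), stripping the dashes before building the Special:BookSources URL:

```python
def booksources_url(isbn: str) -> str:
    """Link an ISBN to Wikipedia's Special:BookSources lookup page.

    Dashes and spaces must be stripped or the lookup fails.
    """
    bare = isbn.replace("-", "").replace(" ", "")
    return f"https://en.wikipedia.org/wiki/Special:BookSources/{bare}"

print(booksources_url("0-7432-5670-0"))
# https://en.wikipedia.org/wiki/Special:BookSources/0743256700
```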

For the cover, sometimes you can link to a cover image on English Wikipedia or Wikimedia Commons. You can mess around with an Amazon image URL, though for some reason those images can’t be accessed using https; Firefox complains about “SSL_ERROR_BAD_CERT_DOMAIN.” The Internet Archive runs (hosts?) the Open Library Covers Repository.
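If I go the Open Library route, its Covers API builds URLs from a dash-free ISBN and a size letter; a minimal sketch (the function name is my own):

```python
def openlibrary_cover_url(isbn: str, size: str = "M") -> str:
    """Cover image URL from the Open Library Covers API.

    size is S, M, or L; the ISBN must not contain dashes.
    """
    assert size in ("S", "M", "L")
    bare = isbn.replace("-", "")
    return f"https://covers.openlibrary.org/b/isbn/{bare}-{size}.jpg"

print(openlibrary_cover_url("0-7432-5670-0"))
```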

Other items in the review, like the author name and book title, should link to Wikipedia pages if available. There’s no easy way to derive the URL of Ian McDonald’s English Wikipedia page, so the spreadsheet needs columns for Author URL and Book URL. (The alternative would be to store the Wikidata ‘Q’ number for each of these and work backwards from the Wikidata info to the English Wikipedia pages, if any, for them.)
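The Wikidata alternative could look like this sketch: request the entity from the wbgetentities API with props=sitelinks/urls and pull out the enwiki sitelink. The Q number and the trimmed-down response below are illustrative, not real API output:

```python
import json
from typing import Optional

def enwiki_url_from_entity(entity_json: str, qid: str) -> Optional[str]:
    """Pull the English Wikipedia article URL out of a Wikidata
    wbgetentities response (requested with props=sitelinks/urls)."""
    entity = json.loads(entity_json)["entities"][qid]
    link = entity.get("sitelinks", {}).get("enwiki")
    return link["url"] if link else None

# The live query would be something like:
#   https://www.wikidata.org/w/api.php?action=wbgetentities
#     &ids=Q775816&props=sitelinks/urls&format=json
# Q775816 is a hypothetical Q number, and this is a hand-made sample
# of the response shape, not real API output:
sample = json.dumps({
    "entities": {
        "Q775816": {
            "sitelinks": {
                "enwiki": {
                    "site": "enwiki",
                    "title": "Ian McDonald (British author)",
                    "url": "https://en.wikipedia.org/wiki/Ian_McDonald_(British_author)",
                }
            }
        }
    }
})
print(enwiki_url_from_entity(sample, "Q775816"))
```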

Coding it

Uh, scripting… Python? I quickly found the pyexcel-ods library to read a spreadsheet, and everyone seems to use jinja2 for HTML templating in Python. Adding these libraries means dealing with all the ways to manage Python libraries in a project; I have used pip and virtualenv in the past, but now the hotness is pipenv, so install that and then add pyexcel-ods and jinja2. I’m rocking! In two hours I’ve read a line of my book reviews spreadsheet and generated some HTML.
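The pipeline so far can be sketched like this, assuming jinja2 is installed; the column names and the template are my guesses at the spreadsheet layout, and reading the real .ods file would use pyexcel-ods as noted in the comment:

```python
# Reading the real spreadsheet would use pyexcel-ods:
#   from pyexcel_ods import get_data
#   rows = get_data("book_reviews.ods")["Sheet1"]
# Here rows is hard-coded in the same header-row-then-data shape.
from jinja2 import Template

rows = [
    ["Name", "Author", "ISBN", "Rating", "Review"],
    ["River of Gods", "Ian McDonald", "0-7432-5670-0", 4,
     "This does a fantastic job of presenting a foreign culture."],
]

template = Template("""<div class="book-review">
  <h3>{{ name }}</h3>
  <p>by {{ author }}, rated {{ rating }}/5</p>
  <p>{{ review }}</p>
</div>""")

header, *data = rows
for row in data:
    book = dict(zip(header, row))  # map column names to cell values
    html = template.render(
        name=book["Name"], author=book["Author"],
        rating=book["Rating"], review=book["Review"],
    )
    print(html)
```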

Then I upgraded to Fedora 32, and nothing works because its Python is now at version 3.8, so I have to coerce pipenv to rebuild everything. Guessing at what to do, I run pipenv check and it tells me “In order to get an API Key you need a monthly subscription, starting at $14.99.” Guess I won’t run that command then.

HTML generation

For now my script plus template just generates a big HTML file of every book review in the spreadsheet. I’ll want to create blog posts about related books, such as “Interesting science”, which means selecting a few chunks from the generated HTML and pasting them into WordPress. WordPress accepts HTML but really wants you to use its Gutenberg WYSIWYG blog post editor. Fortunately, it seems I can choose Gutenberg’s “Custom HTML” block and paste in all my generated HTML, including <script> tags containing JSON-LD. Finally, something easy! Part of me wants to make the HTML resemble Gutenberg’s blocks for WYSIWYG editing, but in theory I should go back into the spreadsheet to fix any errors.

Designing the JSON-LD

JSON (JavaScript Object Notation) is a simple file and data format to represent data. JSON-LD takes this and makes it slightly more complicated in order to represent Linked Data: on this Web page, a person authored this review of a book, which has its own author, another person (or persons). The details quickly degenerate into semantic triples, contexts, and more three-letter acronyms like RDF. schema.org has fairly simple examples of JSON-LD for a review, but they leave it unclear whether just writing "author": "skierpage" is enough for computers to figure out that the person writing the review is the person who runs this web site, or whether I have to get highly complicated:

"author": [{
    "@type": "Person",
    "name": "skierpage",
    "sameAs": ""
}]

To have multiple book reviews on a web page, you can output a separate JSON-LD <script> block along with each review’s chunk of HTML. This results in a lot of duplication of the reviewer (me) in the page. There are much fancier ways to organize this: you can output a single JSON-LD block containing all the reviews by putting them into a top-level “@graph” object, which isn’t mentioned on schema.org but is part of JSON-LD (or maybe use schema.org’s ItemList; when you’re designing a set of linked objects there’s always more than one way to do it). What’s unclear is whether the JSON-LD should have a graph of books, each with a single review, or a graph of reviews, each of a single itemReviewed that’s a book:

{
	"@context": "https://schema.org",
	"@graph": [{
		"@type": "Review",
		"author": {
			"@type": "Person",
			"name": "skierpage",
			"sameAs": ""
		},
		"datePublished": "2011-04-01",
		"reviewBody": "The book has a nice cover.",
		"itemReviewed": {
			"@type": "Book",
			"name": "River of Gods",
			"isbn": "03-5091234-0344",
			"author": "Ian McDonald"
		},
		"reviewRating": {
			"@type": "Rating",
			"ratingValue": 4,
			"worstRating": 1,
			"bestRating": 5
		}
	},
	... another review
	]
}

Google’s Rich Results Test doesn’t like the above; it complains that the review is missing a description, publisher, and url. Isn’t this all obvious from the web page?
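Presumably those complaints go away if the Book object carries the missing properties explicitly. This fragment is a guess with placeholder values; the description text, publisher name, and url here are invented, not from my spreadsheet:

```json
"itemReviewed": {
	"@type": "Book",
	"name": "River of Gods",
	"isbn": "03-5091234-0344",
	"author": "Ian McDonald",
	"description": "A one-sentence description of the book.",
	"publisher": { "@type": "Organization", "name": "Placeholder Press" },
	"url": "https://example.com/river-of-gods"
}
```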

Maybe I don’t need author; schema.org’s documentation says “Please note that author is special in that HTML 5 provides a special mechanism for indicating authorship via the rel tag. That is equivalent to this and may be used interchangeably.” However, WordPress doesn’t add rel="author" to its Posted by skierpage link.

Actually writing out the JSON-LD

There is a fancy pyld Python module that outputs JSON-LD, but I’m not clear what it offers over simply printing json.dumps(reviewJSON). So I just build up reviewJSON as a Python dictionary object:

    reviewJSON = {
      "@context": "https://schema.org",
      "@type": "Book",
      "author": bookDict["Author"],
      "isbn": bookDict["ISBN"],
      "name": bookDict["Name"],
      "review": {
        "@type": "Review",
        "author": "skierpage", ## TODO: can this be derived/inferred from the page?
        "datePublished": TODAY, ## TODO: can this be derived/inferred from the page?
      },
    }

Google’s Rich Results Test lets you test the generated markup.
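A sketch of the single-@graph approach discussed earlier, using plain json.dumps; the helper function name and the <script> wrapping are my own:

```python
import json

# Hypothetical helper: wrap every review's dictionary in one top-level
# "@graph" and emit the <script> block to paste into the page.
def jsonld_script(reviews):
    doc = {"@context": "https://schema.org", "@graph": reviews}
    return ('<script type="application/ld+json">\n'
            + json.dumps(doc, indent=2)
            + "\n</script>")

reviews = [{
    "@type": "Review",
    "author": {"@type": "Person", "name": "skierpage"},
    "reviewBody": "The book has a nice cover.",
    "itemReviewed": {"@type": "Book", "name": "River of Gods"},
}]
print(jsonld_script(reviews))
```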

Summary: I’ve got this pretty much working!

Posted in books, semantic web, software | Leave a comment

Nikola Motors and its hydrogen truck story

In the current pandemic crisis this is like kicking a man when he’s down, but I still read uncritical stories about Nikola Motor Corporation, a Tesla wannabe from back when Tesla was still called “Tesla Motors.”

Nikola Motor has the attention span of a headless chicken and has been in endless hype mode for years. First it was going to use a gas turbine generator to power a big Class-8 semi truck. Then it switched to a breathtakingly grandiose scheme: zero-emissions hydrogen fuel cell trucks refueled at a network of 700 truck stops, all making hydrogen on-site with renewable energy. That $3+ billion story excited many suppliers of hydrogen electrolysis, storage, pumping, and fuel cells, who have struggled with anemic demand from the stalled and tiny market for hydrogen fuel cell passenger vehicles. So Nikola got investments from them, from truck part supplier Bosch, and from truck body makers Fitzgerald and CNH/Iveco, all on the chance that the big idea might succeed and they’ll rake in the big bucks in orders. Of course, when you’re both a supplier and an investor, you’ll probably be robbing yourself to make Nikola’s costs work for years…

It’s not an insane strategy, just unlikely and high risk.

But since 2016 Nikola has unveiled a slew of pointless garbage concepts. The NZT offroad vehicle. The Reckless military vehicle. The WAV personal watercraft. The Badger pickup truck. Two more truck models. And it can’t even stick to the hydrogen story! The Nikola Two and Tre commercial trucks will also come in a battery-only version without a hydrogen fuel cell. Not one of these vehicles has reached production, let alone general sales. Then late in 2019 Nikola made a pure B.S. announcement that it acquired battery tech from an unnamed university that will double the energy density, reduce weight by 40%, and halve the cost of lithium-ion batteries. If that’s really true then it can scrap the inefficient hydrogen detour, in fact scrap truck manufacturing and just make billions selling its breakthrough battery.

While Nikola farts around, battery electric trucks are available, though not yet in the biggest semi size. Just as with hydrogen fuel cell cars, battery vehicles see 20x the investment, announcements, and actual sales of HFCVs. You can buy battery electric buses and trucks right now, while hydrogen fuel cells are stuck in tiny demonstrations and pilot programs.

Nikola’s pitch for its hydrogen truck is to lease or sell the truck, maintenance, and fuel all-in for about $900,000 for a million-mile package, which is cheaper than diesel. But if the hydrogen doesn’t get really cheap, then that package will not be profitable even when (if!) Nikola reaches scale on all the other parts of its scheme. Alas, “green” hydrogen from electrolysis remains much more expensive than making it from fossil fuel (primarily natural gas outside China). Bloomberg New Energy Finance thinks that by 2030 green hydrogen will still require carbon taxes to be cost-competitive. Sure, sometimes renewable energy is cheap, but if you only run the expensive electrolyzers when the sun is shining, that dramatically raises your capital costs. If Nikola caves and gets its hydrogen from fossil fuel (where 95+% of all the hydrogen currently used comes from), that will annoy its hydrogen production and electrolysis investor/suppliers, and the optics of huge diesel trucks delivering dirty hydrogen to the truck stops will deservedly trash much of the green cred that Nikola has.

Finally, CEO Trevor Milton has no engineering skills. “Big trucks avoiding the weight and recharge times of batteries by running on hydrogen that is produced at dedicated truck stops on routes.” Cool idea, bro, but ideas are cheap. What intellectual property, process innovation, or engineering breakthroughs does Nikola Motor Corporation have to realize the idea? Nothing.

Posted in eco | Leave a comment

skiing: technical wear as fashion

Keegan Brady wrote an article in GQ about the rise of “technical outerwear” in fashion. I wear and love this stuff while skiing, but once I’m off the mountain it goes in a storage tub.

He mentions the rise of The North Face jacket in the 1990s, but could have gone further, e.g. the Eddie Bauer/plaid flannel/Timberland boots rugged outdoor look from the late 1980s that accompanied the initial rise of the SUV. For centuries people have worn clothes to look as if they’re from somewhere exotic or doing something interesting, from sportswear to resort wear to surf clothes to today’s “I just descended the Matterhorn!” look.

Ever since Bogner in the 1970s went from ski racing apparel to one-piece après-ski outfits for tanned Eurotrash, many, many technical ski and mountaineering clothing brands have suffered a loss of credibility as they expanded to sell clothing to casual skiers and hikers, while any innovation they come up with is copied by the rest of the sub-industry. As Descente (zip-off racing shells), Spyder (advanced fabrics), The North Face (integrated hoods), etc. lost their cachet, new boutique high-end brands like Phenix (multi-layer shells), Killy, Kjus (integrated stretchy wrist gaiters with thumb holes), and Arc’teryx (boxy articulated knees and elbows, complex cuts, waterproof zippers) showed up as the new hot high-end technical wear. Arc’teryx has managed to expand into streetwear while remaining very expensive and fairly cutting edge, so it still has some credibility on the mountain. (Though you need reinforced Kevlar or Cordura shoulders for carrying gear!!)

It’s silly to wear this clothing on the streets of a city – “technical” gear for what activities, exactly? – but fashion has always been about delirious dreams and dressing up. Nothing wrong with that, but if you’re just wandering around the city, why not wear clothes that are beautiful to look at, by Jhane Barnes?

As worn by my heroes…

I’m intrigued by clothing lines like Veilance by Arc’teryx and Errolson Hugh’s intense Acronym, which divorce themselves from any sport and aim only to be meaninglessly extreme technical streetwear for its own sake. William Gibson loves this stuff (and thinks eloquently about clothing):

Errolson Hugh with William Gibson wearing Acronym gear in 2017
Sorry Mr. Gibson, you’re still not a tactical urban ninja
(“Uncle Bill” Instagram post by Errolson Hugh on the left @erlsn.acr February 24, 2017)

and so does John Mayer. Maybe I could join them… but I’m not inspired to open my wallet to $700+ clothing items without trying them on, and since Jhane Barnes exited menswear 😢 I never go to fancy clothing stores.

Posted in design, skiing | Leave a comment