music: wondering at Stevie Wonder

Play Stevie Wonder’s immortal run of albums – Music of My Mind, Talking Book, Innervisions, Fulfillingness’ First Finale, Songs in the Key of Life – and you will be repeatedly floored by his artistry and talent. What brings tears of joy are the elements I’d forgotten amongst the greats: the perfect rainy-day funk of “Tuesday Heartbreak”, the burning vocals in “It Ain’t No Use,” the zOMG what did he just do chord changes in the B melody of “Please Don’t Go,” the Nokia ringtone teleported into “All Day Sucker,” …

This Slate article is emphatic: “arguably the greatest sustained run of creativity in the history of popular music.” Is it “greater” than Joni Mitchell’s run, or Elvis Costello’s first five albums, or the Beatles’ lighting the rocket engines around the release of Rubber Soul? The obvious answer is they’re incomparable in both senses of the word.

But I’ll give it a go. Stevie Wonder’s lyrics can’t compete with Joni’s or Elvis’s; at best they’re direct expressions of emotion, and often they’re convoluted without strong wordplay. On the production side, co-producers Robert Margouleff and Malcolm Cecil on the first four albums are deservedly famous for advancing synthesizers with their T.O.N.T.O. system and for using synthesizers for bass, strings, harmonies – everything but drums.

Rhythm, not drums

Stevie Wonder is obviously outrageously talented on keyboards, harmonica, and vocals, so it’s easy to overlook his drumming; he’s not deeply in the pocket, or super-heavy, or flashy. He can ride the hi-hat like a disco drummer, but his drumming doesn’t propel the song; it’s another rhythmic element subservient to musical ideas. Stevie gets to play drums and Moog bass and percussive keyboards, so no one instrument has to drive.

Songs in the Key of money

I bought Innervisions and Fulfillingness’ First Finale when they came out. Re-listening, I forgot how bleak Innervisions is; Stevie Wonder moved away from love songs and heartache songs to look around, and he was distressed by what he saw under the presidency of Richard Nixon.

When Songs in the Key of Life came out as a double album with, at first, an additional bonus 7-inch EP, I balked. $13.98 was a lot of money! Also some low-talent British singer re-made “Isn’t She Lovely” as his own mawkish single when Stevie Wonder was unwilling to shorten the song, and BBC Radio 1 stupidly played this over and over instead of the far superior original album track. Over time I grew familiar with the towering songs, including “As” and “If It’s Magic,” because friends had the double album. Listening to it on a streaming service, the additional tracks from the bonus single are a revelation. “All Day Sucker” is unlike anything Stevie Wonder did, and “Saturn” is trippy. And the amount of time and care lavished on the record is incredible:

Nonstop sessions stretched across two-and-a-half years, two coasts, and four studios: Crystal Sound in Hollywood, New York City’s Hit Factory, and the Record Plant outposts in Los Angeles and Sausalito. More often than not, he could be found in one of those spaces, sometimes for 48 hours at a time, chasing his muse with a rotating crew of engineers and support musicians. Over 130 people were involved in the recording, including Herbie Hancock, George Benson, “Sneaky Pete” Kleinow and Minnie Riperton. “If my flow is goin’, I keep on until I peak” became Wonder’s mantra.

Inside Stevie Wonder’s Epic ‘Songs in the Key of Life’

We shall never see its like again.

Posted in music | Leave a comment

music: Trevor Horn and the Buggles in 1979

I revere producers as much as musicians and songwriters. I was dimly aware of producers, starting with the mysterious “produced by Bones Howe” in big letters on the back of some record… I thought it was the Carpenters but now I can’t find it. What really piqued my interest was Chic’s in-your-face credit on most of their early albums:

Composed, produced, arranged, conducted, and performed by Nile Rodgers and Bernard Edwards for the CHIC Organization, Ltd.

and then following all the records Chic and Nile and ‘Nard produced. It’s a joy to revisit a classic song, check the credits, and realize “Wait, that’s yet another great song produced by…” such as unheralded Alan Tarney (who liked his own song “Once in a While” so much he produced it on two different records) or the almighty Arif Mardin. Then you can lose yourself in Wikipedia and Discogs finding all their production credits.

And so to Trevor Horn, the bass player, singer, video (killed the radio) star, and maximalist producer. As a producer he’s probably most famous for his work with Frankie Goes to Hollywood (1984 strikes again!), and, my favorite, his spectacular production of ABC’s The Lexicon of Love (when a producer full of ideas meets a hungry band really going for it, magic happens). The guitars on the latter sound 10 feet tall on a great stereo.

But there’s a lot of prehistory to Trevor Horn. Listening to the deluxe reissue of the legendary Dusty in Memphis by the great British pop-soul singer Dusty Springfield reminded me of her unexpectedly solid dance single that I bought on vinyl years ago… turns out it was written by Trevor Horn, his Buggles partner Geoff Downes, and Bruce Woolley.

1979 was certainly Buggles’ annus mirabilis. They also made the “Caribbean Air Control” single as Big A and the impressively weird space disco-pop “Star to Star” album as Chrome, then released “Video Killed the Radio Star,” first by Bruce Woolley & the Camera Club, before it broke big as Buggles. Then, only 10 days into 1980, Buggles released “The Age of Plastic” album. That’s quite something; it’s about as productive as Nile Rodgers & Bernard Edwards of Chic were at the same time.

Posted in music | Leave a comment

computers generating art

First came text

GPT-2 and other AI Transformer models are impressive. After you hand the AI an enormous corpus of text, you give it some text and it continues, generating character after character of recognizable, even creative, language that maintains the style. Here’s how https://talktotransformer.com/, an online GPT-2 system, continued when I fed it the words in bold

“The AI ruined my life” muttered Denise.

“You are the target of someone else’s revenge” replied Cale.

“And you did nothing to prevent it”

“Your words…”

“Yes, that is what I say” smiled Cale.

“So the boys are more than just a valuable asset”

“Yeah”

“But I have things to tell you guys now”

“Huh?”

Cale looked at it. “If we were to adopt our normal lives, we wouldn’t be able to afford everything we want in this world. So for us, we must keep working. It’s this effect of working hard that we were able to obtain everything, even the rare

It’s not simply regurgitating matching phrases that it has ingested. In compressing the 40 gigabytes of text it was fed down into some kind of internal representation (a neural network of only 1.5 billion parameters), it has noticed certain features of what you give it, and they trigger… something… in its neural network that riffs on what you provided. Give it the same starting text again and it goes off in a different direction.


“The AI ruined my life” muttered Denise.

“Ha! You’re quite an interesting girl. Like a puppet.”

A sorcerer suddenly appeared next to the table and pointed his wand at Denise, saying “Use your magic!”

A short spell field with dazzling golden lights appeared above Denise’s head and soon a single tablet appeared on the table and began to write. Then the tablet turned around and charged towards the sky with a radiant light.

The tablet charged forwards quickly!

The intensity of the light was frightening. Once it passed the boundary of the spell field, it became all-encompassing. The tablet was so bright that the color seemed to change from a golden to a blue, and not only did it look beautiful,

And again:

“The AI ruined my life” muttered Denise. “I looked over at Bobby, who was sitting with his back against the wall and moving his head side to side. He said nothing. His mouth was hanging open. I then turned back to the door and said, “What the hell is going on?” The door opened and I was met by my father, who had been in hiding for about three weeks and was waiting for me at the door. Bobby was still sitting there, looking over at his father. But in this instance the looks were no longer new. Instead, they had become familiar, like a sick man watching a recently emerged corpse of his loved one. His face turned beet red and he said, “You didn’t make my greatest

Transformer models don’t just continue writing in almost any style. With additional fine-tuning they can recognize a question and answer format, or a simple math problem, or a multiple choice format, or a request to summarize, … and continue with the answer to the problem better than most humans. And the newest GPT-3 (eleventy billion parameters in the model! fed a trillion words! gargantuan PDF paper!) can do all these without any fine-tuning! It’s ingested so much text that if you give it one or a few examples of what you want it will figure out what you’re asking for, just as a kid can participate in a made-up game without having to go to classes in that game. The following interaction, getting it to use a made-up word, is amazing to me:

To do a “farduddle” means to jump up and down really fast. An example of a sentence that uses the word farduddle is:
One day when I was playing tag with my little sister, she got really excited and she started doing these crazy farduddles.

It’s “merely” responding to input, but be honest, that’s all you’re doing when someone asks “How are you?” or “What day is it?”

It’s been my hope for decades (my thoughts in 2006, 2010) that some AI would gain enough smarts to understand language, then overnight it would ingest every document on the Internet and be the smartest thing in the world. Instead, AI researchers force-feed a huge subset of the Internet into a language model and it “does language” without understanding what it’s doing or what it all means.

OK so music…

You can apply a similar approach to music. Train a transformer on the musical note instructions in MIDI files, and then give it some starting parameters, and it can generate further musical instructions. OpenAI built such a system, called MuseNet. Here is what transpired when world-unfamous producer skierpage told MuseNet to improvise in the style of Disney starting from Beethoven’s Für Elise. The piano continues well enough, but then the meth-addled drummer comes in from another planet and goes nuts, and then it ends with a piano flourish. I can’t imagine a human being coming up with this.


OpenAI has now moved on to generating actual waveforms of music with its new system, called Jukebox. I think the main motivation is that it can generate someone singing lyrics as well as instrumental performances. This is crazy. It learns to compress digital music files at 44,100 samples a second down to a much smaller compressed representation that only it understands, and then if you ask for music in some style it will create music in that compressed representation and “blow it back up” into 10,000,000 samples making up a musical waveform.

Here’s “Rock, in the style of Elvis Presley.”

It’s weird, like a broken radio tuning into a performance by a rock and roll garage band in love with Elvis that has only ever heard his songs on its own broken radio. And the AI has learned that Elvis was frequently interrupted by crowd noises and cheering, so after a while it throws that in too.

The lyrics on this are confusing; OpenAI says “All the lyrics below have been co-written by a language model and OpenAI researchers.” But if you want crazy lyrics, someone found Jukebox’s continuations of Rick Astley. Jukebox mostly trundles along in that inimitable 80s Stock-Aitken-Waterman style, sometimes adding some novel production ideas or a keyboard solo, just as the original producers would mete out new ideas while sticking to the format. But its muffled lyrics include at 1:37 “you wouldn’t get this spaghetti on a guy… Stretch my 🍆😂.” Later Rickbot goes bleak: at 1:53 “Kiss the boat Denny I’m Satan’s pirate arrr,” and at 3:53 “You know the rules and so you have to die.”

To stress the same point as the text generator, the AI isn’t simply pasting in bits of music that it has stored matching the starting music. But it is calling on the… something… that it has gleaned from ingesting “1.2 million songs (600,000 of which are in English), paired with the corresponding lyrics and metadata from LyricWiki.” Go browse, it’s got the idea of Frank Sinatra and Ella Fitzgerald in front of a small orchestra.

… and why not images

I’ve tried to write this blog post a few times, only to have OpenAI apply transformer AI to a new area. Just today, OpenAI announced a new paper wherein it gets another transformer AI to complete an image. Same idea: give the AI millions of images, don’t tell it anything, then give it the top half of an image and it will produce one pixel value after another that continue. Watch it get the cat joke just from a sliver of paper visible in the input (the left column is its input, the right column is the original complete image, the middle four columns are its continuations).

So what does it mean?

These things are crazily impressive. It is rank speciesism to say “That’s not intelligent! It’s just doing something it’s been programmed / taught / trained / fed so much data it recognizes what it should do,” especially when the format of its output is far beyond human capacity – you’ve been trained for years on Real Life but aren’t able to generate the sound of a band and Elvis Presley, or the pixels of a photorealistic image. These AIs are intelligent! And yet… they can’t maintain the plot or a musical idea over the entirety of a short story or a song. So what is it that we do when we create? Somehow we have an outline for the overall structure of the artwork, and fill in along its lines. I’m no expert, but it seems that creativity may be easier to implement than a general intelligence which can deal in concepts and know what words mean. None of these AIs can talk about their work. We can’t ask “What do you find hard? What do you enjoy? What were you aiming for when you went off on that tangent?” They’re sui generis, but the closest analogy seems to be idiots savants.

Posted in art, music, software | Leave a comment

house heating without natural gas

Some California cities are moving to ban natural gas connections in new construction! Burning gas creates CO2 that causes global warming, so just avoid burning fossil fuel by eliminating the gas connection. (Methane recovered from manure and garbage dumps could only provide at most 9% of California’s gas consumption.)

I’ve been near this cutting edge for over a decade. When we remodeled our house we avoided using natural gas to cheaply heat our house and domestic hot water, so we only use a small amount of natural gas for a cooktop and a clothes dryer.

The problem is most building contractors in California seem woefully unready for this legislation and trend. I’ve been on fancy modern house tours and even the super eco LEED-certified green mansions used natural gas for space heating!

Ductless mini-split heat pumps (compressor unit outside, thin refrigerant pipes to a box on the wall that pours out hot or cold air) are getting more popular in California, and you can buy cheap Chinese units at DIY stores. But heat pumps for radiant heating are still rare and unfamiliar. When we remodeled 13 years ago mini-splits weren’t commonly available, our architects preferred hydronic radiant floor heating (warm liquid flowing through tubes in the floor), and the wild and crazy heating subcontractor installed a 60,000 Btu/hr Unico Unichiller heat pump (and solar thermal tubes, and two heat exchangers, and more insanity I’ll cover in a separate post some day).

CO2-free heating and cooling

This air-water heat pump was the only Unichiller model in northern California according to the service person. It needed expensive repair every winter, and when the bills became excessive and we wanted to replace it there weren’t any better options available for whole house (two story, fairly well insulated); the same small companies making air-water heat pumps in 2006 are much the same today (Aermec, Aqua Products, Chiltrix, SpacePak). And even though you can run a heat pump backwards to cool your house, few of the control systems for radiant heating know how to work in cooling mode, and no contractor wants to be liable for possible moisture and mold build-up in your floors.

Over the years a dozen heating and plumbing contractors have looked at our space-heating and domestic hot water system and run away in terror. This last winter, when we couldn’t face trying to heat a cold house with a bunch of electric space heaters any longer, I finally found a plumber who wasn’t intimidated by the complexity. He tore out most of the 14 (!) pumps and storage tanks and controllers and literally hundreds of feet of copper piping, ending up sending domestic hot water through a heat exchanger to provide heat for the radiant flooring. It works okay, but no heat pump hot water heater is rated to heat an entire house in addition to domestic hot water (the Sanden CO2 hot water heater can do it for very low heating loads, with a lot of provisos). So we’re heating our house with the electrical resistance elements in a 4500-watt 50-gallon domestic hot water tank, with a Coefficient of Performance (how much heating you get from each unit of energy supplied) of… 1! This is inefficient and expensive, but at least I pay for “100% green” electricity beyond what my solar panels provide. What’s galling is that the California energy guidelines for contractors promote a combined heat pump for domestic hot water and space heating, even though nothing much is actually available.
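
To put the COP difference in numbers, here’s a back-of-the-envelope sketch. The heating load and electricity price are illustrative assumptions, not figures from our house:

```python
def heating_cost(load_kwh, cop, price_per_kwh):
    """Electricity cost to deliver load_kwh of heat at a given COP.

    COP 1 is resistance heating: every kWh of heat costs a full kWh
    of electricity. A heat pump with COP 3 delivers the same heat
    for a third of the electricity.
    """
    return load_kwh / cop * price_per_kwh

# Assumed: 6,000 kWh of heat per season, $0.25/kWh electricity.
resistance = heating_cost(6000, 1.0, 0.25)  # $1,500
heat_pump = heating_cost(6000, 3.0, 0.25)   # $500
```

Same warmth, a third of the bill, which is why settling for resistance elements stings.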

TL;DR : unless you have a tiny or super-insulated passive house, use ductless mini-split heat pumps for space heating and cooling, and a separate heat pump hot water tank. Anything else is an experiment for DIYers.

Posted in eco, Uncategorized | Leave a comment

social web: not so friendly

I’m incredulous that people have more than 100 “friends” on Facebook, let alone 1000+.

I used to regularly unfriend people, prompted by Jimmy Kimmel Live’s “National Unfriend Day”: simply people I didn’t know well, didn’t interact with any more, or whose posts I didn’t find interesting. I never shared my contact list with Facebook (or Instagram, or WhatsApp), nor did I bulk-friend schoolmates and workmates, so I was never above 70 “friends”! I care about words, and Facebook’s use of “Friend” is an appalling perversion. RIP Google+ and its better “Circles” semantics.

When I’m feeling unloved I view my 55 pending Facebook Friend Requests 😉. It’s nothing personal! Like Groucho Marx I’m dubious of anyone who would want to be a member of my unexclusive club. You can always add my “blog” to your iGoogle home page or other “RSS reader” to keep up with my ideas like it’s 1999 and we haven’t ceded control over the infogruel we thoughtlessly consume to awful corporations who have zero interest in our well-being.

Posted in web | Leave a comment

web: book reviews again

I have a substantial pile of books I’ve read that will injure me in an earthquake. I ought to write perspicacious pithy reviews of them. I could write them on Amazon, but why should Amazon own and profit from my words? I could write them on https://lib.reviews/ “a free, open and not-for-profit platform for reviewing absolutely anything, in any language,” but it seems a bit moribund. Instead I have this web site! Putting book reviews here will ensure they live forever in complete obscurity.

Oh no, not the semantic web again!

A long time ago I just wrote a definition list in HTML in Blogger with each book title followed by a paragraph underneath. Then the idea of a semantic web came along: the web page should unambiguously tell machines that a chunk of writing is a review of a particular book rather than me advertising some books for sale, or writing about the author. And it should tell the machines it’s a review by skierpage, of a book with a particular title and ISBN, who gives it a rating of 3 out of 5 stars, etc.

Why bother?

Disclaimer: all the semantic web work below is probably irrelevant. If your web page is important according to Google’s PageRank algorithm, then Google will devote AI to figuring out what it says, even if it has no, or incorrect, semantic markup. So most of those making the effort to do this semantic markup are shady SEO (search engine optimization) sites, trying to convince you that if you jump through all these hoops or pay them to do it, then your site on topic X will somehow rise in search results from utter obscurity on the 20th page of results to mostly ignored on the 4th page.

hReview microformat

Back in 2011 the leading implementation of this idea for plain web pages was microformats: you probably already have the bits of text in your human-readable book review, so put additional markup (the ‘M’ in Hypertext Markup Language) around them identifying the bit that’s the rating, the summary, etc. using invisible HTML attributes like class=reviewer, class=rating, class=summary, etc. So I wrote a few reviews using an online tool to generate the necessary HTML, which I pasted into WordPress.

So many schemas

The hReview microformat is still going and supposedly Google still parses it when it crawls web pages. Some big guns of Web 2.0 (Google, Microsoft, Yahoo, and Yandex) came up with their own standard for structured data, similar but different, at the poorly named schema.org: “a collaborative, community activity with a mission to create, maintain, and promote schemas for structured data on the Internet, on web pages, in email messages, and beyond.” This got more detailed and complicated than microformats: there are separate related schemas for a review by the person skierpage about a book authored by another person. And there are three ways you can put the machine-readable information into your web pages (two too many!).

Google provides a structured data markup helper to guide me in creating this markup, and then its structured data testing tool to see if I got it right. (There was another schema generator at tools.seochat.com, now defunct, and another checker at linter.structured-data.org/.) If you choose to put invisible markup in the page surrounding the text of your review (schema.org calls this “microdata,” different from “microformat”), the HTML looks something like:

<!-- Microdata markup added by Google Structured Data Markup Helper. -->
  <div itemscope itemtype="http://schema.org/Book" id="hreview-Sprawling,-very-good!">
  <meta itemprop="isbn" content="03-5091234-034">
  <meta itemprop="genre" content="Science Fiction">
  <meta itemprop="datePublished" content="2017-06-04">
  <h3>Sprawling, very good!</h3>
  <p>
    <img itemprop="image" class="photo" src="http://ecx.images-amazon.com/images/I/51Gvu3UlqGL.jpg" width="167" height="250" alt="cover of 'River of Gods'" align="left" style="margin-right: 1em"/>
  </p>

  <div class="item">
    <a title="paperback at Amazon" href="http://www.amazon.com/River-Gods-Ian-McDonald/dp/1591025958" class="fn url">
      <span itemprop="name">River of Gods</span>
    </a>
    by
    <a href="http://en.wikipedia.org/wiki/Ian_McDonald_%28British_author%29">
      <span itemprop="author" itemscope itemtype="http://schema.org/Person">
        <span itemprop="name">Ian McDonald</span>
      </span>
    </a>
  </div>
  <p itemprop="review" itemscope itemtype="http://schema.org/Review" class="description">
    <abbr itemprop="reviewRating" itemscope itemtype="http://schema.org/Rating" class="rating" title="4 out of 5">
      <span itemprop="ratingValue">4</span>
      out of <span itemprop="bestRating">5</span>
    </abbr>
    <span itemprop="reviewBody">This does a fantastic job of presenting the foreign culture of ... !</span>
    <meta itemprop="datePublished" content="2007-08-01">
    <span itemprop="author" itemscope itemtype="http://schema.org/Person">
      <meta itemprop="name" content="skierpage">
      <meta itemprop="sameAs" content="http://www.skierpage.com/about/">
    </span>
  </p>
</div>

The problem is that if I copy and paste this complicated HTML into WordPress’s post editor, it throws away much of the HTML markup, for example all the <meta> tags for information I don’t want to display, like <meta itemprop="datePublished" content="2007-08-01">. There are any number of dubious WordPress plug-ins that support parts of schema.org’s schemas and want money for a professional version from desperate non-technical web site owners who see their traffic dropping and will clutch at straws hoping to appear higher in Google search results; but I don’t understand what these plug-ins do or don’t do.

Another representation for this structured data is JSON-LD, a completely separate encoding of the semantic information that you stick in your web page; the reader never sees it. So maybe just sticking in a block of JSON-LD will work better (a guide to supporting it in WordPress is in the section “Implementing Structured Data Using JSON-LD” in the schema article at torquemag.io). Hmmm… instead of copying and pasting twice, can I generate this inside WordPress myself? Maybe try the “Markup (JSON-LD) structured in schema.org” plug-in for WordPress? A wpengine article lists JSON-LD generators, but they’re not much good.

Tracking data

The problem with JSON-LD is I have to put the same information into the web page twice, first as HTML to display to human readers, and then again in this invisible data format. Or maybe use Handlebars or something to spit out both the block of JSON and the HTML. A spreadsheet may be best to track most of this information. It sucks for entering formatted text, but probably OK just for a pithy two-sentence review.

Generated HTML

Each book review in the spreadsheet should generate both the JSON-LD that web crawlers should read, and a human-readable book review. In the latter, I want things to link to something useful.
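
To make the “generate both from one source” idea concrete, here’s a minimal sketch using only the standard library (the dict keys and function names are mine, not a finished schema):

```python
import json

review = {  # one spreadsheet row
    "title": "River of Gods",
    "book_author": "Ian McDonald",
    "rating": 4,
    "body": "This does a fantastic job of presenting the foreign culture of ...",
}

def to_jsonld(r):
    """The invisible block for web crawlers."""
    return json.dumps({
        "@context": "http://schema.org/",
        "@type": "Review",
        "itemReviewed": {"@type": "Book", "name": r["title"],
                         "author": r["book_author"]},
        "reviewRating": {"@type": "Rating", "ratingValue": r["rating"]},
        "reviewBody": r["body"],
    }, indent=2)

def to_html(r):
    """The human-readable review, generated from the same data."""
    return (f'<h3>{r["title"]}</h3>\n'
            f'<p>by {r["book_author"]}, {r["rating"]}/5</p>\n'
            f'<p>{r["body"]}</p>')
```

Because both outputs come from the same dict, the visible review and the machine-readable block can never drift apart.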

The ISBN should probably link to https://en.wikipedia.org/wiki/Special:BookSources/{ISBN}, e.g. https://en.wikipedia.org/wiki/Special:BookSources/0060932902. Or I could accept that Jeff Bezos owns us and have it link to Amazon’s ASIN? Wikipedia’s Special:BookSources above creates a query like https://www.amazon.com/s?k=0060932902; note how the dashes are removed in the query, otherwise it doesn’t work. Spam-filled https://kindlepreneur.com/amazon-search-url-isbn-ref/ says you can use a 10-digit ISBN in place of the ASIN, e.g. https://www.amazon.com/dp/0060932902, but you still have to remove the dashes.
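
The dash stripping is trivial to do in the generator; a sketch (isbn_links is my name for it, not from any library):

```python
def isbn_links(isbn):
    """Lookup URLs for a (possibly hyphenated) 10-digit ISBN.

    Wikipedia's Special:BookSources copes with hyphens, but
    Amazon's /dp/ path needs the bare digits, so strip dashes.
    """
    bare = isbn.replace("-", "")
    return {
        "wikipedia": f"https://en.wikipedia.org/wiki/Special:BookSources/{isbn}",
        "amazon": f"https://www.amazon.com/dp/{bare}",
    }
```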

Other items in the review, like the author name and book title, should link to Wikipedia pages if available. There’s no easy way to know that Ian McDonald’s English Wikipedia page is at https://en.wikipedia.org/wiki/Ian_McDonald_(British_author), so the spreadsheet needs to have columns for Author URL and Book URL. (The alternative would be to store the Wikidata ‘Q’ numbers for each of these and work backwards from the Wikidata info to the English Wikipedia pages, if any, for them.)

Coding it

Uh, scripting… Python? I quickly found a library to read a spreadsheet, and everyone seems to use jinja2 for HTML templating in Python. Adding these libraries means dealing with all the ways to manage Python libraries in a project; I have used pip and virtualenv in the past, but now teh hotness is pipenv, so install that and then add pyexcel-ods and jinja2. I’m rocking! In two hours I’ve read a line of my book reviews spreadsheet and generated some HTML.

Then I upgraded to Fedora 32, and nothing works because its Python is now python3.8, so I have to coerce pipenv to rebuild everything. Guessing what to do, I run pipenv check, and it tells me “In order to get an API Key you need a monthly subscription on pyup.io, starting at $14.99”. Guess I won’t run that command then.

Writing JSON-LD

JSON (JavaScript Object Notation) is a simple file and data format for representing data. JSON-LD takes this and makes it slightly more complicated in order to represent Linked Data: on this Web page a person authored this review of a book, which has its own author, another person (or persons). The details quickly degenerate into semantic triples, contexts, more three-letter acronyms like RDF, etc. Schema.org has fairly simple examples of JSON-LD for a review, but they leave it unclear whether just writing "author": "skierpage" is enough for computers to figure out that the person writing the review is the person who runs this web site, or whether I have to go fully complicated:

"author": [
  {
    "@type": "Person",
    "name": "skierpage",
    "sameAs": "https://www.skierpage.com/people/skierpage/foaf.rdf"
  }
],

To have multiple book reviews on a web page, you can put them in a top-level “graph” object. What’s unclear is if the page should have a graph of books, each with a single review, or a graph of reviews, each of a single itemReviewed that’s a book.

{
	"@context": "http://schema.org/",
	"@graph": [{
		"@type": "Review",
		"author": {
			"@type": "Person",
			"name": "skierpage",
			"sameAs": "https://www.skierpage.com/people/skierpage/foaf.rdf"
		},
		"datePublished": "2011-04-01",
		"reviewBody": "The book has a nice cover.",
		"itemReviewed": {
			"@type": "Book",
			"name": "River of Gods",
			"isbn": "03-5091234-0344",
			"author": "Ian McDonald"
		},
		"reviewRating": {
			"@type": "Rating",
			"ratingValue": 4,
			"worstRating": 1,
			"bestRating": 5
		}
	},
	{
		... another review
	}]
}

Google’s validator doesn’t like the above; it complains the review is missing a description, publisher, and url. Isn’t all this obvious from the web page?

Maybe I don’t need author; https://schema.org/Review says “Please note that author is special in that HTML 5 provides a special mechanism for indicating authorship via the rel tag. That is equivalent to this and may be used interchangeably.”

You can dump a Python object with just json.dumps(bookReview), or there is a fancy pyld Python module that outputs JSON-LD. So I could have stock Python dictionaries with some of the unchanging stuff (me, worst/bestRating, etc.), to which I add review-specific info, or I could try to build the Linked Data structures and feed them into pyld.
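
Sticking with stock dictionaries, merging the unchanging bits into each review and assembling the @graph might look like this (a sketch; the helper names are mine, and pyld isn’t needed just to serialize):

```python
import json

# Constants that are the same for every review on the page.
ME = {"@type": "Person", "name": "skierpage",
      "sameAs": "https://www.skierpage.com/people/skierpage/foaf.rdf"}
RATING_BOUNDS = {"worstRating": 1, "bestRating": 5}

def review_node(book, rating, body, date):
    """One Review node, with the shared constants merged in."""
    return {
        "@type": "Review",
        "author": ME,
        "datePublished": date,
        "reviewBody": body,
        "itemReviewed": {"@type": "Book", **book},
        "reviewRating": {"@type": "Rating", "ratingValue": rating,
                         **RATING_BOUNDS},
    }

def page_jsonld(reviews):
    """The whole page's structured data: a graph of Review nodes."""
    return json.dumps({"@context": "http://schema.org/",
                       "@graph": reviews}, indent=2)
```

Whether Google prefers this graph-of-reviews shape or a graph of books each containing a review is the open question above; the serialization is the easy part.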

Summary: still working on this.

Posted in books, semantic web, software | Leave a comment

Nikola Motors and its hydrogen truck story

In the current pandemic crisis this is like kicking a man when he’s down, but I still read uncritical stories on Nikola Motor Corporation, a Tesla wannabe since the days when Tesla was still called “Tesla Motors.”

Nikola Motor has the attention span of a headless chicken and has been in endless hype mode for years. First it was going to use a gas turbine generator to power a big Class-8 semi truck. Then it switched to a breathtakingly grandiose scheme: zero-emissions hydrogen fuel cell trucks refueled at a network of 700 truck stops, all making hydrogen on-site with renewable energy. That $3+ bn story excited many suppliers of hydrogen electrolysis, storage, pumping, and fuel cells, who have struggled with anemic demand for their products from the stalled and tiny market for hydrogen fuel cell passenger vehicles. So Nikola got investments from them, from truck-parts supplier Bosch, and from truck body makers Fitzgerald and CNH/Iveco, all on the chance that the big idea might succeed and they’ll rake in the big bucks in orders. Of course, when you’re both a supplier and an investor, you’ll probably be robbing yourself to make Nikola’s costs work for years…

It’s not an insane strategy, just unlikely and high risk.

But since 2016 Nikola has unveiled a slew of pointless garbage concepts. The NZT offroad vehicle. The Reckless military vehicle. The WAV personal watercraft. The Badger pickup truck. Two more truck models. And it can’t even stick to the hydrogen story! The Nikola Two and Tre commercial trucks will also come in a battery-only version without a hydrogen fuel cell. Not one of these vehicles has reached production, let alone general sales. Then late in 2019 Nikola made a pure B.S. announcement that it acquired battery tech from an unnamed university that will double the energy density, reduce weight by 40%, and halve the cost of lithium-ion batteries. If that’s really true then it can scrap the inefficient hydrogen detour, in fact scrap truck manufacturing and just make billions selling its breakthrough battery.

While Nikola farts around, battery electric trucks are available, though not yet in the biggest semi size. Just as with hydrogen fuel cell cars, there is 20× more investment, more announcements, and more actual sales of battery vehicles than of HFCVs. You can buy battery electric buses and trucks right now, while hydrogen fuel cell vehicles are stuck in tiny demonstrations and pilot programs.

Nikola’s pitch for its hydrogen truck is to lease or sell the truck, maintenance, and fuel all-in for about $900,000 for a million-mile package, which is cheaper than diesel. But if the hydrogen doesn’t get really cheap, that package will not be profitable even when (if!) Nikola reaches scale on all the other parts of its scheme. Alas, “green” hydrogen from electrolysis remains much more expensive than making it from fossil fuels (primarily natural gas outside China). Bloomberg New Energy Finance thinks that by 2030 green hydrogen will still require carbon taxes to be cost-competitive. Sure, sometimes renewable energy is cheap, but if you only run the expensive electrolyzers when the sun is shining, you dramatically raise your capital cost per kilogram of hydrogen. If Nikola caves and gets its hydrogen from fossil fuels (the source of 95+% of all hydrogen currently used), that will annoy its hydrogen production and electrolysis investor/suppliers, and the optics of huge diesel trucks delivering dirty hydrogen to its truck stops will deservedly trash much of Nikola’s green cred.
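To see why cheap hydrogen is the linchpin of that package, here’s a back-of-envelope sketch. The $900,000 million-mile price is from Nikola’s pitch; every other number (fuel-cell efficiency, hydrogen prices) is an illustrative assumption of mine, not a figure from Nikola or from anywhere authoritative:

```python
# Back-of-envelope sketch of Nikola's million-mile package economics.
# Only the $900,000 package price is from Nikola's pitch; the rest are
# illustrative assumptions, not real figures.

PACKAGE_PRICE = 900_000   # truck + maintenance + fuel, all-in
PACKAGE_MILES = 1_000_000

package_cost_per_mile = PACKAGE_PRICE / PACKAGE_MILES  # $0.90/mile

# Hypothetical efficiency and hydrogen-price assumptions:
miles_per_kg_h2 = 7.5     # assumed fuel-cell semi efficiency
green_h2_price = 6.0      # assumed $/kg for electrolysis hydrogen
gray_h2_price = 2.0       # assumed $/kg for natural-gas hydrogen

fuel_cost_green = green_h2_price / miles_per_kg_h2  # $/mile, green H2
fuel_cost_gray = gray_h2_price / miles_per_kg_h2    # $/mile, gray H2

print(f"all-in package:       ${package_cost_per_mile:.2f}/mile")
print(f"green H2 fuel alone:  ${fuel_cost_green:.2f}/mile")
print(f"gray H2 fuel alone:   ${fuel_cost_gray:.2f}/mile")
```

Under these assumed numbers, green hydrogen alone eats roughly $0.80 of the $0.90-per-mile package, leaving almost nothing for the truck and its maintenance; only cheap (i.e., fossil) hydrogen makes the arithmetic close.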

Finally, CEO Trevor Milton has no engineering skills. “Big trucks avoiding the weight and recharge times of batteries by running on hydrogen that is produced at dedicated truck stops on routes.” Cool idea, bro, but ideas are cheap. What intellectual property, process innovation, or engineering breakthroughs does Nikola Motor Corporation have to realize the idea? Nothing.

Posted in eco | Leave a comment

skiing: technical wear as fashion

Keegan Brady wrote an article in GQ about the rise of “technical outerwear” in fashion. I wear and love this stuff while skiing, but once I’m off the mountain it goes in a storage tub.

He mentions the rise of The North Face jacket in the 1990s, but could have gone further, e.g. the Eddie Bauer/plaid flannel/Timberland boots rugged outdoor look from the late 1980s that accompanied the initial rise of the SUV. For centuries people have worn clothes to look as if they’re from somewhere exotic or doing something interesting, from sportswear to resort wear to surf clothes to today’s “I just descended the Matterhorn!” look.

Ever since Bogner in the 1970s moved from ski racing apparel to one-piece après-ski outfits for tanned Eurotrash, many, many technical ski and mountaineering clothing brands have lost credibility as they expanded to sell clothing to casual skiers and hikers, while any innovation they came up with was copied by the rest of the sub-industry. As Descente (zip-off racing shells), Spyder (advanced fabrics), The North Face (integrated hoods), etc. lost their cachet, new boutique high-end brands like Phenix (multi-layer shells), Killy, Kjus (integrated stretchy wrist gaiters with thumb holes), and Arc’teryx (boxy articulated knees and elbows, complex cuts, waterproof zippers) showed up as the new hot high-end technical wear. Arc’teryx has managed to expand into streetwear while remaining very expensive and fairly cutting-edge, so it still has some credibility on the mountain. (Though you need reinforced Kevlar or Cordura shoulders for carrying gear!!)

It’s silly to wear this clothing on the streets of a city – “technical” gear for what activities, exactly? – but fashion has always been about delirious dreams and dressing up. Nothing wrong with that, but if you’re just wandering around the city, why not wear clothes by Jhane Barnes that are beautiful to look at?

As worn by my heroes…

I’m intrigued by clothing lines like Veilance by Arc’teryx and Errolson Hugh’s intense Acronym, which divorce themselves from any sport and aim only to be meaninglessly extreme technical streetwear for its own sake. William Gibson loves this stuff (and thinks eloquently about clothing):

Errolson Hugh with William Gibson wearing Acronym gear in 2017
Sorry Mr. Gibson, you’re still not a tactical urban ninja
(“Uncle Bill” Instagram post by Errolson Hugh on the left @erlsn.acr February 24, 2017)

and so does John Mayer. Maybe I could join them… but I’m not inspired to open my wallet for $700+ clothing items without trying them on, and since Jhane Barnes exited menswear 😢 I never go to fancy clothing stores.

Posted in design, skiing | Leave a comment

music: relationship advice or Cocteau Twins?

Quick quiz: who wrote “Intimacy is when we’re in the same place at the same time. Dealing honestly with how we feel, and who we really are. That’s what grown-ups do; that is mature thinking”?

A: noted relationship philosopher Elizabeth Fraser sang this in the exquisite “Half-Gifts”! (full lyrics). Read The Guardian piece “Elizabeth Fraser: the Cocteau Twins and me”: the Cocteau Twins made their most nakedly beautiful music after Ms. Fraser was broken down by tensions and her failed relationship with band member Robin Guthrie.

“Half-Gifts,” from the album Milk & Kisses and the compilation Lullabies to Violaine

I had grasped some of the lyrics, though I thought she was saying “I have no friends” at the end. Her journey to partial intelligibility from the ineffable mystery of the songs on Blue Bell Knoll and earlier is interesting; there’s a good collection of her comments on it. The mystery lies in the incoherence, so you should only read fans’ guesses at the lyrics of a song after listening to it over and over.

As I’ve said before, Prince mentioned he listened to the Cocteau Twins and that’s all it took for me.

Posted in music | Leave a comment

making movies surrounded by real virtual environments

“The Volume” for The Mandalorian at Manhattan Beach Studios

I’ve never seen the show, but this in-depth article on filming “The Mandalorian” is fascinating. Instead of filming actors in front of an enormous green screen and later replacing it with CGI backgrounds and special effects, as you see in Game of Thrones “making of” featurettes, the actors perform in “the Volume,” encircled by 270 degrees of video wall plus a video ceiling that display the surroundings of the scene, rendered photorealistically from the point of view of the camera lens as they film! It’s the old technique of projecting scenery behind actors in a car while someone pretends to steer it, times 10,000; the Holodeck from Star Trek: The Next Generation brought to life.
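The core trick is plain perspective geometry: each virtual point has to be drawn on the wall exactly where the tracked camera would see it, so the wall image changes whenever the camera moves. Here’s a toy sketch of that geometry (made-up coordinates, and emphatically not ILM’s actual StageCraft pipeline):

```python
# Toy illustration of the Volume's geometry: draw a virtual 3D point on a
# flat wall where the camera would see it, by intersecting the camera->point
# ray with the wall plane.

def project_to_wall(camera, point, wall_x):
    """Where on a wall in the plane x = wall_x should a virtual 3D point be
    drawn so it looks correct from `camera`? Returns (y, z) on the wall."""
    cx, cy, cz = camera
    px, py, pz = point
    # Parametric ray from camera through the point: x = cx + t * (px - cx).
    # Solve for t where the ray crosses the wall plane.
    t = (wall_x - cx) / (px - cx)
    return (cy + t * (py - cy), cz + t * (pz - cz))

# A virtual mountain 100 m beyond a wall 5 m from the camera:
print(project_to_wall(camera=(0, 0, 1.5), point=(105, 20, 31.5), wall_x=5))

# Move the camera sideways and the same mountain lands on a different spot
# on the wall; this is why the wall must be re-rendered live from the
# tracked camera pose.
print(project_to_wall(camera=(0, 2, 1.5), point=(105, 20, 31.5), wall_x=5))
```

The rear-projection trick of old car scenes skipped this step entirely (the background never responded to the camera), which is exactly the advance the article describes.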

The consequences of this are far-reaching. They can shoot a scene at desert dawn for 10 hours straight. The actors see their surroundings; they don’t have to imagine them. One consequence you wouldn’t think of: it removes much of the need for set lighting. The wall of LEDs *is* the ambient light of that desert dawn (although the technique doesn’t work as well for direct sunlight). It also means metallic objects naturally reflect accurate details of the scene.

The Star Wars environments tend to be based on real places on Earth, so they have a “scanning and photogrammetry team that would travel to locations such as Iceland and Utah to shoot elements for the Star Wars planets. … the scanner straps six cameras to their body which all fire simultaneously as the scanner moves about the location.” (Nice job!) And then, instead of doing location scouting to imagine what filming will be like, “the director and cinematographer can go into the virtual location with VR headsets and do a virtual scout. Digital actors, props and sets are added and can be moved about and coverage is chosen during the virtual scout.”

Posted in software | Leave a comment