eco: Congressman Mo Brooks can’t do math

This morning Google Now featured excitable stories like “Bipartisan Panel of Scientists Confirms Humans are NOT Responsible for Past 20,000 Years of Global Warming.”

Indeed the brave iconoclast Mo Brooks (Alabama 5th District) on July 11th got Dr. Twila A. Moon to say “So, I would agree that when it began 20,000 years ago when we were coming out of the last glacial that was not caused by humans. The warming of the last 100 years, most certainly was.” Still, the latest denialist tack has a certain nihilistic appeal: if warming happens whether we’re around or not, what’s the big deal? Earth is just warming up out of an ice age, nothing to see here.

But what nobody seems to have noticed is that the howler is in Mo Brooks’ framing remarks. Quoting from his own press release:

By way of background, during the last glacial maximum of roughly 20,000 years ago:

  • Average global temperatures were roughly 11 degrees Fahrenheit COLDER than they are today (per Zurich University of Applied Science). Stated differently, global temperatures have risen, on average, roughly 0.5 degrees Fahrenheit per century over the past 20,000 years.

See the problem? Call it 10°F to keep the numbers round: 10°F ÷ 20,000 years = 0.0005°F per year = 0.05°F per century. He’s wrong by a factor of 10 about the warming per century! Stated another way, if temperatures had risen 0.5°F per century for 20,000 years as he claims, it would be roughly 100°F warmer now.
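The arithmetic is simple enough to check in a few lines of Python (using the rounded 10°F figure):

```python
# Brooks' own (rounded) numbers: ~10 °F of warming since the last glacial maximum.
warming_f = 10.0
years = 20_000
centuries = years / 100      # 200 centuries

per_century = warming_f / centuries
print(per_century)           # 0.05 °F per century, not the 0.5 Brooks claimed

# Sanity check: 0.5 °F per century sustained for 200 centuries...
print(0.5 * centuries)       # 100.0 °F of total warming
```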

Please, if you know a journalist, help me expose this math-illiterate idiocy. This garbage is being promoted by Rush Limbaugh and a bunch of right-wing sites, it needs the same ridicule that Mo Brooks got last year for suggesting rocks falling into the sea are causing sea levels to rise.

Yes it’s warmed since the ice age. But the recent warming is of a different magnitude. NASA’s Earth Observatory summarizes it (emphasis mine):

As the Earth moved out of ice ages over the past million years, the global temperature rose a total of 4 to 7 degrees Celsius over about 5,000 years. In the past century alone, the temperature has climbed 0.7 degrees Celsius, roughly ten times faster than the average rate of ice-age-recovery warming.

Models predict that Earth will warm between 2 and 6 degrees Celsius in the next century. When global warming has happened at various times in the past two million years, it has taken the planet about 5,000 years to warm 5 degrees. The predicted rate of warming for the next century is at least 20 times faster. This rate of change is extremely unusual.

Finding a good graph

Alas, that web site doesn’t have a graph of temperature over the last 20,000 years. If you zoom out too much, the recent warming doesn’t show up; if you zoom in, you can’t see the slow warming over millennia followed by a century of rapid warming on the same graph. The best medium-term graph I’ve seen is XKCD comic #1732, so (as the meme goes, “Obligatory XKCD”) I’ll end this by including it in full. (I’m allowed to; it’s under a Creative Commons Attribution-NonCommercial 2.5 License.)

Posted in eco | Leave a comment

audio: pavement back to the penthouse

Audiophile confession time: for months I’ve been listening to music on a $200 Jambox the size of a wine bottle, because one of my 40 kg VTL Signature 450 tube power amps blew up all my spare fuses for reasons as yet unknown.

Bryston and VTL power amplifiers

Friends loaned me a Bryston 3B amp (the smaller box in the pic) and… wow, baby’s back!! Bass is leaner, “polite,” but it’s part of the music extending octaves downward instead of a tubby sound effect. Stereo isn’t a left-right direction but life-size musicians spread out over 3 or more meters and sometimes front to back. And the Magnepan ribbon tweeters (from beautiful downtown White Bear Lake, MN) are lightning. Even at low volumes Joni Mitchell’s dulcimer is cold blue steel.

I enjoyed music fine without it, the change isn’t black & white to color, but the opening up in sound and space of a good hi-fi is a wonderful intensifier. Hear some before you die.

As to whether the solid state sounds better than 8 finicky Russian tubes… who knows? I am more and more distrustful of my ability to tell the difference between electronics. The Bryston amp in bridged mono mode is about 2.5 dB louder than the VTL so I turn the balance control towards the tube amp to roughly equalize their output levels, but even with a mono source switching between equally loud identical music, maybe the left speaker has better reverberation in my room, or my right ear is misshapen, or …

Posted in audio | Leave a comment

music: Lil Nas X and pop today

Meaghan Garvey wrote an insightful, funny review of Lil Nas X for NPR that’s also about the state of pop music.

“the exquisite schadenfreude of the biggest pop stars on the planet, including Drake and Taylor Swift, being denied the Hot 100’s No. 1 spot for three months and counting thanks to a cowboy novelty song from a former Nicki Minaj stan account operator. …
maybe for his next act he’d be a fishing boat captain, sending sea shanties with trap drums up the SoundCloud charts.”

(A “stan” is an overzealous or obsessive fan of a particular celebrity.)

Older folks, watch the video (Chris Rock!) to avoid being hopelessly out of touch 🤠🐎 The Genius lyrics site can explain some of the references such as “Lean all in my bladder.”

Posted in music | Leave a comment

music/audio: great recording of great playing

I listen to the NPR news summary on the main NPR page and the web page often features Tiny Desk concerts recorded at a not-so-tiny desk at some NPR office (and promoted by cute hedgehogs). Many are excellent. I grooved on Thundercat playing with Mac Miller, a guy I barely knew about, only to find he’d died.

What’s impressive about these live videos is the sound. Watch this thrilling performance by Tedeschi-Trucks band:

That’s 11 people including two drummers recorded in the middle of an office. It’s crazy how good it sounds! Listen to the turn starting around 9:20 from the sax solo that ends “Don’t Know What It Is” (by Kofi Burbridge who passed away in 2019), to Trucks’ sparse transition, to Tedeschi’s spine-tingling “Aooohuoooh” opening “Anyhow” (watch how she looks at him at 10:20 💕🎶), and into that building horn chart. You can hear everything by everyone as it crests.

Well, here’s why: two really good audio engineers, Josh Rogosin and Kevin Wait, a lot of high-quality sometimes expensive microphones, and a lot of experience putting them together. These live performances often sound better than the bands on the late night talk shows; performances on Jimmy Kimmel Live in particular often have thin hard-to-hear vocals.

Troll time

Someone commented on how great the sound is, and I made up this explanation, and someone else thought I was serious.

Watch the Tiny Desk videos where audio engineer Josh Rogosin talks about the care he takes selecting from a multitude of quality, some very expensive, microphones, then positioning them to get great sound from the middle of a ^%$#@! open-plan office. But… it’s all a lie! The beer bottles on the shelf behind Derek Trucks are disguised Helmholtz resonators. The shelves of vinyl LPs and books are actually CNC-milled diffusers, you never see an album out of place because they’re solid Sitka spruce. The desk divider in front of Susan Tedeschi is a $4,000 anechoic panel. The 3-D “Bob Boilen” and “AllSongs Considered” cut-outs are actually bass traps stuffed with shaved yak fur to tune ceiling reflections. Etc.

That is the only explanation why Tiny Desk Concerts consistently sound better than artists on Saturday Night Live, Jimmy Kimmel Live, and the other late-night talk shows. That “office” is a $2,000,000 recording studio. The truth is out there.

Posted in audio, music | Leave a comment

music/web: Sundays spent disambiguating

The Google Now screen on my phone does a good job presenting news relevant to me. It struck gold when it displayed “new album out now” by The Sundays, zOMG!! After 22 years, out of nowhere they deliver a new album around Harriet Wheeler’s astounding voice and David Gavurin’s chiming guitar work!

Oh noes, it’s actually an unrelated Japanese band.

SUNDAYS in 2016
These fine folk are SUNDAYS(サンデイズ)
press pic of The Sundays
but I don’t think they’re The Sundays

2nd example: Google Play Music, YouTube, and Genius show Korean songs by 코듀로이 (“corduroy”) as by the English acid jazz group Corduroy of whom I’m a big fan. Yes her name translates as “corduroy,” but she’s a different artist!

3rd example: GPM found a sweet cover of Burt Bacharach’s light adult pop song “Knowing When to Leave,” made in 1998 by Casino, which Google and English Wikipedia agree is a rock/alternative band from Birmingham. Crazy genre-defying work? I finally figured out that the British Casino didn’t even form until 2003 and the song is actually by an obscure Icelandic band also called “Casino” together with Páll Óskar Hjálmtýsson. It’s part of an entire album of sincere/camp/tongue-in-cheek recordings of late 1960s/early 1970s hip music that is not just in stereo, it’s called “Stereo.”

The band “Casino” but not THE band “Casino”

4th example, then I’ll stop: Google Now then alerted me to the new album by progressive rock masters Yes, named “Chet.” Well, my hero Steve Howe is a fan of Chet Atkins, so it’s possible…


Nope, it’s obviously a different band. Come on, punctuation matters! Just because the band name has a comma in it is no excuse to get it wrong. I’m going to release music by “The Bea[Unicode ZERO-WIDTH NO-BREAK SPACE]tles” to see how many people I can scam 😉

Spotify is also confused about who made this:

search results for 'Yes Plis Chet'...
Comma? Ampersand? Confusion!

And though Amazon seems to know it’s by “Yes & Plis,” if you ask for more about the band you can tell Amazon is commingling the songs like a shelf of widgets in its warehouse ostensibly sold by different companies.

Amazon Unlimited when you click the band name for 'Chet'...
All the classic rock plus one sore thumb

Get a Q!

I believe Google Play Music, YouTube, and these other services rely on what the music labels provide, and/or just do a string search. But this doesn’t work when band names are translated into English, or have weird punctuation, or contain another group’s name, or a band lazily/intentionally reuses an existing band name, …

If only there were a vendor-neutral way to identify and disambiguate entities in the world. Of course there is: Wikidata! “The Sundays” are the entity Q3122789 in Wikidata, an instance of a band; then some person or bot added another entity Q17231144 that is also an instance of a band, also labeled “The Sundays” (until I edited it, see below). Same name, two different things.

So these bands can be distinguished, but actually doing it is a hard problem as long as humans are in the loop: I don’t see Japanese press release writers writing “FOR IMMEDIATE RELEASE: The Sundays (Q17231144) release new album!” so that Google can disambiguate, nor will the people who translate that press release into English (whence I assume Google got excited on my behalf) add a note “not the Q3122789 English band.” 🙂

Moreover, as I’ve written before, I’m convinced Google doesn’t actually want a semantic web where web pages tell computers what they mean; it wants a messy, confused bunch of pages so that it can apply massive AI to this kind of disambiguation, so that only Google can provide good context-specific answers to questions like “What’s the last album from the Sundays?” (Also, the moment you make it easier for pages to say what they’re about, a bunch of boner and diet pill pages will immediately semantically identify themselves as “Latest news about Kardashian family” or whatever is a popular search term.) But then it’s frustrating to see Google itself get it wrong.

What’s also frustrating is that others fixed the Genius lyrics site to distinguish “Corduroy”, “Corduroy. (band)” [note the period, sic(k)!], and “Corduroy (Korea)”, but the same cleanup has to be repeated on every data-driven web site. Q numbers to rule them all!
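A toy Python sketch of why that is: matching on labels collides, matching on Q numbers never does. (The Q-IDs and descriptions are the real Wikidata ones discussed here; the lookup code is mine, purely for illustration.)

```python
# Two different bands, one shared name -- keyed by Wikidata Q-ID.
bands = {
    "Q3122789": {"label": "The Sundays", "description": "1990s English alternative rock band"},
    "Q17231144": {"label": "SUNDAYS", "description": "Japanese rock band"},
}

def find_by_label(label):
    """A string search returns every band whose label matches -- ambiguous."""
    return [qid for qid, b in bands.items() if label.lower() in b["label"].lower()]

print(find_by_label("sundays"))             # both Q-IDs -- which band did you mean?
print(bands["Q3122789"]["description"])     # a Q-ID lookup is unambiguous
```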

Cleaning up Wikidata

Like Wikipedia, anyone can edit Wikidata information. The Japanese Wikipedia article that seems to have generated the duplicate “The Sundays” entity in Wikidata is actually titled SUNDAYS, so I changed the English label of Q17231144 to “SUNDAYS” and added the English description “Japanese rock band”; I also added the English description “1990s English alternative rock band” to Q3122789 to help avoid further errors.

What’s odd is the Japanese band’s Wikidata page includes a bunch of identifiers for the English band in other online databases: the VIAF identifier 126826622, the Bibliothèque nationale de France identifier 13926837j, the International Standard Name Identifier identifier 0000 0001 1087 4877, the Library of Congress authority ID n91122952, etc. All of the data in these other databases seems to apply to the English band, but all were missing from the English band’s Wikidata page. I suspect some automated bot found the Japanese “The Sundays,” incorrectly linked it to the VIAF identifier for the English “The Sundays,” and that in turn prompted other bots to add all those other identifiers to the wrong band. It seems poor design that an entity that obviously conflicts with another “band named ‘The Sundays'” entity gets all these automated identifiers for the other thing added to it.

The Icelandic group Casino doesn’t seem to have a Wikidata page… meanwhile Wikidata already has two “Casino” instances of a band, the well-known 2000s Casino from English Wikipedia and another from Dutch Wikipedia that describes a one-off British band. As with the two “Sundays”, they have overlapping external identifiers, in fact some bot mistakenly linked both of them to the Billboard artist page for an unrelated rapper who calls himself “Casino.” And on Google Play Music, the artist “Casino” identifies as the 2000s English “rock/alternative band,” but most of the songs and tracks are clearly by black rapper(s) who adopted the moniker “Ca$ino” without caring about existing European bands.

Posted in music, semantic web | Leave a comment

music: the Isley Brothers

3 + 3 > 5, ergo The Isleys were more talented than the Jacksons, QED.

On “The Highways of My Life” brother-in-law Chris Jasper is immortal on piano and ARP synthesizer. Wikipedia says Stevie Wonder was making Innervisions down the hall at the Record Plant at the same time and both worked with visionary synth engineers Robert Margouleff and Malcolm Cecil! Soul/R&B ruled 1973 (Court and Spark came out January 1974).

Posted in music | Leave a comment

eco: immediate action, disaster much later

We Have Five Years To Save Ourselves From Climate Change, Harvard Scientist Says

This is true, as is “We have 12 years to avoid catastrophe” and every other prediction of doom, but it is really hard to explain. We definitely won’t have 3 meters of sea level rise in 5 years or even by 2030. The climate, and shoreline, will likely be a bit worse than it is now. But without exceptionally rapid and drastic action now (that is, stopping burning all fossil fuels in the next 5-12 years) we will have locked in warming that takes us to that ultra-disaster of underwater cities and 200,000,000+ climate refugees over several steadily-worsening decades, and no one is sure exactly when. At this point doubters conclude “They’re just fear-mongering liars, apocalypse is not actually going to happen in 5 years or even 12 years, nobody can really say, Algore said it would be awful years ago, still hasn’t happened.” The worst of the crisis is always the apocalyptic climate N decades from now, unless we take drastic action that we should have started a decade ago. That’s a perfect recipe for defeatism and delay.

Children understand the problem, but a lot of well-meaning adults don’t 😢. Bill Nye swearing and blowtorching a globe isn’t going to change them, they’ll change when the present gets bad enough, when it will be far too late.

I tried to find models of the climate beyond 100 years from now. There aren’t any, because it’s so different from what we have now that the models don’t work. 😱 We don’t even know the current rate of warming because the climate zig-zags upwards. We’ll know in 20 years if it was 0.1 or 0.25°C per decade. Either will have been a disaster.

Posted in eco | Tagged | Leave a comment

web: the unparalleled genius of Sir Tim

It is 30 years since the invention of the World-Wide Web.

Tim Berners-Lee stood on the shoulders of giants, but the Web wasn’t just an amalgamation of existing ideas. He and Robert Cailliau created:

  • A HyperText Markup Language, human-readable but easy enough for machines to parse and generate. A web page is a complete HTML document.
  • A means of referencing HTML pages across the Internet, called Uniform Resource Locators. But a URL can refer to any kind of resource, not just web pages but also plain text, pictures, other files, and even notions like “the last 7 blog posts by skierpage.” (And actually URLs are a specialization of Uniform Resource Identifiers, which let you refer to other protocols as well.)
  • A specification of the protocol, called HyperText Transfer Protocol, by which a client requests a URL from a server computer and the server responds with the requested document.
  • Free open source software that implemented all this:
    • a software library implementing the protocol
    • software for an HTTP server, called httpd (hypertext transfer protocol “daemon”)
    • software to display and edit HTML pages; we now call the display part a “web browser.”
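The pieces interlock: the URL names the resource, HTTP fetches it, and HTML is what comes back. As a rough illustration (not period software; just Python’s standard urllib.parse and the general shape of an HTTP request), here is how a browser-like client might turn a URL into the request it sends:

```python
from urllib.parse import urlsplit

def request_for(url):
    """Build the raw HTTP request text a browser would send for this URL."""
    parts = urlsplit(url)
    path = parts.path or "/"          # the root page if no path is given
    if parts.query:
        path += "?" + parts.query
    return f"GET {path} HTTP/1.0\r\nHost: {parts.hostname}\r\n\r\n"

# The very first web page, still served by CERN:
print(request_for("http://info.cern.ch/hypertext/WWW/TheProject.html"))
# GET /hypertext/WWW/TheProject.html HTTP/1.0
# Host: info.cern.ch
```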

Nothing new here?

As this great 30-year summary makes clear, there was a ton of prior art.

Markup languages weren’t new

HTML identifies blocks of text as <P> (paragraph), <H1> (heading level 1), etc., and spans of text as <B> (bold), etc. The idea of marking up blocks of text instead of inserting typesetter codes for a particular printer wasn’t new, and HTML was a simplification of the existing SGML.
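“Easy enough for machines to parse” is no exaggeration. As a modern illustration, Python’s standard html.parser can pull the tag structure out of a page in a dozen lines (the page snippet is made up):

```python
from html.parser import HTMLParser

class TagLister(HTMLParser):
    """Collect the block and span tags in the order they appear."""
    def __init__(self):
        super().__init__()
        self.tags = []
    def handle_starttag(self, tag, attrs):
        self.tags.append(tag)

page = "<h1>Widget 9000</h1><p>See the <b>manual</b>.</p>"
parser = TagLister()
parser.feed(page)
print(parser.tags)   # ['h1', 'p', 'b']
```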

Hypertext wasn’t new

Hypertext wasn’t new. In fact in a related article John Allsopp says “Tim Berners-Lee proposed a paper about the WWW for Hypertext ’91, a conference for hypertext theory and research. It was rejected! It was considered very simple in comparison with what hypertext systems were supposed to do.” This is a fantastic historical footnote, and the conference organizers weren’t stupid!

The moment you put technical information on a screen, it is completely obvious that the reader should be able to jump to an explanation of a technical term, to jump from an entry in the table of contents or index to the section that’s referenced, and to jump from the phrase “See How to Install the Widget 9000” to… the How to Install chapter. In a former life writing technical documentation printed on paper, I looked at electronically publishing manuals using hypertext systems like Folio and OWL. The programs that displayed hypertext resembled web browsers; they even had conventions like underlines for links and, as I recall, a back button to go back after following a link.

wish I had a screenshot of the help system itself

Yet another protocol…

Protocols to access remote computers over the Internet weren’t new. There was File Transfer Protocol to transfer files, Simple Mail Transfer Protocol to move email around (you retrieved yours with the likes of Post Office Protocol), and even a Gopher protocol to browse information on that computer. (Many people using the Internet at the time thought Gopher would be the glue linking between computers.)

At nearly the same time, Wide Area Information Server (WAIS) was “a client–server text searching system that uses the ANSI Standard Z39.50 Information Retrieval Service Definition and Protocol Specifications for Library Applications (Z39.50:1988)” to search index databases on remote computers.

So what was new?

In a nutshell, linking within hypertext to another computer system, to possibly get more hypertext, blew people’s fragile little minds.

Those hypertext systems I mentioned operated within a local file. You opened Widget9000Setup.NFO in the viewer program and happily jumped around between sections, index, and paragraphs, but there was no “jump to manufacturer’s server on the Internet for latest service bulletins,” there was no “Here’s a hypertext list of other hypertext files on the Internet about Widget 9000 customizations.” The companies selling hypertext authoring software probably fantasized about getting thousands of people to buy their proprietary software to author their own parts of a federated set of hypertexts, but they didn’t have the vision, and a single commercial vendor would have really struggled to establish their file format as a network standard.

A server is hard but powerful

Because links in HTML can go to other computers, the Web requires a separate program running on that remote computer to respond to requests (although you can open local files on your computer in your browser without involving a server). The hypertext software makers must have laughed. “So in addition to our hypertext viewer program running on the user’s computer, you have to get the I.T. department to install and run an extra software daemon to respond to all these requests for bits of hypertext? That is the stupidest and most overkill approach imaginable! Just give people the file with all 70 pages and illustrations of our Widget 9000 instruction manual in it.” Remember, by definition there were no manufacturers’ web sites yet, and people and computers communicated over slow modems. Making requests to other computers was theoretically useful, but not just to get the next little chunk of hypertext.

Because everything Sir Tim developed at CERN was open source, and because the HTTP protocol was relatively simple, and because Unix was very common on servers, it turned out that having to run an HTTP server wasn’t a big barrier.

Uniform/Universal/Ubiquitous Resource Locator

The URL itself is genius. There were computers on the Internet that you could contact, mostly run by computer companies and universities and labs like CERN. Many of them supported File Transfer Protocol so you might be able to get a list of public files to download. Some of them even supported the Gopher and WAIS protocols I mentioned above that presented a friendlier list of files. But mostly if you connected to a computer, it was to log in and type computer commands. You could imagine a hypertext page having a “check for Widget 9000 availability” link that would connect to the company’s server as a remote terminal and maybe even simulate pressing C(heck for inventory) then typing Widget [Tab] 9000 [Enter] – all the pecking away at keyboards that staff used to do when you asked for a book at the library or checked in to a flight. But the poor hypertext author would have to write a little script for every single computer server. A URL can encode the request as a single thing that fits into the HTML page; it’s like a hypertext system’s “Jump to the Troubleshooting section” link but infinitely richer.

It’s human visible

The Widget 9000 availability URL is probably quite complex, maybe check.asp?part=widget9000. But you can see it in your browser’s location field, it probably makes sense, and it is irresistibly tempting to fiddle with it, aka hack on it: what if I substitute “ferrari355” for “widget9000”?
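The same fiddling is easy to do in code. A sketch with Python’s standard urllib.parse (the check.asp URL and part names are the hypothetical examples from above):

```python
from urllib.parse import urlsplit, urlunsplit, parse_qs, urlencode

url = "http://example.com/check.asp?part=widget9000"

# Pull the query string apart, swap the parameter, and reassemble the URL.
parts = urlsplit(url)
query = parse_qs(parts.query)
query["part"] = ["ferrari355"]
hacked = urlunsplit(parts._replace(query=urlencode(query, doseq=True)))
print(hacked)   # http://example.com/check.asp?part=ferrari355
```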

Similarly, you can view the source “code” of an HTML page. I taught myself the rudiments of HTML just by guessing or recognizing what tags like TITLE, P, A HREF=, etc. did. You could write the markup for something simple by hand; the home page of my web site and some other sections are still prehistoric hand-written HTML. (Those golden days are gone now that most web pages are generated server-side by over-complex content management systems and each loads 10 JavaScript libraries and 7 ad networks and Like this on Facebook / Tweet this / Pin it buttons.)

The Web could subsume other systems

Because the client (usually a browser under the command of a person) makes requests to a server program, the Web can subsume or impersonate other systems. A simple computer program can output a Gopher category list or a directory listing as a basic HTML page with a bulleted list of links (more on this). As the Web gained mindshare among developers, people built the bridges to all the other protocols, and so a browser turned into a do-anything tool, and URLs became the lingua franca for any kind of request across the Internet. Thirty years of innovation built upon the simple clear underlying ideas of the Web.
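Such a bridge program really can be simple. A hedged sketch of one: turn any list of names and links (a Gopher menu, an FTP directory) into the basic HTML page a browser expects (the entries are made up):

```python
from html import escape

def listing_to_html(title, entries):
    """Render (name, url) pairs as a minimal HTML page with a bulleted list of links."""
    items = "\n".join(
        f'<li><a href="{escape(url)}">{escape(name)}</a></li>'
        for name, url in entries
    )
    return (f"<html><head><title>{escape(title)}</title></head>\n"
            f"<body><ul>\n{items}\n</ul></body></html>")

print(listing_to_html("Widget 9000 files", [
    ("Instruction manual", "/widget9000/manual.html"),
    ("Service bulletins", "/widget9000/bulletins.html"),
]))
```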

Posted in web | Leave a comment

music: how to contribute scanned lyrics to the web

Lyrics are everywhere on the web, yet I regularly come across popular songs whose lyrics are nowhere to be found. Sometimes I have a CD or LP on the shelf with the missing lyrics printed in it! Time to make a little more of the sum of human knowledge available… Here are my notes on the process.

Where to contribute lyrics?

Many sites let users contribute and update lyrics. Ideally there would be a non-commercial, user-supported über-repository of lyrics, but if there is one I can’t find it. All lyrics sites seem to be ad-supported (I don’t see the ads because I use the uBlock Origin ad-blocker). The worst are the sites that optimize their pages to fake out Google search so they show up high in search results for e.g. “Linx You’re Lying lyrics,” but when you visit them the page’s only content is “Be the first to contribute the missing lyrics of You’re Lying by Linx! Kthxbye.”

LyricWiki? (No)

The obvious contender is LyricWiki. It uses the same underlying MediaWiki software as Wikipedia, but it’s on the ad-supported Wikia platform that Wikipedia founder Jimmy Wales created. It has a genuine community trying to do a good job. I made many cleanup edits and added a few songs in 2008-2011. The problem with LyricWiki is that nothing is created for you; you have to hand-create page after page of wiki text. So (using the example of adding the lyrics of Max Tundra’s Mastered by Guy at The Exchange album): first you have to add a bit of fiddly markup to the band’s page for the album:

==[[Max Tundra:Mastered By Guy At The Exchange (2002)|Mastered by Guy at The Exchange (2002)]]==
 {{Album Art|Max Tundra - Mastered By Guy at the Exchange.jpg|Mastered by Guy at The Exchange}}
# '''[[Max Tundra:Merman|Merman]]'''
# '''[[Max Tundra:Mbgate|Mbgate]]'''

then you have to create the album’s page with more fiddly markup listing each song all over again:

 |artist    = Max Tundra
 |album     = Mastered by Guy at The Exchange
 |genre     = Electronic
# '''[[Max Tundra:Merman|Merman]]'''
# '''[[Max Tundra:Mbgate|Mbgate]]'''

then you have to create a page for each song that points back to the album with even more fiddly markup, and then you provide the actual value, the lyrics themselves:

 |song     = Merman
 |artist   = Max Tundra
 |album1   = Max Tundra:Mastered By Guy At The Exchange (2002)
 |language = English
 |star     = Bronze
I'm feeling flirty
Must be you heard me
My knee is hurty

Even if you’re fluent in MediaWiki markup and templates, it is pointless error-prone duplication to keep repeating the artist, album, and track name on every page. Instead, adding a lyric should be a single database action that automatically adds the song to the artist’s page and the album’s page.
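What that single database action might look like, sketched with Python’s built-in sqlite3 (the schema is my invention, not how LyricWiki or any lyrics site actually stores things):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE artist (id INTEGER PRIMARY KEY, name TEXT UNIQUE);
    CREATE TABLE album  (id INTEGER PRIMARY KEY, artist_id INTEGER, title TEXT);
    CREATE TABLE song   (id INTEGER PRIMARY KEY, album_id INTEGER, title TEXT, lyrics TEXT);
""")

def add_lyric(artist, album, title, lyrics):
    """One call files the song under its album and artist -- no markup repeated by hand."""
    db.execute("INSERT OR IGNORE INTO artist (name) VALUES (?)", (artist,))
    artist_id = db.execute("SELECT id FROM artist WHERE name = ?", (artist,)).fetchone()[0]
    row = db.execute("SELECT id FROM album WHERE artist_id = ? AND title = ?",
                     (artist_id, album)).fetchone()
    album_id = row[0] if row else db.execute(
        "INSERT INTO album (artist_id, title) VALUES (?, ?)", (artist_id, album)).lastrowid
    db.execute("INSERT INTO song (album_id, title, lyrics) VALUES (?, ?, ?)",
               (album_id, title, lyrics))

add_lyric("Max Tundra", "Mastered by Guy at The Exchange", "Merman",
          "I'm feeling flirty\nMust be you heard me\nMy knee is hurty")
print(db.execute("SELECT count(*) FROM song").fetchone()[0])   # 1
```

The artist and album pages then become queries over this one table instead of hand-maintained duplicates.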

So Genius!

Genius came out of annotating rap lyrics. It has a nice interface for adding song lyrics to albums, a solid community, and lets people comment on songs and individual lines. So I went there.

Scanning and converting to text

On my all-in-one printer I scanned the record sleeves and CD booklets with the lyrics at high resolution and saved them as PDFs. Then I used gImageReader-qt5 for Linux to do optical character recognition. This works impressively well! It handled blue-on-pink text and automatically identified each block of text. You then delete the blocks you don’t want recognized, such as image captions and “Thanks to Kev and Fender guitars,” trigger OCR, and it gives you a big chunk of recognized text.

Case conversion

Some lyrics that I scanned were printed entirely in UPPER CASE. There are many ways to convert case, but the wrinkle is I want the first word of each line to remain capitalized; also a bit of smarts about proper names, the word “I,” and such would be nice. I found a web-based converter that does the right thing in its Sentence case mode; it saved me hacking up my own tool. The other nice thing about web-based converters is that the textarea with the converted text is in the browser, and Firefox highlights many misspellings due to mis-recognition, such as “allbi” instead of “alibi.”
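For the basic rule, a small Python function does the job (this is my own sketch, not the converter I used; it keeps “I” capitalized, but a proper-name list is left as an exercise):

```python
def sentence_case(lyrics):
    """Lowercase SHOUTED lyrics, keeping each line's first word and 'I' capitalized."""
    out = []
    for line in lyrics.splitlines():
        line = line.capitalize()   # first letter upper, the rest lower
        words = ["I" if w == "i" else w for w in line.split(" ")]
        out.append(" ".join(words))
    return "\n".join(out)

print(sentence_case("MUST BE YOU HEARD ME\nAND I HAVE NO ALIBI"))
# Must be you heard me
# And I have no alibi
```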


Genius wants simple ASCII for lyrics: simple quotation marks, hyphens rather than em-dashes, no ligatures like “fi,” etc. Unfortunately gImageReader doesn’t have an option to output only simple ASCII. To find the problematic characters, I used this command line to search for any character that isn’t ASCII:

rg '\P{ascii}' *lyrics.txt

(rg is ripgrep, a better text search program than the venerable grep.)
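A Python equivalent that goes one step further and substitutes the plain-ASCII characters Genius wants (the replacement table is mine and far from complete):

```python
REPLACEMENTS = {
    "\u2018": "'", "\u2019": "'",       # curly single quotes
    "\u201c": '"', "\u201d": '"',       # curly double quotes
    "\u2014": "-", "\u2013": "-",       # em and en dashes
    "\ufb01": "fi", "\ufb02": "fl",     # ligatures
}

def asciify(text):
    """Swap typographic characters for ASCII and report anything left over."""
    for bad, good in REPLACEMENTS.items():
        text = text.replace(bad, good)
    leftovers = sorted({ch for ch in text if ord(ch) > 127})
    return text, leftovers

clean, odd = asciify("I\u2019ve got no \u2018alibi\u2019 \u2014 none")
print(clean)   # I've got no 'alibi' - none
print(odd)     # []
```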

My IQ goes up, the kudos roll in

To keep us unpaid suckers working, Genius has gamified (horrible word) contributions in the form of “IQ points.” When you add a wanted song, you get points. When you identify the song parts (verse, chorus, bridge, etc.) you get more points. More points give you more rights – I can now add a new song and edit a track list, but I still can’t add an entire new album or state that Peter Martin is commonly known as Sketch.

One of the problems I had with the lyrics for the band Corduroy is that Genius already listed other songs by “Corduroy” that are by a Korean singer 코듀로이 (which translates to “corduroy”) and a wannabe band that reused the name. Renaming artists is very tricky and way above my IQ level, but the forum participants are very helpful. “I have to say I am really impressed with the research you have done here. I will disambiguate the artists to fix this.” Awww.

(Elsewhere I blogged about the semantic confusion of translated band names matching other bands, names containing other bands, and straight up multiple bands with the same name.)

Posted in music, web | 2 Comments

art: Jhane Barnes inspires

I’ll pull out a Jhane Barnes shirt I haven’t worn in a while and I’m Ricky Fitts in American Beauty: “I need to remember… Sometimes there’s so much beauty in the world, I feel like I can’t take it.”

Jhane Barnes shirt, fabric woven in Japan

(This and black pants, perfect)

Posted in art | Tagged | Leave a comment