music: Lil Nas X and pop today

Meaghan Garvey wrote an insightful review of Lil Nas X for NPR that’s also about the state of pop music, and funny to boot!

“the exquisite schadenfreude of the biggest pop stars on the planet, including Drake and Taylor Swift, being denied the Hot 100’s No. 1 spot for three months and counting thanks to a cowboy novelty song from a former Nicki Minaj stan account operator. …
maybe for his next act he’d be a fishing boat captain, sending sea shanties with trap drums up the SoundCloud charts.”

(A “stan” is an overzealous or obsessive fan of a particular celebrity.)

Older folks, watch the video (Chris Rock!) to avoid being hopelessly out of touch 🤠🐎. The Genius lyrics site can explain some of the references, such as “Lean all in my bladder.”

Posted in music

music/audio: great recording of great playing

I listen to the NPR news summary on the main NPR page, which often features Tiny Desk concerts recorded at a not-so-tiny desk in an NPR office (and promoted by cute hedgehogs). Many are excellent. I grooved on Thundercat playing with Mac Miller, a guy I barely knew about, only to find out he’d died.

What’s impressive about these live videos is the sound. Watch this thrilling performance by the Tedeschi Trucks Band:

That’s 11 people, including two drummers, recorded in the middle of an office. It’s crazy how good it sounds! Listen to the turn starting around 9:20: from the sax solo that ends “Don’t Know What It Is” (by Kofi Burbridge, who passed away in 2019), to Trucks’ sparse transition, to Tedeschi’s spine-tingling “Aooohuoooh” opening “Anyhow” (watch how she looks at him at 10:20 💕🎶), and into that building horn chart. You can hear everything by everyone as it crests.

Well, here’s why: two really good audio engineers, Josh Rogosin and Kevin Wait, a lot of high-quality (sometimes expensive) microphones, and a lot of experience positioning them. These live performances often sound better than the bands on the late-night talk shows; performances on Jimmy Kimmel Live in particular often have thin, hard-to-hear vocals.

Troll time

Someone commented on how great the sound is; I made up this explanation, and someone else thought I was serious.

Watch the Tiny Desk videos where audio engineer Josh Rogosin talks about the care he takes selecting from a multitude of quality, some very expensive, microphones, then positioning them to get great sound from the middle of a ^%$#@! open-plan office. But… it’s all a lie! The beer bottles on the shelf behind Derek Trucks are disguised Helmholtz resonators. The shelves of vinyl LPs and books are actually CNC-milled diffusers; you never see an album out of place because they’re solid Sitka spruce. The desk divider in front of Susan Tedeschi is a $4,000 anechoic panel. The 3-D “Bob Boilen” and “All Songs Considered” cut-outs are actually bass traps stuffed with shaved yak fur to tune ceiling reflections. Etc.

That is the only explanation why Tiny Desk Concerts consistently sound better than artists on Saturday Night Live, Jimmy Kimmel Live, and the other late-night talk shows. That “office” is a $2,000,000 recording studio. The truth is out there.

Posted in audio, music

music/web: Sundays spent disambiguating

The Google Now screen on my phone does a good job presenting news relevant to me. It struck gold when it displayed “new album out now” by The Sundays, zOMG!! After 22 years, out of nowhere they deliver a new album around Harriet Wheeler’s astounding voice and David Gavurin’s chiming guitar work!

Oh noes, it’s actually an unrelated Japanese band.

[photo: SUNDAYS in 2016; these fine folk are SUNDAYS (サンデイズ)]
[press photo of The Sundays; but I don’t think they’re The Sundays]

2nd example: Google Play Music, YouTube, and Genius show Korean songs by 코듀로이 (“corduroy”) as being by the English acid jazz group Corduroy, of whom I’m a big fan. Yes, her name translates as “corduroy,” but she’s a different artist!

3rd example: GPM found a sweet cover of Burt Bacharach’s light adult pop song “Knowing When to Leave,” made in 1998 by Casino, which Google and English Wikipedia agree is a rock/alternative band from Birmingham. Crazy genre-defying work? I finally figured out that the British Casino didn’t even form until 2003 and the song is actually by an obscure Icelandic band also called “Casino” together with Páll Óskar Hjálmtýsson. It’s part of an entire album of sincere/camp/tongue-in-cheek recordings of late 1960s/early 1970s hip music that is not just in stereo, it’s called “Stereo.”

[photo: the band “Casino”, but not THE band “Casino”]

4th example, then I’ll stop: Google Now alerted me to the new album by progressive rock masters Yes, named “Chet.” Well, my hero Steve Howe is a fan of Chet Atkins, so it’s possible…


Nope, it’s obviously a different band. Come on, punctuation matters! Just because the band name has a comma in it is no excuse to get it wrong. I’m going to release music by “The Bea[Unicode ZERO-WIDTH NO-BREAK SPACE]tles” to see how many people I can scam 😉

Spotify is also confused about who made this:

[screenshot: search results for “Yes Plis Chet”. Comma? Ampersand? Confusion!]

And though Amazon seems to know it’s by “Yes & Plis,” if you ask for more about the band you can tell Amazon is commingling the songs like a shelf of widgets in its warehouse ostensibly sold by different companies.

[screenshot: Amazon Unlimited when you click the band name for “Chet”; all the classic rock plus one sore thumb]

Get a Q!

I believe Google Play Music, YouTube, and these other services rely on what the music labels provide, and/or then just do a string search. But that doesn’t work when band names are translated into English, or have weird punctuation, or contain another group’s name, or the band lazily/intentionally reuses an existing band name, …

If only there were a vendor-neutral way to identify and disambiguate entities in the world. Of course there is: Wikidata! “The Sundays” are the entity Q3122789 in Wikidata, which is an instance of a band; then some person or bot added another entity, Q17231144, that is also an instance of a band and was also labeled “The Sundays” (until I edited it, see below). Same name, two different things.
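
You can see the duplication for yourself: the Wikidata Query Service speaks SPARQL. Here’s a minimal sketch in Python (assuming the requests package; Q215380 is Wikidata’s “musical group” item) that lists every band-like item labeled “The Sundays” (now that I’ve relabeled the Japanese band as described below, only the English band still matches):

# A sketch: ask the Wikidata Query Service for every musical group
# labeled "The Sundays". P31 = instance of, P279 = subclass of,
# Q215380 = musical group.
import requests

QUERY = """
SELECT ?item ?itemDescription WHERE {
  ?item rdfs:label "The Sundays"@en ;
        wdt:P31/wdt:P279* wd:Q215380 .
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en" . }
}
"""

r = requests.get("https://query.wikidata.org/sparql",
                 params={"query": QUERY, "format": "json"},
                 headers={"User-Agent": "band-disambiguator/0.0 (a sketch)"})
for row in r.json()["results"]["bindings"]:
    qid = row["item"]["value"].rsplit("/", 1)[-1]  # e.g. Q3122789
    desc = row.get("itemDescription", {}).get("value", "(no description)")
    print(qid, "-", desc)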

So these bands can be distinguished, but actually doing it is a hard problem as long as humans are in the loop: I don’t see Japanese press release writers writing “FOR IMMEDIATE RELEASE: The Sundays (Q17231144) release new album!” so that Google can disambiguate, nor will the people who translate that press release into English (whence I assume Google got excited on my behalf) add a note “not the Q3122789 English band.” 🙂 Moreover, as I’ve written before, I’m convinced Google doesn’t actually want a semantic web where web pages tell computers what they mean; it wants a messy confused bunch of pages so that it can apply massive AI to this kind of disambiguation, so that only Google can provide good context-specific answers to questions like “What’s the last album from the Sundays?” (Also, the moment you make it easier for pages to say what they’re about, immediately a bunch of boner and diet pill pages will semantically identify themselves as “Latest news about Kardashian family” or whatever is a popular search term.) But then it’s frustrating to see Google itself get it wrong.

What’s also frustrating is that others fixed the Genius lyrics site to distinguish “Corduroy”, “Corduroy. (band)” [note the period, sic(k)!], and “Corduroy (Korea)”, but the same cleanup has to be repeated on every data-driven web site. Q numbers to rule them all!

Cleaning up Wikidata

As with Wikipedia, anyone can edit Wikidata information. The Japanese Wikipedia article that seems to have generated the duplicate “The Sundays” entity in Wikidata is actually titled SUNDAYS, so I changed the English label of Q17231144 to “SUNDAYS” and added the English description “Japanese rock band”; I also added the English description “1990s English alternative rock band” to Q3122789 to help avoid further errors.
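
(I made those edits by hand in the web interface, but they can be scripted. A sketch using the Pywikibot library, assuming Pywikibot is installed and configured with a logged-in Wikidata account:)

# A sketch of the same cleanup via Pywikibot (assumes a configured,
# logged-in Wikidata account).
import pywikibot

repo = pywikibot.Site("wikidata", "wikidata").data_repository()

japanese_band = pywikibot.ItemPage(repo, "Q17231144")
japanese_band.editLabels({"en": "SUNDAYS"},
                         summary="Match the ja.wikipedia article title")
japanese_band.editDescriptions({"en": "Japanese rock band"})

english_band = pywikibot.ItemPage(repo, "Q3122789")
english_band.editDescriptions({"en": "1990s English alternative rock band"})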

What’s odd is the Japanese band’s Wikidata page includes a bunch of identifiers for the English band in other online databases: the VIAF identifier 126826622, the Bibliothèque nationale de France identifier 13926837j, the International Standard Name Identifier 0000 0001 1087 4877, the Library of Congress authority ID n91122952, etc. All of the data in these other databases seems to apply to the English band, but all were missing from the English band’s Wikidata page. I suspect some automated bot found the Japanese “The Sundays,” incorrectly linked it to the VIAF identifier for the English “The Sundays,” and that in turn prompted other bots to add all those other identifiers to the wrong band. It seems like poor design that an entity that obviously conflicts with another “band named ‘The Sundays’” entity gets all these automated identifiers for the other thing added to it.
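
Checking which item carries which identifiers is easy, because Wikidata serves each entity as public JSON. A sketch (the property IDs are Wikidata’s real ones: P214 is VIAF, P268 is BnF, P213 is ISNI, P244 is Library of Congress):

# A sketch: dump the external identifiers attached to each "Sundays"
# item using Wikidata's public entity JSON.
import requests

EXTERNAL_IDS = {"P214": "VIAF", "P268": "BnF", "P213": "ISNI", "P244": "LoC"}

for qid in ("Q3122789", "Q17231144"):
    url = f"https://www.wikidata.org/wiki/Special:EntityData/{qid}.json"
    entity = requests.get(url).json()["entities"][qid]
    label = entity["labels"].get("en", {}).get("value", "(no English label)")
    print(qid, label)
    for prop, name in EXTERNAL_IDS.items():
        for claim in entity.get("claims", {}).get(prop, []):
            value = claim["mainsnak"].get("datavalue", {}).get("value", "?")
            print("  ", name, value)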

The Icelandic group Casino doesn’t seem to have a Wikidata page… meanwhile Wikidata already has two “Casino” instances of a band, the well-known 2000s Casino from English Wikipedia and another from Dutch Wikipedia that describes a one-off British band. As with the two “Sundays”, they have overlapping external identifiers, in fact some bot mistakenly linked both of them to the Billboard artist page for an unrelated rapper who calls himself “Casino.” And on Google Play Music, the artist “Casino” identifies as the 2000s English “rock/alternative band,” but most of the songs and tracks are clearly by black rapper(s) who adopted the moniker “Ca$ino” without caring about existing European bands.

Posted in music, semantic web

music: the Isley Brothers

3 + 3 > 5, ergo The Isleys were more talented than the Jacksons, QED.

On “The Highways of My Life,” brother-in-law Chris Jasper is immortal on piano and ARP synthesizer. Wikipedia says Stevie Wonder was making Innervisions down the hall at the Record Plant at the same time, and both worked with visionary synth engineers Robert Margouleff and Malcolm Cecil! Soul/R&B ruled 1973 (Court and Spark came out in January 1974).

Posted in music

eco: immediate action, disaster much later

We Have Five Years To Save Ourselves From Climate Change, Harvard Scientist Says

This is true, as is “We have 12 years to avoid catastrophe” and every other prediction of doom, but it is really hard to explain. We definitely won’t have 3 meters of sea level rise in 5 years or even by 2030; the climate, and shoreline, will likely be only a bit worse than now. But without exceptionally rapid and drastic action now (that is, stop burning all fossil fuels in the next 5-12 years) we will have locked in warming that takes us to that ultra-disaster of underwater cities and 200,000,000+ climate refugees over several steadily-worsening decades, and no one is sure when. At this point doubters conclude “They’re just fear-mongering liars, apocalypse is not actually going to happen in 5 years or even 12 years, nobody can really say, Algore said it would be awful years ago, still hasn’t happened.” The real crisis is always the apocalyptic climate N decades from now, avoidable only by drastic action that we should have started a decade ago. That’s a perfect recipe for defeatism and delay.

Children understand the problem, but a lot of well-meaning adults don’t 😢. Bill Nye swearing and blowtorching a globe isn’t going to change them; they’ll change when the present gets bad enough, by which time it will be far too late.

I tried to find models of the climate more than 100 years out. There aren’t any, because that climate is so different from today’s that the models don’t work. 😱 We don’t even know the current rate of warming because the climate zig-zags upwards. We’ll know in 20 years whether it was 0.1 or 0.25°C per decade. Either will have been a disaster.

Posted in eco

web: the unparalleled genius of Sir Tim

It has been 30 years since the invention of the World Wide Web.

Tim Berners-Lee stood on the shoulders of giants, but the Web wasn’t just an amalgamation of existing ideas. He and Robert Cailliau created:

  • A HyperText Markup Language, human-readable but easy enough for machines to parse and generate. A web page is a complete HTML document.
  • A means of referencing HTML pages across the Internet, called Uniform Resource Locators. But a URL can refer to any kind of resource, not just web pages but also plain text, pictures, other files, and even notions like “the last 7 blog posts by skierpage.” (And actually URLs are a specialization of Uniform Resource Identifiers, which let you refer to other protocols, e.g. mailto:skierpage@example.com?Subject=hello.)
  • A specification of the protocol by which a client requests a URL from a server computer and the server responds with the document requested, called HyperText Transfer Protocol.
  • Free open source software that implemented all this:
    • a software library implementing the protocol
    • software for an HTTP server, called httpd (hypertext transfer protocol “daemon”)
    • software to display and edit HTML pages; we now call the display part a “web browser.”

Nothing new here?

As this great 30-year summary makes clear, there was a ton of prior art.

Markup languages weren’t new

HTML identifies blocks of text as <P> (paragraph), <H1> (heading level 1), etc. and spans of text as <B> (bold), etc. The idea of marking up blocks of text instead of inserting typesetter codes for a particular printer wasn’t new, and HTML was a simplification of the existing SGML.
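
“Easy enough for machines to parse” is easy to demonstrate. Here’s a minimal sketch using nothing but Python’s standard library to pull the structure out of a scrap of HTML:

# A sketch of how little code it takes to parse HTML, using only
# Python's standard library.
from html.parser import HTMLParser

class Outline(HTMLParser):
    def handle_starttag(self, tag, attrs):
        print("tag starts:", tag, dict(attrs))
    def handle_data(self, data):
        if data.strip():
            print("  text:", data.strip())

Outline().feed('<h1>Widget 9000</h1><p>See the <a href="manual.html">manual</a>.</p>')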

Hypertext wasn’t new

In fact, in a related article John Allsopp says “Tim Berners-Lee proposed a paper about the WWW for Hypertext ’91, a conference for hypertext theory and research. It was rejected! It was considered very simple in comparison with what hypertext systems were supposed to do.” This is a fantastic historical footnote, and the conference organizers weren’t stupid!

The moment you put technical information on a screen, it is completely obvious that the reader should be able to jump to an explanation of a technical term, to jump from an entry in the table of contents or index to the section that’s referenced, and to jump from the phrase “See How to Install the Widget 9000” to… the How to Install chapter. In a former life writing technical documentation printed on paper, I looked at electronically publishing manuals using hypertext systems like Folio and OWL. The programs that displayed hypertext resembled web browsers; they even had conventions like underlines for links and, as I recall, a back button to go back after following a link.

(I wish I had a screenshot of the help system itself.)

Yet another protocol…

Protocols to access remote computers over the Internet weren’t new. There was File Transfer Protocol to transfer files, Simple Mail Transfer Protocol to deliver email (and Post Office Protocol to retrieve your new messages), and even a Gopher protocol to browse information on that computer. (Many people using the Internet at the time thought Gopher would be the glue linking between computers.)

At nearly the same time came Wide Area Information Server (WAIS), “a client–server text searching system that uses the ANSI Standard Z39.50 Information Retrieval Service Definition and Protocol Specifications for Library Applications” (Z39.50:1988) to search index databases on remote computers.

So what was new?

In a nutshell, linking within hypertext to another computer system, to possibly get more hypertext, blew people’s fragile little minds.

Those hypertext systems I mentioned operated within a local file. You opened Widget9000Setup.NFO in the viewer program and happily jumped around between sections, index, and paragraphs, but there was no “jump to the manufacturer’s server on the Internet for the latest service bulletins,” no “here’s a hypertext list of other hypertext files on the Internet about Widget 9000 customizations.” The companies selling hypertext authoring software probably fantasized about getting thousands of people to buy their proprietary software to author their own parts of a federated set of hypertexts, but they didn’t have the vision, and a single commercial vendor would have really struggled to establish their file format as a network standard.

A server is hard but powerful

Because links in HTML can go to other computers, the Web requires a separate program running on that remote computer to respond to requests (although you can open local files on your computer in your browser without involving a server). The hypertext software makers must have laughed. “So in addition to our hypertext viewer program running on the user’s computer, you have to get the I.T. department to install and run an extra software daemon to respond to all these requests for bits of hypertext? That is the stupidest and most overkill approach imaginable! Just give people the file with all 70 pages and illustrations of our Widget 9000 instruction manual in it.” Remember, by definition there were no manufacturers’ web sites yet, and people and computers communicated over slow modems. Making requests to other computers was theoretically useful, but not just to get the next little chunk of hypertext.

Because everything Sir Tim developed at CERN was open source, and because the HTTP protocol was relatively simple, and because Unix was very common on servers, it turned out that having to run an HTTP server wasn’t a big barrier.
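
How simple? Simple enough that you can speak the original protocol by hand. A sketch of an entire HTTP/1.0 conversation in a few lines of Python, with no HTTP library at all (example.com is a real demonstration host):

# A sketch of a complete HTTP/1.0 request made over a raw TCP socket:
# send a two-line request, read back the status line, headers, and HTML.
# This is fundamentally all an early web browser had to do.
import socket

with socket.create_connection(("example.com", 80)) as s:
    s.sendall(b"GET / HTTP/1.0\r\nHost: example.com\r\n\r\n")
    response = b""
    while chunk := s.recv(4096):
        response += chunk

print(response.decode("utf-8", errors="replace")[:300])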

Uniform/Universal/Ubiquitous Resource Locator

The URL itself is genius. There were computers on the Internet that you could contact, mostly run by computer companies and universities and labs like CERN. Many of them supported File Transfer Protocol, so you might be able to get a list of public files to download. Some of them even supported the Gopher and WAIS protocols I mentioned above that presented a friendlier list of files. But mostly if you connected to a computer, it was to log in and type computer commands. You could imagine a hypertext page having a “check for Widget 9000 availability” link that would connect to the company’s server as a remote terminal and maybe even simulate pressing C(heck for inventory) then typing Widget [Tab] 9000 [Enter] – all the pecking away at keyboards that staff used to do when you asked for a book at the library or checked in for a flight. But the poor hypertext author would have to write a little script for every single computer server. A URL can encode the request as a single thing that fits into the HTML page; it’s like a hypertext system’s “Jump to the Troubleshooting section” link but infinitely richer.

It’s human visible

The Widget 9000 availability URL is probably quite complex, maybe http://acmecorp.com/coyote/stockcheck.asp?part=widget9000 . But you can see it in your browser’s location field, it probably makes sense, and it is irresistibly tempting to fiddle with it, aka hack on it: what if I substitute “ferrari355” for “widget9000”?
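
That kind of fiddling is trivial to express in code, too. A sketch using Python’s standard urllib.parse (the acmecorp.com URL is, of course, my made-up example):

# A sketch of "hacking on" a URL: parse it, swap one query parameter,
# reassemble it. The acmecorp.com URL is the made-up example above.
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse

url = "http://acmecorp.com/coyote/stockcheck.asp?part=widget9000"
parts = urlparse(url)
query = parse_qs(parts.query)      # {'part': ['widget9000']}
query["part"] = ["ferrari355"]     # what if I substitute...?
print(urlunparse(parts._replace(query=urlencode(query, doseq=True))))
# -> http://acmecorp.com/coyote/stockcheck.asp?part=ferrari355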

Similarly, you can view the source “code” of an HTML page. I taught myself the rudiments of HTML just by guessing or recognizing what tags like TITLE, P, A HREF=, etc. did. You could write the markup for something simple by hand; the home page of my web site and some other sections are still prehistoric hand-written HTML. (Those golden days are gone now that most web pages are generated on-site by over-complex content management systems and each loads 10 JavaScript libraries and 7 ad networks and Like this on Facebook / Tweet this / Pin it buttons.)

The Web could subsume other systems

Because the client (usually a browser under the command of a person) makes requests to a server program, the Web can subsume or impersonate other systems. A simple computer program can output a Gopher category list or a directory listing as a basic HTML page with a bulleted list of links (more on this). As the Web gained mindshare among developers, people built the bridges to all the other protocols, and so a browser turned into a do-anything tool, and URLs became the lingua franca for any kind of request across the Internet. Thirty years of innovation built upon the simple clear underlying ideas of the Web.
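
To see how small one of those bridges can be, here’s a sketch of the directory-listing case mentioned above: a few lines of Python that turn a folder into a basic HTML page with a bulleted list of links, the way early gateways did.

# A sketch of the simplest possible bridge: render a directory listing
# as a basic HTML page with a bulleted list of links.
import html
import os

def directory_as_html(path: str) -> str:
    items = "\n".join(
        f'  <li><a href="{html.escape(name)}">{html.escape(name)}</a></li>'
        for name in sorted(os.listdir(path))
    )
    return (f"<html><body><h1>Index of {html.escape(path)}</h1>\n"
            f"<ul>\n{items}\n</ul></body></html>")

print(directory_as_html("."))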

Posted in web

music: how to contribute scanned lyrics to the web

Lyrics are everywhere on the web, yet I regularly come across popular songs whose lyrics are nowhere to be found. Sometimes I have a CD or LP on the shelf with the missing lyrics printed in it! Time to make a little more of the sum of human knowledge available… Here are my notes on the process.

Where to contribute lyrics?

Many sites let users contribute and update lyrics. Ideally there would be a non-commercial user-supported über repository of lyrics, but if there is I can’t find it. All lyrics sites seem to be ad-supported (I don’t see the ads because I use the uBlock Origin ad-blocker). The worst are the sites that optimize their pages to fake out Google search so they show up high in search results for e.g. “Linx You’re Lying lyrics,” but when you visit them the page’s only content is “Be the first to contribute the missing lyrics of You’re Lying by Linx! Kthxbye.”

LyricWiki? (No)

The obvious contender is lyrics.wikia.com. It uses the same underlying MediaWiki software as Wikipedia, but it’s on the ad-supported Wikia platform that Wikipedia founder Jimmy Wales created. It has a genuine community trying to do a good job; I made many cleanup edits and added a few songs in 2008-2011. The problem with LyricWiki is that nothing is created for you, you have to create page after page of wikitext by hand. So (using the example of adding the lyrics of Max Tundra’s Mastered by Guy at The Exchange album): first you have to add a bit of fiddly markup to the band’s page for the album:

==[[Max Tundra:Mastered By Guy At The Exchange (2002)|Mastered by Guy at The Exchange (2002)]]==
 {{Album Art|Max Tundra - Mastered By Guy at the Exchange.jpg|Mastered by Guy at The Exchange}}
# '''[[Max Tundra:Merman|Merman]]'''
# '''[[Max Tundra:Mbgate|Mbgate]]'''
...

then you have to create the album’s page with more fiddly markup listing each song all over again:

{{AlbumHeader
 |artist    = Max Tundra
 |album     = Mastered by Guy at The Exchange
 |genre     = Electronic
 ...
 }}
# '''[[Max Tundra:Merman|Merman]]'''
# '''[[Max Tundra:Mbgate|Mbgate]]'''
...

then you have to create a page for each song that points back to the album with even more fiddly markup, and then you provide the actual value, the lyrics themselves:

{{SongHeader
 |song     = Merman
 |artist   = Max Tundra
 |album1   = Max Tundra:Mastered By Guy At The Exchange (2002)
 |language = English
 |star     = Bronze
 }}
<lyrics> 
I'm feeling flirty
Must be you heard me
My knee is hurty
...

Even if you’re fluent in MediaWiki markup and templates, it is pointless, error-prone duplication to keep repeating the artist, album, and track name on every page. Instead, adding a lyric should be a single database action that automatically adds the song to the artist’s page and the album’s page.

So Genius!

Genius came out of annotating rap lyrics. It has a nice interface for adding song lyrics to albums, a solid community, and lets people comment on songs and individual lines. So I went there.

Scanning and converting to text

On my all-in-one printer I scanned the record sleeves and CD booklets with the lyrics at high resolution and saved them as PDFs. Then I used gImageReader-qt5 for Linux to do optical character recognition. This works impressively well! It handled blue-on-pink text, and it automatically identifies each block of text. You delete the blocks you don’t want recognized, such as image captions and “Thanks to Kev and Fender guitars”, then trigger OCR, and it gives you a big chunk of recognized text.
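
(gImageReader is a front end to the Tesseract OCR engine; if you’d rather script the step, here’s a sketch using the pytesseract wrapper. It assumes the tesseract binary plus the pytesseract and Pillow packages are installed, and “sleeve.png” is a hypothetical scan.)

# A sketch of the OCR step scripted with Tesseract instead of the
# gImageReader GUI. "sleeve.png" is a hypothetical scanned record sleeve.
from PIL import Image
import pytesseract

text = pytesseract.image_to_string(Image.open("sleeve.png"))
print(text)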

Case conversion

Some lyrics that I scanned were printed entirely in UPPER CASE. There are many ways to convert case, but the wrinkle is I want the first word of each line to remain capitalized; also, a bit of smarts about proper names, the word “I”, and such would be nice. I found that the web page https://convertcase.net does the right thing in its Sentence case mode; it saved me hacking up my own tool. The other nice thing about web-based converters is the textarea with the converted text is in the browser, so Firefox highlights many misspellings due to mis-recognition, such as “allbi” instead of “alibi.”
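
(If that site ever disappears, the core of the conversion is only a few lines. A sketch handling just the first-word and standalone-“I” rules; proper names would still need hand fixing:)

# A sketch of sentence-casing ALL-UPPER-CASE lyrics: lowercase each line,
# re-capitalize its first letter, and keep standalone "i" as "I".
# Proper names still need fixing by hand.
import re

def sentence_case(lyrics: str) -> str:
    lines = []
    for line in lyrics.splitlines():
        line = line.capitalize()            # "MY KNEE IS HURTY" -> "My knee is hurty"
        line = re.sub(r"\bi\b", "I", line)  # lone "i" and "i'm" -> "I", "I'm"
        lines.append(line)
    return "\n".join(lines)

print(sentence_case("I'M FEELING FLIRTY\nMUST BE YOU HEARD ME"))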

De-Unicode-ization

Genius wants simple ASCII for lyrics: simple quotation marks, hyphens not em-dashes, no ligatures like ﬁ, etc. Unfortunately gImageReader doesn’t have an option to output only simple ASCII. To find the problematic characters, I used this command line to search for any character that isn’t ASCII:

rg '\P{ascii}' *lyrics.txt

(rg is ripgrep, a better text search program than the venerable grep.)
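
Fixing what rg finds can be scripted too. A sketch that flattens the usual suspects (curly quotes, dashes, ellipses, ligatures) down to plain ASCII:

# A sketch of flattening OCR'd Unicode (curly quotes, dashes, ligatures)
# to the plain ASCII Genius wants. NFKD normalization expands ligatures
# like "fi"; the table handles punctuation that NFKD leaves alone.
import unicodedata

ASCII_MAP = str.maketrans({
    "\u2018": "'", "\u2019": "'",   # curly single quotes
    "\u201c": '"', "\u201d": '"',   # curly double quotes
    "\u2013": "-", "\u2014": "-",   # en- and em-dashes
    "\u2026": "...",                # ellipsis
})

def asciify(text: str) -> str:
    text = unicodedata.normalize("NFKD", text).translate(ASCII_MAP)
    return text.encode("ascii", "ignore").decode("ascii")

print(asciify("My \u201calibi\u201d \u2013 the \ufb01ne print\u2026"))
# -> My "alibi" - the fine print...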

My IQ goes up, the kudos roll in

To keep us unpaid suckers working, Genius has gamified (horrible word) contributions in the form of “IQ points.” When you add a wanted song, you get points. When you identify the song parts (verse, chorus, bridge, etc.) you get more points. More points give you more rights – I can now add a new song and edit a track list, but I still can’t add an entire new album or state that Peter Martin is commonly known as Sketch.

One of the problems I had with the lyrics for the band Corduroy is that Genius already listed other songs by “Corduroy” that are by a Korean singer, 코듀로이 (which translates to “corduroy”), and by a wannabe band that reused the name. Renaming artists is very tricky and way above my IQ level, but the forum participants are very helpful. “I have to say I am really impressed with the research you have done here. I will disambiguate the artists to fix this.” Awww.

(Elsewhere I blogged about the semantic confusion of translated band names matching other bands, names containing other bands, and straight up multiple bands with the same name.)

Posted in music, web

art: Jhane Barnes inspires

I’ll pull out a Jhane Barnes shirt I haven’t worn in a while and I’m Ricky Fitts in American Beauty: “I need to remember… Sometimes there’s so much beauty in the world, I feel like I can’t take it.”

[photo: Jhane Barnes shirt, fabric woven in Japan. This and black pants, perfect]

Posted in art

web: when it comes to paying, backward USA! backward USA!

Someone asked me to donate to the Center for Youth Wellness, a worthy cause. Give Lively, a free fundraising system, organizes the donations at https://secure.givelively.org/donate/center-for-youth-wellness/friends-name . The donation page’s “Suggested donation method” is “Donate by bank account” because “We get more from your donation when you pay via your bank account.” The web page draws my bank’s logo and asks me to enter my bank username and password, but I am not on my bank’s web site.

WTF? Y’all have got to be kidding me!

What’s actually happening is that a third[*] company called plaid.com (never heard of ’em; some tech-bro fintech startup) actually displays the form, technically within an <IFRAME> in Give Lively’s page. I don’t care what security promises plaid.com makes; they are completely insane if they think asking for my bank username and password on their (nested) web site is acceptable. Basic web security: never ever enter your username and password for one web site on another web site. If the browser’s location field doesn’t display yourbankname.com with a padlock icon, don’t do it! But much the same way every company says “don’t trust links in our name that go to other web sites” until they send you a survey or ad that links to crappyThirdPartyMarketingCompany.com and hope you ignore their own advice, somehow it’s OK to fake customers out because it’s for a worthy cause.

It’s nobody’s fault, though Plaid sure has some chutzpah. Neither the worthy charity nor I want PayPal and some credit card company delaying funds and skimming money off my charitable donation. Give Lively doesn’t have the in-house expertise to organize a bank transfer, so they hand it off to Plaid. Plaid undoubtedly got frustrated trying to organize bank transfers with every stodgy bank under the sun, so they decided to present as my bank, ask for my login, and then order a low-cost Electronic Funds Transfer by impersonating me.

But what makes no sense is: why can’t I give my bank the same transfer instructions, the ones Plaid wants to carry out by impersonating me? Well, if I could, there would be less need for Plaid to be working away in the background. In every other developed country, you just log in to your bank and tell it to give money to any person or organization, and It Just Works without any third parties or handing out your credit card details to strangers. I’m not sure why the USA makes it so complicated, to no one’s benefit but the middlemen’s. There’s PayPal’s Venmo (big fees), and more banks are supporting Zelle, but it feels like the USA is a decade behind.

Obviously I’m not the only person freaked out by this, see e.g. this Hacker News thread. To its credit, Plaid has an open bug tracker on GitHub, in which issue 68 is about exactly these privacy/security concerns. Huffington Post has an entire article about whom you should trust with your banking sign-on. That article says “[Plaid is] a system used by most personal finance apps, like Venmo, Robinhood and Acorns. Plaid, in turn, is trusted by a long list of banks and credit unions.” Give Lively responded on Twitter: “Plaid is a secure & trusted industry-leading service that allows donations via bank account. Your bank is on the Plaid platform—like most US-based banks—because it trusts Plaid.” But I don’t see any indication on my bank’s site that it actually trusts Plaid. I just have to hope that if my bank didn’t trust Plaid (or, more likely, stopped trusting Plaid), it would revoke whatever API authorization it gave Plaid so transfers would fail. And again, a company using a third party that uses my credentials to ask my bank to do some transaction for me is completely back-assward. It only works this way because everyone involved is too ^%$#@! lazy to do the right thing, which is: I go to my bank, authenticate myself, and tell my bank to give someone money using the bank details they gave me.

One other thing: Give Lively/Plaid’s interface defaults to making a monthly donation. If you don’t pay attention, Plaid will be taking money out of your account forever, even if that wasn’t your intent. You are not in the driver’s seat; the organization wanting your money is, and its temptation is overwhelming to tweak the system to maximize the amount you donate, including defaulting to monthly giving.

[*] Update: I was wrong, there’s a fourth company. Give Lively uses Stripe, Stripe takes 0.8%, Stripe uses Plaid. Crazy.

Posted in web

software: how electronic medical records could be better

On an Ars Technica story about AI in hospitals, “goofazoid” commented:

There are so many things that have set this environment up, mostly having to do with hospitals attempting to save money (nursing is usually the largest expense in a budget) by having fewer nurses. This has been compounded by data systems (like CHCS/Alta, Meditech) that are not user friendly and take longer to document in than a paper record would, while having lower fidelity than a paper record would.
I think that every nursing unit should probably have at least 2 more nurses/shift, and if you want really good documentation… switch to a tablet type device that can do many time saving things:
-use a smartcard and pin to log in
-use the camera to scan the pt armband so that you don’t have to search for your pt’s records
-use the camera to scan medication bar codes to document
-use the camera to scan blood products bar codes for safety and documentation (either two nurses with two different devices would scan to complete the documentation or the second nurse could scan their badge and type a pin)
-sync vital signs from the device measuring them to the chart
-gives lab results as soon as they are available, and uses the minimum number of alerts to minimize alarm fatigue
-uses the NFC to do things like program IV pumps (directly from the orders), collect amount infused, etc
-has a low power laser range finder to be used in conjunction with the camera to take pictures of wounds, dressings, drainage etc. (the range finder allows a 1x1mm grid to be accurately superimposed on the image)
-has the ability for nurses to use a BT headset to dictate notes rather than typing
on top of all of that, there needs to be a requirement that physicians use a system that is similar- NO F***ING PAPER! If orders are written on paper they must be transcribed and errors can then occur; same goes for phone orders. There is no reason that the MD/NP can’t put the orders in from a handheld device, even when offsite.
-have it set up so that if there is a lab out of range, it messages the provider who can then enter an order (triggering an alert for the nurse)
-have it set up so that nurses can text providers about issues that need addressed less than urgently
-allow diagnostic images to be viewed (EKG, x-ray, CT scan, MRI, Echo-cardiograms, sonograms etc)
The whole concept is to make the system user friendly, interconnected and safe…

My response:

Why aren’t Electronic Medical Records companies throwing money and resources at you to make all this happen?!!!

I recently read the great Atul Gawande’s Why Doctors Hate Their Computers and it is so depressing how far these systems are from helping doctors do a better job: zillions of automatic alerts that everyone ignores, people Select all – Copy – Paste entire reports into fields instead of writing a summary, the choice between spending precious time with a patient staring at a screen or spending hours at home doing data entry, … (He concedes the systems are benefiting patients who review their records.)

Here’s my idea: instead of every data-entry field being a chore, it should be a just-in-time avenue for understanding. If it’s multiple choice, every previous entry in that field should be shown in a cloud showing the history and the most common values for this patient; if it’s a number, show a sparkline graph of previous readings that highlights divergences from typical results. Etc. It’s stupid to rely on doctors reviewing previous records; instead, the systems should show trends and flag non-standard data in real time as people enter it.
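
The sparkline half of the idea fits in a dozen lines. A sketch that renders a run of readings as Unicode block characters and flags values outside a normal range (the readings and the 60-100 range are made-up illustrative numbers):

# A sketch of inline sparklines for a patient's previous readings:
# scale values onto Unicode block characters and flag out-of-range ones.
BLOCKS = "▁▂▃▄▅▆▇█"

def sparkline(values, lo, hi):
    vmin, vmax = min(values), max(values)
    span = (vmax - vmin) or 1
    bars = "".join(BLOCKS[int((v - vmin) / span * (len(BLOCKS) - 1))]
                   for v in values)
    flags = "".join(" " if lo <= v <= hi else "!" for v in values)
    return bars + "\n" + flags

# e.g. the last eight heart-rate readings, normal range 60-100 bpm
print(sparkline([72, 75, 71, 74, 78, 96, 118, 122], lo=60, hi=100))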

And obviously, blockchain!

Posted in software