software: misplaced Linux desktop angst, just make HTML5 great

Linux users and developers bemoan the few apps available for their desktop. “Where are the app developers?” is a typical lament. But every commercial next-gen platform going up against Apple/Android — BlackBerry X, Tizen, even Windows — has figured out how to deal with low developer interest: run HTML5 apps. Only the tiny Linux desktop community persists with the fantasy that “if we had better docs, or easier development tools, or less fragmentation, we’d get developers.”

Here’s my response to that particular post (damn site won’t accept OpenID responses):


You’re correct to identify the Mozilla app store as an open alternative, but then you return to “Gnome apps”. Focus on making Gnome an outstanding environment to run HTML5 apps, inside or outside a browser. Cheese, GCalctool, GCompris, Gnucash, Tomboy — anything not a hardcore system tool — are all going to fall behind HTML5 apps using the newest web APIs. Which is fine so long as those apps can run offline, don’t leak privacy, can interoperate using open file formats, etc.; so get ahead of the trend and encourage the HTML5 apps that adopt open source values, even though there’s nothing tying them to Gnome.


 

Posted in open source, software, web | 1 Comment

cars: BMW i8 climbs down from greatness

The good news is some version of the BMW i8 will probably enter production. It’s still a looker, but the Vision EfficientDynamics concept was beyond belief, easily the greatest concept car so far this millennium. (Watch the designers talk about their baby — damn, BMW removed that video; here’s another, not as good.) Obviously the concept’s clear lower door panels wouldn’t make it to production and the high-mounted mirrors were unlikely, but the pre-production car also loses the ‘u’ laser headlights and the folded bodywork floating on the glass. For over €100,000 BMW should deliver more of the awesome.

Comparing a pre-production rendering with the identical view of the concept at BMW makes me sad:

Now the rear looks especially weak: the rear quarter lights resemble the 300ZX’s, the rear fenders lost their bulbous teardrop shape, and to compensate the side profile from the doors rearward is a flame-surfaced mess. The front has an ugly black center opening under the BMW kidney grille, so the black cuts no longer look like roads curving into the distance. The Camaro’s headlamps look closer to the concept than these generic eyebeams. Etc.

I’d make a pilgrimage to see the original, like the Guggenheim Museum Bilbao.

Posted in cars, design | 1 Comment

software: accelerating the development of EVERYTHING

Draft: incomplete thoughts and missing links, but I just had to get something out…

Marc Andreessen said software is eating entire industries, and from my own experience he’s really onto something.

Most people heard about Facebook buying Instagram for one BILLION dollars, with predictable shock. The NYTimes is 161 years old, Instagram is 557 days old with $0 revenue, and both are worth $1bn (Instagram is worth $77M per employee). But what most overlook is the mountain of stuff the 13 employees at Instagram adopted. The list of technologies they integrated to be able to serve 30M users is dizzying. Nearly all the software is free, and the services they pay for (storage, servers, etc.) are pay-as-you-go and cheap because they also leverage free software.

I assume any and all of the improvements Instagram made to their tools are free and were developed in the open as shared development. It would cost them more to keep them in-house. Anyone in the world can follow their steps to provide an app to 30,000,000 people.

…XX more on github and shared development

Rapid development spreads

Now it’s spreading to management. A guy runs a 40-person company remotely using online tools, web-based project management, private messaging to teams, etc. No human resources department, no middle managers, no waste.

You can see the tools and systems that software development is producing to improve & accelerate itself are spilling up into general management, and over into other industries. This is Andreessen’s insight. Links needed. Will dinosaurs in construction/health care/education notice the blurry figures zooming past them?

Accelerated software slims down and shakes off layers

The unavoidable reality for me is I made a decent living interfacing developers with the outside world: writing documentation, testing software before it goes out, providing tech support for outside developers, packaging SDKs (even burning the CD-ROMs), feeding bugs and enhancement ideas back to them. But the “open” movement destroys the whole notion of outside developers. (Link to old Cluetrain manifesto.) And software development inevitably develops the tools to accelerate and minimize all the tasks that get in the way of pure programming, including the jobs I performed. GitHub does them for you:

  • the site integrates simple markup and implements a wiki so project developers can easily make their own online documentation
  • developers can skip the manual and view the code to figure out how it works
  • anyone can contribute documentation
  • anyone can file an issue
  • the site builds a download of your project
  • And critically, in all these areas an outside developer can use git’s distributed development to say “Here’s my fix to the code/documentation/examples/tests, please review and incorporate it,” so improvements can happen hourly.

I can code, so all these tools help me do a better job at it just as much as they help a stone-cold genius programmer. But my velocity remains underwhelming compared with a superstar. Every company is desperate to hire someone like Instagram’s 13, but the world is full of me.

My dream is I take my understanding of how rapidly systems can develop back to some stodgy institution (Gibson quote: “The future is here but it’s unevenly distributed”) and say “Talented people can pump out improvements unimaginably fast… I’m not super-talented myself, but I’ve seen how the greats do it and can encourage the process by writing things up on your wiki.”

Browser-based accelerates faster than all

This distributed development in the open benefits anything that can be expressed in 1s and 0s: not just software but textbooks, media, genetic sequences (!), computer-aided design, and soon even physical parts (with 3-D printers).

Almost without exception you’re using all these tools in a browser, so if the end result of accelerated development is something that itself runs in a browser, it’s completely friction-free. Every moment someone makes an improvement, you get a better web application.

Posted in software | 1 Comment

design: Downton Abbey’s sartorial nirvana

Downton Abbey is high-class underwritten clichéd claptrap (I thought the smug writer-producer mahvelling over his work in the BBC parody really was Julian Fellowes), but season 1 episode 5 achieves sartorial nirvana. Lady Mary’s charcoal cutaway riding outfit, the men’s cream day suits, then at 0:41 housemaid Anna’s chevron-frilled linen coat and gray shirt blow the roof off.

chevron-frilled linen nirvana
C’mon baby, put some more clothes on!

Posted in design | Leave a comment

web: Wikipedia editing ideas

Wikipedia wants to attract and retain editors. “Ideas are cheap, implementation costs,” but here are my thoughts anyway as a long-time but sporadic contributor. For all I know these have been discussed at length and rejected or deferred…

New pages

The original WikiWiki vision was to encourage wiki links to nonexistent pages; if you then follow the link you jump straight into editing the missing page. Wikipedia has interposed a “Wikipedia does not have an article with this exact name” page in this workflow, and disabled the creation step altogether for users who aren’t logged in, but the emphasis is still on page creation.

With a mature web site, most of the time people shouldn’t create new pages. The information is usually already there under a different name or, more likely, as a section on another page; or it doesn’t qualify for inclusion. My impression is most new pages get an immediate delete or a flag as Article for Deletion, and that is extremely demotivating (judging from Slashdotters whining about their deleted pages).

Don’t delete new pages, redirect

=> Rather than deleting, encourage redirects to sections, and make creating them easier.

(Does Wikipedia have a policy that every significant topic in the world should have a web page? Perhaps it should, especially for languages with less coverage than the English Wikipedia. It seems forward-thinking, given that projects like Wikidata will allow making statements at a finer granularity than existing Wikipedia pages.)

=> Instead of deleting pages, maybe add to them a redirect to a more general article (the obscure character redirects to the TV series’ page, the minor local band redirects to the musical genre, the lesser novel redirects to the novelist). The redirect page could even keep the original text but show it in a pink section: “This is {{reason for deletion}} so is not its own article, hence it redirects to [[Real page]], but here is the original material”. People would be free to link to this &redirect=no page, it would remain part of the Wikipedia site, but it would not be part of the Wikipedia corpus of knowledge presented to the world. This “thanks but redirect” makes deletion less of a slap in the face and more of an “OK, but…”. It also solves the difficulty of seeing a deleted page’s contents. It might be a good way to let pages develop.

Editing existing pages

References preview

I usually edit a section to avoid accidental changes to other parts and because it’s faster. But when you preview a section edit there’s no References/{{reflist}} section, so you have no idea whether your {{cite web}} works or not.

=> When previewing a section, add an uneditable References preview (an opportunistic {{reflist}}) that shows any references in the section. (Tricky case: references that are only spelled out in other sections, i.e. <ref name="Other Section">.)

Accidental edits to other parts of the page

=> It would be good if preview summarized how many “lines” you changed or added, or even tried to tell you which section(s) you changed. This would encourage better edit summaries. (Note most Wikipedia markup is paragraph-oriented, not line-oriented, so a straight diff is often misleading; maybe there’s a sentence-oriented diff.)

cite-o-matic sentence editor

Often when I’m editing a page I’m just updating it with some new information. Imagine an editor that only preps a single sentence for inclusion. It starts with

As of June 2012, your text here {{citation needed}}

If you click on the {{citation needed}} it brings up a {{cite web}} template editor that helps you build the citation, and also encourages best practices: finding original sources, using permanent links, removing extraneous query string parameters, etc.

The downside is this encourages fragmentary additions rather than article-wide holistic re-editing. But does the Wikimedia Foundation want more editors or better page improvements?

Social page improving

Wholesale page improvements are daunting. They’re hard to get right, and the instigator ought to be soliciting feedback and participation before submitting a major page reorganization.

The coding community has solved this. On github, you can suggest an improvement as an Issue, people can chime in, you or others can develop it in a fork of the project and invite review, and eventually someone can pull some or all of the discrete changes comprising the update into the main version. You can also run an Etherpad or Google Doc and have real-time archived comments and chat as you improve. It’s a collaborative community process. On MediaWiki, not so much:

  • you can make a version of the page as a subpage of your User: page, but it has no connection with the original
    • you can’t diff it with the original
    • nobody knows you’re working on it unless you mention it on the Talk page
  • there’s no “Talk page for proposed revision Xxx”
  • there’s no Page Review tool
  • there’s no way to apply a series of edits from your test version of a page back to the main page
  • there’s no Issue/Feature tracking for a page. (Do the current Article Feedback Tool ratings appear in a page quality section?)

 

Posted in web | Leave a comment

web: automating bits of Sprint Picture Mail

(The good programmer is lazy; instead of typing something over and over, she writes a script to automate the steps. The wizard programmer writes a set of object classes that model the problem at hand, and tells those objects to behave. The genius programmer writes a new language suitable for the problem domain.)

I wanted to automate getting info about my pictures from the soon-to-be-discontinued Sprint Picture Mail website.

The script so far

  1. Log in at the special log-in URL http://pictures.sprintpcs.com/downloadmymedia
  2. In a new tab, visit http://pictures.sprintpcs.com/ajax/view/getAlbumListResults.do?sortCode=5 which should produce a lot of [ {"panel":"albums", "url": ..., stuff.
  3. Select the entire following one-line bookmarklet, copy it, switch back to the tab in your browser where the [{“panel” blah blah stuff is, paste it into the location field, then press [Enter].
javascript:var linkWin=open("","albums","width=400,height=600,scrollbars,resizable"),lw=linkWin.document,isJSON=!0;try{albums=JSON.parse(document.body.innerHTML.replace(/\\'/g,"'"))}catch(e){isJSON=!1}if(isJSON&&typeof albums=="object"&&Array.isArray(albums)&&albums.length>0){baseURL=document.baseURI.match(/(.*?\/\/.*?)\//)[1],lw.write('<base href="'+baseURL+'" target="_blank">\n'),lw.write("<ol>\n");for(i=0;i<albums.length;i++){var album=albums[i];lw.write("  <li>"+album.title.link("/ui-refresh/getMediaContainerJSON.do?componentType=mediaDetail&sortCode=17&containerID="+album.containerID)+" - "+album.coverTitle+"</li>\n\n")}lw.write("</ol>\n")}else lw.write("<p>window contents do not appear to be Sprint Picture Mail JSON info</p>\n"),lw.write('<p>Try visiting <a href="http://pictures.sprintpcs.com/ajax/view/getAlbumListResults.do?sortCode=5">http://pictures.sprintpcs.com/ajax/view/getAlbumListResults.do?sortCode=5</a></p>\n');

(If you want to read the code, here’s the original un-uglified easier-to-read version.) This should pop up a new browser window that either lists your Picture Mail albums, or tells you what to do.
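In case that link goes away, here’s roughly the same logic beautified, with comments added; it’s just the one-liner above reformatted, not new functionality:

var linkWin = open("", "albums", "width=400,height=600,scrollbars,resizable"),
    lw = linkWin.document,
    isJSON = true;
// The current tab should contain the raw album-list JSON; undo Sprint's invalid \' escaping.
try {
  albums = JSON.parse(document.body.innerHTML.replace(/\\'/g, "'"));
} catch (e) {
  isJSON = false;
}
if (isJSON && typeof albums == "object" && Array.isArray(albums) && albums.length > 0) {
  baseURL = document.baseURI.match(/(.*?\/\/.*?)\//)[1];
  lw.write('<base href="' + baseURL + '" target="_blank">\n');
  lw.write("<ol>\n");
  for (i = 0; i < albums.length; i++) {
    var album = albums[i];
    // Each album title links to the JSON details for that album.
    lw.write("  <li>" + album.title.link("/ui-refresh/getMediaContainerJSON.do?componentType=mediaDetail&sortCode=17&containerID=" + album.containerID) + " - " + album.coverTitle + "</li>\n\n");
  }
  lw.write("</ol>\n");
} else {
  lw.write("<p>window contents do not appear to be Sprint Picture Mail JSON info</p>\n");
  lw.write('<p>Try visiting <a href="http://pictures.sprintpcs.com/ajax/view/getAlbumListResults.do?sortCode=5">http://pictures.sprintpcs.com/ajax/view/getAlbumListResults.do?sortCode=5</a></p>\n');
}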

Then for each album,

  • right-click the link and Save Link As… Album name.json. This file contains details for each picture in the album (description, creationDate, etc.)
  • If it has fewer than 50 pictures and videos combined, then back in your SPM tab, drag the album’s thumbnail into ‘Download to my PC’

Bookmarklet background

Once you’re logged in to SPM, a bookmarklet can modify the contents of a web page to do something useful.

Googling found a suitable bookmarklet half-way down a page of bookmarklets from 2003:

javascript:WN7z=open('','Z6','width=400,height=200,scrollbars,resizable,menubar');DL5e=document.links;with(WN7z.document){write('<base target=_blank>');for(lKi=0;lKi<DL5e.length;lKi++){write(DL5e[lKi].toString().link(DL5e[lKi])+'<br><br>')};void(close())}

Copy and paste this babble into your location bar and press enter, and it pops up a new window that lists all the URLs on a page!

It’s hard to read because a bookmarklet is a URL, just like http://example.com/path but starting with javascript: , so it has to be a single line. The first thing to do is find a tool to convert between compressed rubbish and a decent multi-line program… UglifyJS. Yay, more tools in JavaScript; it’s taking over the world.

npm install uglify-js
sed 's/javascript://' < pcreview_bookmarklet \
  | ./node_modules/uglify-js/bin/uglifyjs -b \
  > spm.js

Now I’ve got a multi-line JavaScript file. To turn this back into a bookmarklet,

./node_modules/uglify-js/bin/uglifyjs spm.js \
   | sed 's/^/javascript:/'

I then paste the contents of the resulting single-line JavaScript into my browser’s location bar; to automate this step, pipe it into a clipboard utility such as xsel -b.

Now, to do the coding…

Generating an album list

Once logged in, a useful URL to visit is http://pictures.sprintpcs.com/ajax/view/getAlbumListResults.do?sortCode=5

This returns a JSON structure of all my picture albums. To parse it, just

albums=JSON.parse(document.body.innerHTML);

This fails! Sprint took the title of one album, 2512 house (DL’d!), and turned it into

"title": "2512 house (DL\'d!)"

but that is invalid JSON: you don’t put a backslash in front of a single quote.

OK, so

albums=JSON.parse(document.body.innerHTML.replace(/\\'/g, "'"));

Now iterate through this and, for each album, fabricate

  1. a  link to its details
  2. and if it has fewer than 50 pictures, a link to download it.

The first is easy, since my research found the URL that returns the album detail is just /getMediaContainerJSON.do?componentType=mediaDetail&sortCode=17&containerID=MMMMMMMM

The second is much harder, as SPM seems to fabricate a hidden form that enumerates the elementIDs and extensions of pictures to download, and then dynamically modifies this to turn it into a POST to a /downloadzip/sprintpictures.zip?machineid=xxxx URL. This is hard to do with a bookmarklet.

Parsing an album’s media details

Once you’ve saved an album’s details as Album name.json  you want to walk the structure looking for interesting info and warning about unexpected values. You could do this in another browser bookmarklet, but I want to change the names and modification times of files based on the info, and you can’t perform those operations in browser JavaScript. But the nice node.js engine can. The script is here.
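The linked script does the real work; here’s a minimal sketch of the idea, assuming the saved file parses to an array of items with the elementID, mediaType, description, and creationDate fields shown in my other post (adjust if the JSON wraps them in an outer object; the file name sketch.js is just for illustration):

// sketch.js -- rename downloaded NNNNNNNN.jpeg files to their SPM captions
// and set their modification times to SPM's creationDate.
var fs = require('fs');

var items = JSON.parse(fs.readFileSync(process.argv[2], 'utf8'));

items.forEach(function (item) {
  // The ZIP names each file after its elementID.
  var ext = item.mediaType === 'IMAGE' ? '.jpeg' : '.mov';
  var oldName = item.elementID + ext;
  if (!fs.existsSync(oldName)) {
    console.warn('missing: ' + oldName);
    return;
  }
  // Use the caption (description) as the new name, falling back to the old one.
  var newName = (item.description || String(item.elementID)) + ext;
  fs.renameSync(oldName, newName);
  // creationDate looks like "Nov 9, 2005", which Date() understands.
  var when = new Date(item.creationDate);
  fs.utimesSync(newName, when, when);
  console.log(oldName + ' -> ' + newName);
});

Run it from the directory holding the unzipped pictures, e.g. node sketch.js "Album name.json".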

Attempts to trigger downloads

Getting album and picture metadata is nice, but the most time-consuming part is selecting and dragging albums and pictures to [Download to my PC].

I started writing a bookmarklet that would make SPM’s download form’s hidden form fields visible and label them, so you could then fill it with your own element IDs and drive your own arbitrary download. But you’d have to interactively step in to provide the machine ID, download URL, etc.

It would be better to do it by driving the whole process from node.js. The script takes a login and password, supplies them in a request to some SPM login URL, then remembers the cookies sent back by SPM and uses them in subsequent requests. Useful links for this: How to POST in node.js, logging in with cookie.
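For the cookie-handling part, a bare-bones sketch with node’s http module looks something like the following; the login path and form field names are placeholders, since (as I explain next) I haven’t found the ones SPM actually accepts:

var http = require('http');
var querystring = require('querystring');

// Placeholder credentials and field names -- guesses, not SPM's real form.
var body = querystring.stringify({ username: 'me', password: 'secret' });

var req = http.request({
  host: 'pictures.sprintpcs.com',
  path: '/someLoginUrl.do',   // placeholder: finding the real login URL is the hard part
  method: 'POST',
  headers: {
    'Content-Type': 'application/x-www-form-urlencoded',
    'Content-Length': Buffer.byteLength(body)
  }
}, function (res) {
  // Keep whatever Set-Cookie headers come back (e.g. pmjsessionid) and
  // replay them on every later request.
  var cookies = (res.headers['set-cookie'] || []).map(function (c) {
    return c.split(';')[0];
  }).join('; ');

  http.get({
    host: 'pictures.sprintpcs.com',
    path: '/ajax/view/getAlbumListResults.do?sortCode=5',
    headers: { 'Cookie': cookies }
  }, function (res2) {
    var data = '';
    res2.on('data', function (chunk) { data += chunk; });
    res2.on('end', function () { console.log(data); });
  });
});
req.write(body);
req.end();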

But I can’t figure out SPM login; it’s a bear. Their login form’s <noscript> action is to call authenticate.jsp, but instead it does an AJAX in-page call to a different login URL, which returns script to check whether cookies are enabled and then redirects. I tried both URLs using curl and node.js, and even though I get a pmjsessionid cookie back, my subsequent requests fail. SPM must be looking for a particular user agent or a particular sequence of requests, or something else. Help wanted!

To Dos

These kinds of explorations reveal shortcomings in the source material. In an open source read-write web, you can fix them:

  • Rewrite the Wikipedia bookmarklet article: “it has been found that passive voice is to be avoided”; also note you can just paste the code in without creating a bookmark.
  • File an issue for UglifyJS to support a --bookmarklet option to prepend javascript: .
  • UglifyJS needs a --help option that explains its arguments.
  • So why isn’t node.js built into my browser? It’s all JavaScript.
Posted in software, web | 2 Comments

web: getting your pictures off Sprint Picture Mail

Sprint Picture Mail was an easy way to upload pictures from a feature phone to a web site where you could organize them into albums. But Sprint was never a serious player in the “online photo site” business. I criticized its poor UI back in 2006, then they made some improvements that only worked in Internet Explorer, and more recently they made it into a single-URL gallery-viewer style application. Then Sprint decided to pack it in.

Amazingly, Sprint never texted my phone to say “We’re discontinuing Sprint Picture Mail”! I happened to glance at their irritating SprintZone message center on my smartphone and noticed a message about my e-mail address @pm.sprint.com going away (one I’ve never used), which led me to the announcement that the actual site was being discontinued as of April 30, 2012… which I read in May! Fortunately they have extended “Last Chance” site access for a few more weeks, until June 18, 2012.

Sadly Sprint makes it hard to just grab all your pictures and videos, and you’ll lose some information without some extra work. I couldn’t find any information or a script to do the right thing, so here are my ongoing notes on the process. Please comment.

Downloading the images

Sprint has a FAQ/guide with information about downloading, but it has some omissions.

  • The front page and usual log-in URL to the site just say it’s been discontinued. You must use the special log-in URL http://pictures.sprintpcs.com/downloadmymedia
  • The easiest way to grab your pictures is to drag the thumbnail of an album from “My Picture Mail Albums” to the “Download to my PC” box at the top right, but this only works if the album has 50 or fewer photos! Also, this may not work for your Upload folder.

If an album has more than 50 pictures, it’s probably easiest to click the album thumbnail, then:

  1. Ctrl+Click each thumbnail on the page until all 21 are selected
  2. Drag the group to the “Download to my PC” box.
  3. Name the zip file Album_name_page1.zip.
  4. Advance to the next page and repeat, naming the download Album_name_page2.zip

After I’ve downloaded all the pictures in an album, I rename it on SPM to Album name (done).

File names

The individual pictures in the downloaded ZIP file get a random NNNNNNN.jpeg (or NNNNNNN.mov for videos) file name. It corresponds to the image’s identifier on SPM; while logged in, you can view this picture at http://pictures.sprintpcs.com/i/NNNNNNN (the extension you use doesn’t matter). Normally when you view pictures on the site the image URL has stuff on the end of this specifying a size and quality and other things, e.g. /i/NNNNNNNN_75_0.jpg?outquality=56&ext=.jpg&255,255,255,1,0,0,0,0&squareoutput=245,245,245&aspectcrop=0.5,0.5,1.0,1.0,1.0

The image you see at the bare image URL is almost, but not quite, the same as the image file in the ZIP download. Using the fabulous ImageMagick identify command-line tool with its -verbose option, I found an image saved this way had a quality of 90, while the one in the downloaded ZIP had a quality of 93. So don’t use a browser extension like “DownThemAll” to simply grab pictures from the site; you have to drag thumbnails to Download to my PC.

Unfortunately Sprint’s file name is not the file name from the camera, e.g. one featurephone I owned would name a picture taken on June 29, 2010 “100629_152333.JPG”. This makes life tough if you grabbed some pictures off your featurephone by plugging it in over USB or using Bluetooth file transfer; you’ll already have some of the pictures from SPM, but under different names.

Lost metadata

JPEG files on your camera can contain all kinds of metadata: the date you took the picture, whether it’s rotated, the camera settings, sometimes the location you were in. (When you upload pictures to Facebook you should remove this identifying material; I use ImageMagick’s convert -strip option or Photoshop Elements’ Save for Web….) Unfortunately I don’t have original pictures from these cameras to compare, so I can’t tell which details, if any, Sprint PM tosses.

The picture files in the ZIP all have a modified date of today’s date, which sucks. They ought to have the date the files were uploaded, if not the date of the picture. The good news is the JPEG image data seems to preserve the metadata:

    exif:DateTimeDigitized: 2010:06:29 22:23:48
    exif:DateTimeOriginal: 2010:06:29 22:23:48

but the bad news is, a lot of my camera snaps don’t have this information.

Fork over the date and caption!

But SPM knows the date of a picture. If I go into the picture gallery, it shows a date for each picture, which I think is the date the picture was uploaded to SPM.

The other killer problem is, you lose any caption you put on Sprint’s site! Some of my uploaded pictures automatically got a caption of the camera’s filename, and others I had patiently named. Sprint doesn’t add this metadata to the JPEG.

So, a better workflow would be

  1. download groups of pictures
  2. open the album
  3. advance through each picture
  4. note the caption of the picture and the date taken.
  5. Rename the corresponding downloaded file to match the caption, and change its modification time to reflect the date taken. And/or store this metadata in your picture gallery of choice (e.g. Windows Explorer’s File Properties).

This sort of “screen scraping” is ripe for automation. But here’s where you run into a problem with fancy web applications: as you click on albums and advance through their pictures, the URL at pictures.sprintpcs.com doesn’t change. You can’t feed a set of URLs into a command-line tool like curl or wget and then look for the caption and date, because it’s all done through onclick JavaScript wizardry at a single URL.

There must be a solution to this; whatever your browser does to get info from Sprint’s site, you can always use a computer to fake the same request. (Remember the scene in The Social Network where Mark Zuckerberg/Jesse Eisenberg codes up a Perl script to pull all the girls’ pictures from Harvard’s network?) So I used the excellent Firebug add-on for the Firefox & SeaMonkey browsers to watch my browser’s requests going to Sprint in its “Net” pane. Each time you click to advance the slide carousel, it makes a request similar to

http://pictures.sprintpcs.com/ajax/sessionExpiryCheck.do?target=/ui-refresh/getMediaContainerJSON.do%3FcomponentType%3DmediaDetail&sortCode=17&count=10&containerID=MMMMMMMM&elementID=NNNNNNNN&rnd=xxx.xxxxxx&offset=20&count=10

Sure enough, if you copy this URL into another tab,  you get a nice data structure giving details of your pictures, and crucially, the captions for them. Here’s the info for one gallery item:

{
 "elementID":NNNNNNNN,
 "containerID":"MMMMMMMM",
 "hasTransform":false,
 "creationDate":"Nov 9, 2005",
 "albumName":"Friends",
 "mimeType":"image\/jpeg",
 "audioContainerID":null,
 "hasVoiceCaption":"false",
 "audioElmtID":"",
 "description":"John Doe at reception",
 "URL":
  {"audio":"",
   "thumb":"\/i\/NNNNNNNN_40_1.jpg?outquality=56&ext=.jpg&255,255,255,1,0,0,0,0&squareoutput=255,255,255&aspectcrop=0.5,0.5,1.0,1.0,1.0",
   "image":"\/i\/NNNNNNNN_0_1.jpg?outquality=56&ext=.jpg",
   "video":"\/m\/ZZZZZZZZ_0.mp4v-es?iconifyVideo=false&outquality=56&contenttype=application\/x-quicktimeplayer"
  },
 "mediaItemNum":25,
 "mediaType":"IMAGE"
}

But there’s no need to be restricted to the URLs the web page emits. You can mess with the query string of this URL. Remove the expiryCheck wrapper, simplify the parameters (I use the URL Decoder/Encoder utility web page to convert %3D stuff into readable characters), change the offset to offset=0, and remove both count= values, and you’ll get all the info for every picture in the album. (Well, after several thousand empty whitespace characters, because the SPM programmers must be using some combo of templating and programming, and weren’t careful to disable output.)

This structure is in JSON format, and there are online viewers that present it in a more user-friendly format, e.g. http://jsonviewer.stack.hu/
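Or, since you’re already logged in in the browser, you can pull out just the fields you care about from that tab’s console; a sketch, assuming the response parses to an array of gallery items like the one above:

// In the tab showing the album's JSON: list elementID, date, and caption per picture.
var items = JSON.parse(document.body.innerHTML.replace(/\\'/g, "'"));
items.forEach(function (item) {
  console.log(item.elementID + '\t' + item.creationDate + '\t' + item.description);
});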

I don’t know about the audio annotation and the video link. And I’m not sure whether any image effects or cropping you do on Sprint’s site affect the downloaded image; presumably that’s the hasTransform field.

Getting the albums

I have three pages of albums. The key seems to be the containerID; maybe there’s a way to get a list of them. Again, using Firebug’s Net pane, I see

  1. http://pictures.sprintpcs.com/ajax/view/messages/getUserAlbumList.do?tabName=userAlbums&random=0.5704244691070494&initialAlbumId=TBD&mediaCount=TBD
    but that just creates an empty UI for albums.
  2. Next an
    http://pictures.sprintpcs.com/ajax/sessionExpiryCheck.do?target=/ajax/view/getMediaItemsForCarousel.do…
    that seems to get a similar JSON structure for the items in my default album, “Uploads”
  3. Then a
    http://pictures.sprintpcs.com/ajax/sessionExpiryCheck.do?target=/ajax/view/getAlbumListResults.do%3FsortCode%3D5%26numToDisplay%3D12%26containerID%3D%26thumbsoffset%3D0%26offset%3D0%26count%3D11&rnd=xxx.xxxxxx
    That’s the call, it returns album information.

You can get rid of the sessionExpiryCheck that’s wrapping the actual call. And as with the URL for an album’s details, take out the count, in fact take out everything, and the following simple request

http://pictures.sprintpcs.com/ajax/view/getAlbumListResults.do?sortCode=5

returns a structure listing all my photo albums, e.g.

...{
 "panel":"albums",
 "url": "javascript:YAHOO.com.sprint.pm.carousel.carouselFunctions.getMediaThumbnails('MMMMMMMM','24');",
 "imgSrc": "/i/NNNNNNNN_60_1.jpg?outquality=56&ext=.jpg&aspectcrop=0.5,0.5,1.0,1.0,1.0&showCover=main",
 "containerID": "MMMMMMMM",
 "shared": "false",
 "mediaCount":"24",
 "typeCode":"ALBUM",
 "title":"House Ideas",
 "trunTitle":"House I...",
 "coverTitle":"24 Picture(s) Updated Oct 5, 2005",
 "creationDate":"Oct 5, 2005",
 "modificationDate":"Oct 5, 2005"
}...

Alright!

For each containerID in the list,

  • request
    http://pictures.sprintpcs.com/ui-refresh/getMediaContainerJSON.do?componentType=mediaDetail&sortCode=17&containerID=MMMMMMMM
  • if the album has fewer than 50 pictures, drag it to the [Download to my PC] box on SPM,
  • otherwise … maybe we can automate the download of pictures from the carousel

Triggering the download (incomplete)

Again using Firebug’s Net panel to watch the requests, when I drag a small album to the “Download to my PC” box, it requests

http://pictures.sprintpcs.com/ajax/sessionExpiryCheck.do?target=/ui-refresh/download/albumMedia.do%3FcontainerID%3DMMMMMMMM

This triggers a confirmation box telling you what to do with the download. The confirmation box also has a hidden form filled with the names and extensions of the pictures in the album, so that when you click [OK] it POSTs a request to

http://pictures.sprintpcs.com/downloadzip/sprintpictures.zip?machineid=wmtp014&

containing this information

NNNNNNNN ext=.jpeg
elementID=NNNNNNNN

fileName=sprintpictures
type=zip

and Sprint’s response to this is to supply the ZIP file.

The second time you drag, the download seems to trigger immediately. The URL requested is

http://pictures.sprintpcs.com/ajax/sessionExpiryCheck.do?target=/ui-refresh/download/albumMedia.do%3FcontainerID%3DMMMMMMMM

If you issue this URL request on its own, it appears to do nothing: the server responds with a web page containing just a form with the names and extensions for the pictures in the album, but the form lacks a submit button. SPM obviously expects its own JavaScript to get this form from the server and then submit it. Automating or simulating that form submission is doable, but it’s harder than issuing a simple GET request.
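If someone does crack the login, faking that POST from node.js would look roughly like this; it’s untested, the per-picture “ext” field name is my guess at how the form dump above is encoded, and you’d need a valid session cookie and machineid:

var http = require('http');
var fs = require('fs');
var querystring = require('querystring');

// Field names taken from the hidden form seen in Firebug; values are placeholders.
var form = {
  elementID: ['NNNNNNNN'],     // one entry per picture you want
  fileName: 'sprintpictures',
  type: 'zip'
};
form['NNNNNNNN' + 'ext'] = '.jpeg';   // guessed encoding of the "NNNNNNNN ext=.jpeg" field

var body = querystring.stringify(form);

var req = http.request({
  host: 'pictures.sprintpcs.com',
  path: '/downloadzip/sprintpictures.zip?machineid=wmtp014&',
  method: 'POST',
  headers: {
    'Cookie': 'pmjsessionid=...',   // your logged-in session cookie
    'Content-Type': 'application/x-www-form-urlencoded',
    'Content-Length': Buffer.byteLength(body)
  }
}, function (res) {
  res.pipe(fs.createWriteStream('sprintpictures.zip'));   // save the ZIP response
});
req.write(body);
req.end();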

Other information lost

SPM can generate a thumbnail for every video, something that seems beyond Windows and Linux desktops. This thumbnail isn’t included in the ZIP file when you [Download to my PC]. It is mentioned in the gallery item info in the album details, as both

"thumb": "/m/NNNNNNNN_40.mp4v-es?iconifyVideo=true&outquality=56&255,255,255,1,0,0,0,0&squareoutput=255,255,255&aspectcrop=0.5,0.5,1.0,1.0,1.0",
"image": "/m/NNNNNNNN_0.mp4v-es?iconifyVideo=true&outquality=56"

The first is a 40-pixel-square thumbnail; the second is larger. Leaving off the quality= parameter increases the image size.

It might be nice to grab this video still, but the VLC media player can always take a snapshot from the downloaded video.

As you can see from my excerpted album details, SPM stores a voiceCaption and audio information. This probably doesn’t come with the download, so I should check whether any of my pictures have these.

Understood, and slightly automated

I’ve written a bookmarklet that transforms the info about all your albums at http://pictures.sprintpcs.com/ajax/view/getAlbumListResults.do?sortCode=5 into a human-readable list of albums with links to download their metadata. If you know what a bookmarklet is, see my notes on this approach.

This could be completely automated with a script that logs in to SPM, remembers the cookie SPM sends back that authorizes your session, then requests the list of albums, then steps through requesting metadata and triggering downloads, and then even changes the dates and titles of your pictures. Hmmm, there’s a tutorial for doing this sort of thing using the excellent node.js server. Again, see my fumbling notes and steps toward this approach. Yet another approach is to write a web page that makes XMLHttpRequests.

Posted in web | 7 Comments

Anthony Lane, funniest writer at The New Yorker (more!)

Irrepressibly droll.

On 20-kilometer walk competitors at the Beijing Olympics:

They will continue to propel themselves, year in, year out, as if learning to moonwalk too soon after a hip replacement.

On Yoda  (Space Case, “Star Wars: Episode III”):

Also, while we’re here, what’s with the screwy syntax? Deepest mind in the galaxy, apparently, and you still express yourself like a day-tripper with a dog-eared phrase book. “I hope right you are.” Break me a f***ing give.

An aside from a surprisingly enthusiastic review of “Anvil! The Story of Anvil”:

Specialists might prefer to file them [Anvil] under thrash metal, that delicate subset of the genre, but “Anvil!” is wise enough to steer clear of such hairsplitting, not least because, in a world where most of the guitarists look like exploded spaniels, there is an awful lot of hair to split.

It was a stroke of genius to send him to the Eurovision Song Contest, and he *kills*. Even if you can’t sing “Dinge Dong” and “Waterloo”, and never saw Bucks Fizz rip their skirts off, it’s hilarious:

She [Celine Dion early in her career] looked like a naval officer trying to mate with a lampshade.

He rejects amped-up, choppily-edited incoherent movies and (correctly) rails against the increasing pornography of violence that movies wrap in comic book form. But he’s no Andy Rooney. From a review of Red:

Why should our mature, more thoughtful citizens be expected to watch loud films full of muscular men in their twenties shooting each other and blowing stuff up? What manner of challenging drama would the middle-aged prefer? And the answer is: loud films full of muscular men in their fifties shooting each other and blowing stuff up.

Describing how Robert Redford and his director of photography light scenes in The Conspirator:

I was hoping that Redford had exhausted his love of soft gilding in “A River Runs Through It” (1992), better known as “The Vaseline Rubs on It,” but the new film bathes in the stuff.

In a sweet but not cloying piece about life at Pixar:

[Chief creative officer John] Lasseter became a skipper on the Jungle Cruise, at Disneyland, still one of the best preparations for a life in the movie business, where the crocodiles wear suits.

Madonna’s “W.E.” gets another zinger:

like all royal sagas, including “The King’s Speech,” this film is determined to present the British princes as handsome devils, whereas, in reality, they were bred to look like basset hounds with indigestion.

More

The convoluted spy thriller “Red Sparrow”:

… C.I.A. agent, Nate Nash (Joel Edgerton), who is handling a Russian mole. Nate’s bosses, however, alert to Dominika’s game, order him to entrap her, so that she can be coaxed into spying for the Americans. The plot burrows this way and that, and the mole-work grows so frantic that the movie starts running out of lawn.

Posted in writing | Leave a comment

audio: DSD digital nirvana approaches

Four years ago I blogged about wanting “the master tape,” the exact version that the musicians are hearing in the control room.

Since then there’s been some progress.

1. High-res digital tracks for download

Although Best Buy long ago junked their shelves of DVD-Audio and Super Audio CD “better than CD” disks, HDtracks will sell you high-resolution audio files. I can buy Steely Dan’s Gaucho at 96kHz/24bit for $18.

2. Better quality compressed music

Meanwhile Apple’s “Mastered for iTunes” program is encouraging producers to start from higher-than-CD resolution digital files when they create the compressed 256kbps files sold on iTunes, for alleged better quality. And Apple hints at delivering higher resolution to users: “As technology advances and bandwidth, storage, battery life, and processor power increase, keeping the highest quality masters available in our systems allows for full advantage of future improvements to your music.”

3. Audiophile digital file playback

Audiophiles have also embraced playback of digital files stored on a computer. Instead of claiming huge sound differences between $3,000 and $30,000 CD players, audiophile reviewers now rhapsodize over how a $170 USB cable sounds better than the one that came with your phone, or how a digital track sounds better if the USB cable goes into a $1,000 box that converts it to the S/PDIF digital format before plugging into a DAC that actually converts the 1s and 0s to analog audio.

Meanwhile the ultimate format advances…

However, the consensus remains that the DSD format used on SACDs sounds better than the PCM audio format used on DVD-As and high-resolution downloadable tracks, no matter how high you crank the kHz and bit depth of the latter. Sony has been predictably clueless in failing to promote its DSD format or extend its use beyond the nearly defunct SACD disk (though the PS3SACD site claims SACD releases are picking up steam); consequently, to enjoy this ultimate format, audiophiles have had to borrow hard disk recorders from recording studios, use a digital hand-held recorder, or burn special PS3 Blu-ray disks. But it seems:

  1. Personal computer software that can store and transmit a DSD file,
  2. a transfer format to send the DSD 1s and 0s over USB,
  3. and consumer audio equipment that can decode DSD

are arriving. There’s a great article in Positive Feedback about this progress.

With Amazon, Apple, and Google all giving away free tracks, regularly offering classic albums dirt cheap, and offering any song you hear and like for $1.29 or less, my library of digital-only music is steadily growing. (I know it’s cheaper still and probably higher-quality to buy a second-hand CD, rip the songs, then sell it, but it bothers me that the artists don’t get money.) I was considering a box like the Cambridge Audio DacMagic Plus to play songs from my computer over USB and songs from my smartphone over Bluetooth on my stereo, but now there’s a good reason to wait for playback of DSD masters. It used to be that a handful of audiophiles would get a reel-to-reel copy of the studio master tape through unofficial channels, but soon the “golden master” will be available to anyone!

While waiting for the future to arrive I plug my smartphone’s headphone out into my pre-amp’s line in, the low-quality un-digital un-audiophile playback method. The best is the enemy of the good.

Posted in audio | Leave a comment

music: “Don’t Disturb this Groove” near-great 12-inch

The System’s “Don’t Disturb this Groove” is a monster jam, filled with lush melodic touches and impassioned vocals with inventive vocal overdubs. So the 12-inch should be one of the greatest of all time!?

Alas, nuh-uh. It’s one of many unexceptional 12-inch singles that Atlantic cranked out, like all my Atlantic/Cotillion CHIC 12-inchers. It’s at 33 1/3, not 45 rpm, on slim vinyl, and the grooves aren’t as spaced out as they could be; maybe that’s why the bass isn’t as enveloping as it deserves to be. The mysterious increased sonic depth of (most) vinyl emphasizes the painfully fake synth brass sound. The mastering engineer is uncredited. It’s good, but it should be eargasmic. Maybe the album version is better, maybe I need a better turntable. There are rumors of a European 45 rpm 12-inch, but no photographic evidence. Oh well.

The girl’s “uh-huh”… “yessss” at 2:28–2:43 are the sexiest affirmations in popular music! On the 12-inch, you can hear her giggling/gasping/crying during the intro. Audrey Wheeler, Dolette McDonald, & Michelle Cobbs are credited on backing vocals for the album, but B.J. Nelson has a credit for solo vocals on this track; I’d love to think it’s her. She’s the hugely talented singer on Scritti Politti’s equally great 80s albums (e.g. “A Little Knowledge” from Cupid & Psyche 85)!

Posted in music | Leave a comment