
: "All these pictures are 360° panoramas projected to look like small planets." All panoramas should undergo such a projection. I have decreed it!

: It looks like the Libre Map Project is paying off! I'm always happy at reductions in the gap between "public domain" and "available online".

Commercial:

"Hi. I'm a human..."
"...and I'm a Cylon."

: I was reading a Steven Levy book (Insanely Great) about the development of the Macintosh, and after a while it went a little off the rails and started talking about things that were not t.d.o.t.M. But one of the things it mentioned was John Sculley's blue-sky idea of a notebook computer called the Knowledge Navigator, with software agents and voice recognition and basically everything you could want for the Navigation of Knowledge. Sculley even had a bunch of long-form commercials made for it.

Well, it didn't take me long to get up and search for those commercials online, where they now live. They're surprisingly similar to Douglas Adams's much more enjoyable Hyperland. In fact, I think Steven Levy was thinking about Hyperland when he described some aspects of Knowledge Navigator.

[Comments] (1) : Good news, everyone! Long-time NYCB favorite The Sugar Beet has a book coming out!

: Dan's 20th Century Abandonware has lots of nice photos and screenshots.

Backup Thanksgiving: When the Thanksgiving leftovers start to run out it's time for Backup Thanksgiving. In recent years I made Backup Thanksgiving because I was learning to cook, or because of the absence of certain foods from the official Thanksgiving table. This year I made it because it's just good food.

This year Backup Thanksgiving was roast chicken, orange-cranberry sauce, dressing, mashed potatoes, mashed sweet potatoes, carrots, green beans, and Parker House rolls. So delicious. But now Sumana's sick of the leftovers and I'm almost done reincorporating them. I made stock from the chicken carcasses and bread pudding from the cranberry sauce. Now I have to start thinking about what to make instead of just pulling stuff out of the fridge.

PS: The first six chapters of the REST book are now done!

We'll Distribute Anything: Sumana was looking at her old Abbott and Costello tape (which, she now realizes, started her interest in sketch comedy) and was curious about what else the ultra-80s distributor GoodTimes Home Video was in the habit of distributing. A little of everything, it turns out. My fave has got to be: "Form... Focus... Fitness, the Marky Mark Workout".

: mah gets spam from Philip K. Dick stories

QOTD: Sumana: "That's a good explanation. Of something else."

[Comments] (1) Cool, Refreshing Nilas Ice: Today's EPOD: Nilas Ice

: Rocket mail. Seems not cost effective.

The mail consisted entirely of commemorative postal covers addressed to President of the United States Dwight Eisenhower, other government officials, the Postmasters General of all members of the Universal Postal Union, and so on. They contained letters from United States Postmaster General Arthur E. Summerfield.

Reinforces my belief that the postal service makes most of its money acting as a kind of live-action MMORPG, arbitrarily creating things and declaring them to have value.

[Comments] (1) : Wikipedia is your great trivia game collaborator. Sumana and I played that game for a couple hours last night and it was a lot of fun. It's hard to find something appropriate by visiting random pages, but you can do well by picking a category and going through it.

You do have to pick something in a field the other person knows fairly well, because the fun of the game is in being able to make a valid guess every time a new piece of information is revealed. I did very poorly when Sumana was going through "Hollywood Squares contestants" and "People from Cleveland". OTOH Sumana did well identifying the honorary Globetrotters (not an official Wikipedia category at the moment, though if you put the Futurama characters in "fictional honorary Globetrotters" you'd have one of the best Wikipedia category names ever). Sumana reminds me that I did well with "Fictional robots with emotion" and "Characters played by a member of the opposite sex". At the intersection of those two lies Gypsy.

[Comments] (1) : We went up to New Haven today to hang out with the Minutillos. We had lots of fun! They showed us around, we talked and played games on the Wii, had dinner at a vegetarian restaurant, etc. Recommended. If you know the Minutillos.

[Comments] (2) : Beautiful Soup Works For You.

WD-50: Rachel Chalmers was in town! We celebrated by dining at WD-50, New York's ritziest restaurant/lubricant. There were good flavor combinations and textures, especially the dessert tasting menu (peanut + guava + shortbread, chocolate + lime + licorice + avocado). And foams! Horseradish foam, orange blossom foam; you name it, it was a foam. The portions were small by my standards and practically broke the bank, but it was worth eating there at least once. Made me want to go to the Chinatown Ice Cream Factory afterwards and just get a big ol' cone, but I didn't.

Giant Tuesday Night of Amazing Inventions and Also There is a Game: Sumana finally got me to go to this comedy show just as it's ending (last show is next giant Tuesday). It was a lot of fun, though I was hoping for more inventions, because they're like the MST3K invention exchanges and because Mike Birch plays a great inventor: the perfect combination of Joel's paranoid naivete and the Mads' cruelty.

[Comments] (1) : Too bad I found this after I bought presents for everyone, because Etsy pushes a lot of buttons. People make things and sell them online through a decentralized marketplace! It plays into everyone's preconceived notions about what the Internet makes possible!

Sumana says "You're really only indie and fresh if Wikipedia deletes you for not being notable."

[Comments] (2) List of Fun:

[Comments] (1) Speaking in tongues: I was walking around today and passed a guy who was talking in a strange language. It was too regular, though, to be any human language. And in fact it wasn't a special language at all; he was just saying the same thing over and over again, in English. "Rolex rolex watch [unintelligible; presumably a price] rolex rolex rolex watch [price] rolex watch rolex".

He spoke to no one in particular. Most likely he was hawking fake or otherwise black-market Rolexen, getting plausible deniability from the crowd and his deadpan delivery. But I like to think of him as one of the new breed of mad monks, who take their mantras from the Heraclitean river of spam that flows through our lives. Strong buy strong buy one to watch about to explode strong buy...

: Crazy things: CryoPID turns a running process into a disk file.

[Comments] (2) Hacker Duel: Ward Christensen versus Ward Cunningham.

: Hung out with Evan today. I told him about Kevin's record collection of punk bands with great names like "Live Skull". As we spoke the radio in the restaurant was playing aggravating commercials with creepy lines like "add a little bit of sour cream, and watch 'em smile!" and "it follows your every move!" It was at this point we decided that all brand names should be replaced with "Live Skull". It works beautifully.

[Comments] (2) : Interesting fact from the Whole Earth Software Catalog. You've heard of special-purpose word processing machines, but there was once a special-purpose spreadsheet machine: the WorkSlate. It didn't do too well.

Clipping Service: I'm quoted in an article on stock spam in the Fredericksburg Free Lance-Star.

Weird Search Requests:

what do the different parts of the christmas cracker mean?

What does a paperclip mean?

Here's one answer, though I can't determine the relative positions of tongue and cheek:

The Christmas cracker was invented, purely by chance, by an English baker called Tom Smith. He took the simple principle of the wrapped sweet or 'bon bon' and added first a love motto, then, after much experimenting, a strip of paper impregnated with a compound which would 'crack' when opened. Over the course of time he dropped the sweet, lengthened the wrapper and introduced small novelty gifts.

I guess 2% inspiration and 98% perspiration moves an invention into "purely by chance" territory?

[Comments] (2) Goodbye Google SOAP Search Service: You may have heard that Google has deprecated their SOAP-based search service. This comes after Nelson Minar, who worked on that API back when he was at Google, said he'd "never choose to use SOAP and WSDL again."

Much as I dislike the way SOAP is used these days, I'm not inclined to gloat, because the deprecation just means more work for me. I used the SOAP search service as an example in all three of my books. Now I've got to find another free, public, non-obscure SOAP service to use as an example (ideas?).

The official Google narrative is that the SOAP-based web service has been replaced by something called the "Google AJAX Search API". If you take this narrative at face value it means that Google has taken down their web service and put up an AJAX library in its place. What's AJAX? Who knows, it's AJAX. Here's a typical weblog entry on the topic.

Another victory for REST over WS-*? Nope -- Google doesn't have a REST API to replace it. Instead, something much more important is happening, and it could be that REST, WS-*, and the whole of open web data and mash-ups all end up on the losing side.

It's probably my recent proximity to Sam that's doing it, but I'm noticing a tendency in myself to draw fine distinctions. There's a sense in which this narrative is right and a sense in which it's not. I'm going to pick apart the narrative and show what exactly is disturbing about the Google AJAX Search API.

It's not true to say that "Google doesn't have a REST API to replace it." In fact, Google has two REST APIs, and one of them predates even the SOAP API. You've probably used this old API: its primary endpoint is http://www.google.com/.

Yes, the Google website is in fact a very RESTful web service. The downside of this web service is that it's a little bit difficult to use automatically, as opposed to through a web browser. It serves data in a human-oriented format (HTML), and you have to screen-scrape it into a data structure if you want to do anything with it.

There are libraries for doing this, but the other problem is Google doesn't want you to do it. It violates Google's Terms of Service ("No Automated Querying"). Lots of inconsiderate people write scripts that hammer Google's REST API day and night. Google tries to prevent this by sniffing out anything that might not be a web browser and preventing it from accessing the API. (To see this, set your browser's User-Agent to "libwww-perl" and try to use Google.)
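You can run the same experiment from Ruby. Here's a sketch; I'm assuming the shut-out arrives as an HTTP error status (rather than, say, a CAPTCHA page), which open-uri raises as an exception:

#!/usr/bin/ruby
# Impersonate an automated client and watch Google's user-agent
# sniffing kick in. Sketch only: the exact form of the rejection
# is Google's business.
require 'open-uri'

begin
  open('http://www.google.com/search?q=jellyfish',
       'User-Agent' => 'libwww-perl').read
  puts 'Not shut out -- maybe the sniffer is off today.'
rescue OpenURI::HTTPError => e
  puts "Shut out: #{e.message}"
end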

But it can't be denied that people outside of Google have a powerful hankering for Google's dataset, so eventually someone (Nelson Minar, it seems) came up with a second web service API that was designed just for use by automated clients. The catch was that you had to sign up for a unique key to use it, and that key would only work for you 500 (later 1000) times a day.

In point of fact Nelson chose a SOAP/WSDL architecture for this web service. But those constraints didn't call for a different architecture at all. Here's another way the constraints above might have been implemented:

When you make an HTTP request to google.com, we try to figure out whether you're a web browser or an automated client. Ordinarily, if you're an automated client, we shut you out. But here's the deal. Now you can sign up for an "automated client key". When you make an HTTP request to google.com, stick your key into the Authorization header. Not only will we not shut you out, we'll try to make things easy for you. Instead of a human-oriented HTML document, we'll send you the appropriate data in an easy-to-parse XML format. But, we'll only do this for you 500 (later 1000) times a day. Then it's back to shutting you out.

This technique has a number of subtle benefits which I could bore you with for quite a while. But its obvious benefit is that it's got the exact same "API" as the Google website, which everyone knows how to use.
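To make the proposal concrete, here's what a client for this imaginary service might look like. Everything specific to it is invented for illustration: the "GoogleKey" credential scheme, the key itself, and the XML response.

#!/usr/bin/ruby
# Client for the hypothetical keyed google.com described above.
# The "GoogleKey" Authorization scheme and the XML response are
# inventions; no such service exists.
require 'uri'
require 'open-uri'

key = 'your-automated-client-key'   # from the hypothetical signup
term = 'web services'
xml = open("http://www.google.com/search?q=#{URI.escape(term)}",
           'Authorization' => "GoogleKey #{key}").read
# A web browser making this request gets HTML; an automated client
# presenting a key would get the same data as easy-to-parse XML.
puts xml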

Anyway, instead of going down a route like this (which would, I think, have changed the history of web services quite a bit), Google went down the SOAP/WSDL route. Now they're deprecating the SOAP service in favor of some mysterious "AJAX API". This brings me to the second of Google's REST APIs.

There is no magical thing called an "Ajax request". An Ajax client makes normal HTTP requests, and processes the results automatically, just like a web service client. An Ajax client is a web service client.

What HTTP requests is the Google Ajax client making? I poked around a little bit and it looks like it mainly makes GET requests to URIs that look something like http://www.google.com/uds/GwebSearch?callback=GwebSearch.Raw&context=0&lstkp=0&v=1.0&key=xxxxxxxxxx&q=web+services. That's not exactly http://www.google.com/search?q=web+services, but it's not too far off either.

The Google AJAX API consists of a browser-side Javascript library and a server-side web service. The one acts as a client for the other. From what little I've seen of the web service I'd consider it quite RESTful. In fact, it's architecturally very similar to Yahoo!'s RESTful search API. They both use the same (IMO, fairly unsafe) trick to get a web browser to execute dynamically-generated Javascript code from another domain.

The main difference is that Yahoo's search API can also be made to send data (in JSON or ad-hoc XML format) instead of executable Javascript. That makes it possible for the service to be consumed by automated clients, not just by web browsers running client-side Ajax programs.
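Here's a sketch of what an automated client of Yahoo's service looks like. The endpoint, parameter names, and response keys are from my memory of Yahoo's documentation, so double-check them before relying on this:

#!/usr/bin/ruby
# Automated client for Yahoo's RESTful search API, asking for JSON
# instead of executable Javascript. Endpoint, parameters ("appid",
# "query", "output=json"), and response keys are recollections,
# not gospel.
require 'rubygems'
require 'uri'
require 'open-uri'
require 'json'

appid = 'YahooDemo'   # Yahoo's shared demo application ID
term = 'web services'
uri = "http://search.yahooapis.com/WebSearchService/V1/webSearch" +
      "?appid=#{appid}&query=#{URI.escape(term)}&output=json"
response = JSON.parse(open(uri).read)
response['ResultSet']['Result'].each do |result|
  puts result['Title']
  puts " #{result['Url']}"
end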

Let me just see if I can do something similar with the Google web service. The Javascript it serves is extremely close to also being a JSON document; I should be able to hack it a little and parse it as JSON.

Here's some Ruby code that gives you kind of a command-line Google search like people used to write for the old SOAP API. It requires the json gem.

You can skip the code.


#!/usr/bin/ruby
require 'rubygems'
require 'uri'
require 'open-uri'
require 'json'

KEYS = %w{GsearchResultClass unescapedUrl url visibleUrl cacheUrl
          title titleNoFormatting content results adResults
          content1 content2 impressionUrl}

def search(key, term)
  uri = "http://www.google.com/uds/GwebSearch" +
    "?callback=GwebSearch.Raw&context=0&lstkp=0&v=1.0" +
    "&key=#{key}&q=#{URI.escape(term)}"
  javascript = open(uri).read

  # (The crucial line I removed goes here; see below. It would
  # derive the string 'json' from 'javascript'.)

  # Hack quotes around the hash keys to make the Javascript string
  # into JSON. Note the doubled backslashes: in a double-quoted Ruby
  # string, "\s" is just a space, not the regex whitespace class.
  # The block variable isn't named 'key', which would shadow the
  # API key.
  KEYS.each do |hash_key|
    find = Regexp.compile("\\s*#{hash_key}\\s*:")
    json.gsub!(find, " \"#{hash_key}\" : ")
  end

  parsed = JSON.parse(json)
  return parsed["results"], parsed["adResults"]
end

# Command-line interface begins here

(puts "Usage: #{$0} [API key] [search term]"; exit) unless ARGV.size == 2
key, term = ARGV

results, ads = search(key, term)
puts "#{results.size} results for '#{term}':"
results.each do |result|
  puts result['titleNoFormatting']
  puts " #{result['url']}"
  puts " #{result['content'][0..70]}" unless result['content'].empty?
  puts 
end

unless ads.empty?
  puts "Look at some ads while you're at it:"
  puts '------------------------------------'
  ads.each do |ad|
      puts ad['title']
      puts ad['visibleUrl']
      puts " #{ad['content1']}"
      puts " #{ad['content2']}"
      puts
  end
end

Now, in old episodes of MacGyver, whenever MacGyver built a bomb out of baking soda and masking tape, the writers would change some crucial detail (like changing the masking tape to Scotch tape) so that if kids copied MacGyver they wouldn't blow up the house. I've done something similar here. I've removed a crucial line of code from that program, so that people don't just go copying it and running it all over the place.

Why did I do that? Because when it works, that program violates the Google AJAX Search API Terms of Service. "The API is limited to allowing You to host and display Google Search Results on your site." I can use the old SOAP API to write a command-line search tool, but I can't use the new, RESTful API in that kind of application. My users can only access the RESTful API through a specific library (Google's Javascript library), running in a specific way (in their web browsers), for a specific purpose (displaying search results).

Wait a minute... running only in a web browser? Terms of Service? Bootleg scripts that hack the output into something a parser can understand? This REST web service is made available on exactly the same programming-unfriendly terms as the Google website "REST web service"!

Instead of screen-scraping a web page, I'm now screen-scraping a web service. I'm reverse-engineering undocumented URI formats, just like I do when I screen-scrape. So far, there's nothing on Google's end that sniffs my user-agent to make sure the web service only runs in a browser, but you can bet there will be as soon as that becomes a problem for Google.

The "blow to web services" narrative is incorrect. Google did in fact deprecate their SOAP API and expose a RESTful API. A win for REST!

Though incorrect, the "blow to web services" narrative is also correct. Google deprecated their SOAP API, exposed a RESTful API, and then erected a bunch of technological and legalese barriers around any attempt to actually use the RESTful API. You're only allowed to use it through one library in one language in one environment for one purpose. A loss for everyone!

On the level of technological choices, this move is a big improvement. They've gone from SOAP, which has a lot of overhead, to plain old HTTP, which has strictly less. Gone from an RPC style, which doesn't play well with the web, to a RESTful style, which does. This makes an enormous amount of technological sense. From its first day on the web, Google has exposed its dataset through a RESTful interface that gets orders of magnitude more traffic than any "web service" it might expose. In a sense, all they're doing now is unifying the architectures.

When it comes to getting information into the hands of people who can use it, Google has taken a big step backwards. The SOAP interface was serious overkill, but what you did with it was your business (though you could only do it 1000 times a day). The new RESTful interface is a technical improvement, but it's encumbered with restrictions that make it a museum piece. Unless you're writing an Ajax application using Google's library, its true value can only be obtained illicitly. And that's the other reason why I'm not inclined to gloat.

[Comments] (4) Entities: I got a Christmas present shipped to me. The FedEx sticker on the front says it's from "Fran&#146;s Chocolates".

[Comments] (1) : I talked to Riana today and she told me about Knuth 3:16. Hilarity!

rest-open-uri: My open-uri hack isn't going into Ruby anytime soon, so rather than have to devote a bunch of space in the book to hacking or reimplementing, I packaged it as the rest-open-uri gem. You can use it wherever you'd use open-uri, and it supports entity-bodies and all the HTTP methods. It's the only Ruby HTTP client library I need! And I think it should work even in programs that also use open-uri (so long as you require it after open-uri).
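A quick sketch of the kind of use I mean (the :method and :body options are how I remember the extensions working; trust the gem's own docs over this):

#!/usr/bin/ruby
# rest-open-uri in action: same interface as open-uri, plus
# entity-bodies and the rest of the HTTP methods. Check the
# :method and :body options against the gem itself.
require 'rubygems'
require 'open-uri'        # the stock library first...
require 'rest-open-uri'   # ...then the extension, as noted above

# A plain GET, exactly as with stock open-uri:
puts open('http://www.example.com/').read

# A PUT with an entity-body:
open('http://www.example.com/some-resource',
     :method => :put, :body => 'my representation')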

QOTD:

Leonard: I'll do it, but I'll be using scare quotes the whole time! "Scare quotes"!
Sumana: Yes, Leonard, scare quotes. That's what keeps you from being bourgeois.

I laughed and laughed.

Giant Tuesday: Last night we went to the last episode of Giant Tuesday and it was great. Highlights included the sentient George Foreman grill that talked like George Foreman, and the best-executed fart joke I've ever seen. Too bad it's now GONE FOREVER.

[Comments] (1) Artificial Beach: I'd known for a long time about the artificial beach theme park in Japan. I've been to an artificial beach myself (though not such an opulent one), so I didn't think there was anything especially odd about it until immediately after waking up this morning, when I realized that Japan is an island chain and they've already got more beach than they know what to do with. It's like having a Vegas casino where the theme is "desert and Gila monsters".

Indeed, according to this random weblog entry the artificial beach is right next to a real beach. I guess that makes sense logistically but it takes us into the territory of a Vegas casino whose theme is "ripping off the other casinos on the Strip". Which undermines my point, because that would be the most awesome casino ever.

: Not much to report. Nandini came over for Christmas, which was fun. We watched Brick again.

I finally got my sample web service written for the book, though it's still missing one feature. It's about 16K of code; I don't know how many pages that translates into.

[Comments] (1) : "Sigmund Freud, scribbling in the pages of a Swiss hotel register, appears to have left the answer to a question that has titillated scholars for much of the last century: Did he have an affair with his wife’s younger sister, Minna Bernays?"

Clearly this is why my mother always signed registries as "Sigmund Freud".

: We had present-opening fun and then went to Ben's house for a long party. I got a blowtorch! (Part of a creme brulee kit from Susanna and John) Happy Christmas to all, and to all a good night.

Awesome APOD: "The gegenschein is sunlight back-scattered off small interplanetary dust particles."

DUN DUN: Law and Order: Sesame Street

Egg Thing: It's difficult to search for casserole recipes on the web because most of the recipes are full of mushroom soup. So I went with my gut and did the following egg bake thing for dinner last night. I guess I could have checked the Julia Child, but my gut was closer.

Preheat oven to 375 degrees. Saute the leeks. Beat together milk, eggs, and cheese. Stack the zucchini in a 9-inch baking pan, mixed with the leeks. Pour the milk mixture on top and put bread crumbs on top of that. Bake for 50 minutes. It turned out great!

[Comments] (1) : For a while I was trying out AdSense ads on this site. They never brought in more than a paltry sum, and I stopped doing them yesterday, rather abruptly, when Google cancelled my AdSense account.

The web is full of stories of these arbitrary cutoffs, which I can now confirm in their relevant details. You get an email saying "we've cancelled your account based on classified information" and that's it. You get a pro forma chance to defend yourself, but it's not much good because you don't know what happened. It also doesn't make much difference, because the response always comes back: "we re-ran our classified algorithm on our classified dataset and it gave the same result as before, buh-bye".

Fortunately, I have a friend in the online ad business. Ryan's Project Wonderful mostly serves ads for webcomics right now, which is fine by me, as it means fewer "KEYWORD? Buy it now at creepysite.com" ads.

: Some minor bugfixes to wadl.rb. Still no official release, still not much in the way of syntactic sugar.

Some books about computer history: I mentioned a while back that I was reading Steven Levy's book about the development of the Macintosh. I guess that triggered something in me to clear out my stock of books about computer history, because then I read The Dream Machine, a book about ARPA by M. Mitchell Waldrop (almost as ponderous a name as J. C. R. Licklider), and Andrew Hodges's famous Alan Turing: The Enigma. And there was also the matter of the intermittently fascinating Whole Earth Software Catalog.

I bought the Waldrop book as a biography of Licklider (actually I bought it because of the BRIGHT YELLOW COVER that blocked out all other books in the bookstore), but it really strains to fill a whole book with his life and make it interesting. Fortunately the book's method of straining is to bring in all the other characters of American computer science from the 1940s through the 1970s, with a theme of Licklider as the networker between them, and that was very entertaining. Though I admit part of the entertainingness was the celebrity-spotting equivalent of those old Looney Tunes cartoons that feature caricatures of all the Warner Bros. contract stars of the 1940s. Oh look, it's {Peter Lorre,John McCarthy}.

The most interesting new information I found in the book is the history of ARPA, specifically of the Information Processing Techniques Office (Licklider was IPTO head twice). The major theme is the Vietnam-era mission anticreep that slowly pushed ARPA from advanced research projects to the more circumscribed realm of advanced research projects that we can use on the battlefield within two years without fundamentally changing anything (somewhat parallel to the problems Xerox PARC had). Meanwhile Licklider's trying to scrape together a couple million to fund his crazy "Intergalactic Network" project, and his office is sticking boilerplate defense justifications onto incoming grant proposals so they can get ARPA money.

"All that language about military rationale wasn't in the Stanford version of the proposals," explains Ed Geigenbaum: it was slapped on at the very end by the ARPA funding officers back in Washington. "The only people who ever saw it were the students who would later dig it up under the Freedom of Information Act. Then they'd bring it on campus and say, 'See, McCarthy is working on such and such.' McCarthy would say, 'What do you mean? I never heard of that!'"

But the book also ties together things that I learned about in college with no hint that they were connected. For instance, ALOHA and Ethernet use the same collision resolution mechanism not because it's an obvious fact about the universe, but because Bob Metcalfe read a paper slamming ALOHA and decided to prove it could work. And multitasking came directly from time-sharing, as cheaper computers made time-sharing less of a selling point.

Fun fact: the book made it sound like ARPA was originally planned to encompass space research, and that NASA was created as a separate agency after a bureaucratic turf war. So the space program could have been part of ARPA.

Understanding the book requires no technical knowledge, but I don't think I would have enjoyed it as much if I didn't already have all these pieces in my mind ready to be connected. The book made me more interested in another item on my wishlist, John Markoff's What the Dormouse Said.

This entry got kind of long, so I'll talk about the Turing book in another entry.

[Comments] (1) The Enigma: This was a weird book to read because I'd already internalized much of it. It's the standard narrative of Turing's life I grew up with, and if you read it for the details you can actually see Neal Stephenson coming up with the plot for Cryptonomicon. So unlike the Licklider book I don't have a whole lot to say about this one, except that it was definitely worth reading.

One thing that jumped out at me, reading the WWII sections, was how much of the codebreaking relied not on abstract mathematical ingenuity but on ingenuity applied to exploiting the adversary's mistakes. For instance, the Germans often used a less secure version of Enigma for unclassified things like the weather report, not realizing that if Bletchley cracked the weather report they'd be 80% done cracking the secure cipher for that day. Not surprising that this got largely left out of Cryptonomicon, because fiction is usually more interesting when the adversaries are competent.

Nethack Interlude: Despite my earlier promises to myself that I'd stop playing Nethack after ascending, I got hooked earlier this month on the alt.org Nethack server, which yields up many interesting bones files. I started trying to ascend a tourist (to my mind the lamest Nethack class), and this morning I succeeded, achieving #1490 on the alt.org high score board. Since the various conducts have never really appealed to me, hopefully this is the end of Nethack addiction.

[Comments] (1) The Anthropic Principle of Open Systems: There's a common theme in the Licklider book and the Tim Berners-Lee book: the theme of contingency. At every stage in the development of the Internet, there was an overwhelming chance that the project would fail. There were vested interests opposed to openness, and then strong competitors that didn't share the philosophy of openness. But the open solution won. The same thing happened, in miniature, for the Web on top of the Internet. I mentioned this in passing last year.

This creates a problem analogous to the problem of the anthropic principle in physics. It's not a perfect analogy because the development of the Internet took place within space-time, but I think you can see it. Why did all these unlikely contingencies happen?

I can think of three possible responses. The first is that the contingencies weren't unlikely at all. There is some hidden force of society that makes sure open solutions tend to win. This is the knee-jerk free software response. I like it okay, but I've grown more dissatisfied with it as I've read these books, because there are lots of other situations in the world where the open solutions lost big time. When does the hidden force work?

The second response is that we gravitate to whatever open solutions we can find to solve our problems. The general public can't use something unless it's open. If things had turned out differently we might be speculating on how unlikely it was that we would develop a tradition of collaborative biology or open letter-writing.

The third is to deny that there's anything special about the contingencies. We don't talk about things that don't happen, so if these contingencies hadn't happened, we wouldn't be talking about them. Saying that the Internet and the Web worked out the way they did is just a tautology. This response has its points when applied to the physics anthropic principle, but I find it unsatisfying in this case, because it looks like the same sorts of contingencies turned out the same way twice.

Anyway, just some high-concept rambling to see out the year.

[Comments] (4) : Today I walked around Manhattan with Evan. I mentioned my desire to own a nice topcoat and Evan took me to basically every clothing store in crowded SoHo. Finally we found a topcoat in the basement of J. Crew. I guess they haven't been selling well lately what with the oppressive non-cold.

I sustained an actual enthusiasm for clothes shopping for a record two stores, but by this time I was willing to settle for just trying on a topcoat and seeing how I looked in it. This resolve was strengthened by the $200 price tag on the topcoat.

The trying-on was a disaster. I'd thought I could just wear a topcoat on top of whatever and it would be warm and make me look classy, but I looked like a slob who'd stolen a topcoat from somebody. I was like the woman in Cryptonomicon who starts wearing stockings and then discovers that stockings dictate your entire wardrobe and lifestyle.

The topcoat works on Evan because Evan already has a compatible wardrobe, but it's totally out of my demographic. This was a fairly distressing realization. The coat would not transfer its class to me merely through a mercenary monetary transaction. This is the difference between class and mere wealth, and I don't even have the consolation that I became extremely wealthy and then discovered it.

[Comments] (1) : Apart from my crushing last-minute defeat by a topcoat, 2006 was an okay year for me. I got married and got used to New York. I got a book published, and sold a book proposal and a SF story. I saw a lot of friends and had a lot of ideas and played a lot of games.

I'm glad I got to spend a lot of time with my mother before she died, and I'm glad we got to see Susanna and John for Thanksgiving. My main 2006 regret is along the same lines: I wish we could have all been together for Christmas; I think we should start making plans for next year and get it set up as early as possible. Someone said that with Mom dead we would need to put in work to see each other, because we'd no longer have a natural gathering place, and I'm starting to see the truth in that.

My current plans for next year are writing-related: the REST book and lots more SF. Let me know if you want to read the noir space opera I'm trying to sell.



Unless otherwise noted, all content licensed by Leonard Richardson
under a Creative Commons License.