
[Comments] (1) : I miss pseudoephedrine.

Copy Me Elmo: First there was Bert hanging out with Osama bin Laden. Then, many years later, in fact a couple days ago, Sumana went to the Bangalore equivalent of a county fair. There were a number of public-service-announcement posters including an anti-suicide poster listing common causes of suicide and telling you to look for the warning signs. Fair enough, except the background image of the poster was this Internet-gag picture of Elmo offing himself.

I can see the thought process--it's certainly eye-catching, and if it weren't Elmo it might even be disturbing--but I don't know how they got the Elmo picture in particular. An image search for "doll suicide" pulls up arty pictures and goth porn. Someone had to think to search for "elmo suicide", and at that point you're really in on the joke.

[Comments] (2) Depressed Dog On Spaceship: One of the constrained writing exercises we did at Viable Paradise was to write a story given a character, setting, and problem provided by three other people. A recent VP graduate has created the Plot-O-Matic which automates the process.

I don't remember what my character, setting, and problem were, which is odd because I'm pretty sure I wrote a story using them. Maybe not!

[Comments] (3) Cookie Do-Over: I don't know if this is something that should be added to the web browser or done as an extension, or if there's some way to do it already, but it would help me out a lot. I have Firefox configured to, every time it's sent a cookie from a new domain, ask me what to do with the cookie. Most of the time it's deny, deny, deny. But sometimes it turns out the right answer was "allow for session", and now I can't use the site.

To fix this, I have to go into the cookie 'Exceptions' and figure out which decision to undo. Usually it's the decision for 'domain.com', the site I'm trying to use, so the process is annoying but not terribly difficult. Sometimes I need to check for both 'domain.com' and 'www.domain.com'. That's not a big deal either.

But sometimes the magic cookie domain turns out to be "r.a.ndom.subdomain.domain.com", a subdomain I've never heard of that's just used for authentication. r.a.ndom.subdomain.domain.com doesn't show up in the 'Exceptions' list near domain.com, and so I don't know it exists. It was shown to me once but I was hitting deny, deny, deny and didn't see it. The only way to see it again is to run Live HTTP Headers, reload the page, and see which of the 50 HTTP responses try to set a cookie.

What I'd really like to do is call a do-over. Reload the page and have Firefox ask me all the cookie questions again. If I miscalculate which cookies are actually necessary to use the site, I just call another do-over.

This was much easier to explain once I realized that the right word was "do-over". Thanks, schoolyard fecklessness.

[Comments] (5) : I've always liked the phrase "could care less" even though it's one of those maddening phrases that means the same as its opposite, "couldn't care less". I like the self-referential connotations that make it mean the same as its opposite: "I'm too apathetic about this to even put in the effort to not care about it!"

Game Roundup: DS Homebrew Edition: I'm kind of disappointed with the state of DS homebrew games. Admittedly the DS is hostile territory for homebrew, which limits the audience somewhat. It reminds me of the state of Linux gaming in the early 2000s, which led me to start doing the Games Roundups in the first place, and the underlying reason in both cases is probably the small audience. In fact, some of the same games from back then are showing up on the DS, like DSBill, which probably works better on the DS than XBill does with the mouse.

Anyway, I went through the games section of ndshb.com, or another site much like it, dumped a bunch of games on my mini-SD card, and played them (or tried to) over the course of several plane flights. I'm pretty sure it wasn't ndshb.com, actually, which means I could do another one of these. Anyway, here are the games I was 1) able to get working and 2) entertained by. My list includes helpful hints like WHAT THE POINT OF THE GAME IS, a tidbit sometimes omitted from official game documentation.

A second round of reviews coming eventually.

Leonard's Web Service Maturity Heuristic: In my QCon talk (video coming eventually) I told three stories, This American Life-style. This weblog entry summarizes the third story.

By now it's a cliché to observe that allegedly "RESTful" web services like Amazon's SimpleDB and the Flickr web service and the del.icio.us web service aren't really RESTful. But there's something about them that makes their creators distinguish them from SOAP-based web services (by calling them "RESTful"), and there's something about those services that users love despite the possibility of, e.g., deleting data by accident.

Rather than say one service is more or less "RESTful" than another, attempting to quantify an easy-to-misuse term that wasn't even intended to be used in relation to web services, I've found it useful to judge web services based on how many of the web technologies they adopt. Think of the World Wide Web as having a technology stack that looks like this:

Hypermedia (ie. HTML)
HTTP
URI

In every case I know about, people build web services from the bottom of the stack. They pick some point on the stack and take the technologies below that point seriously. The technologies above that point are not considered important. They're ignored, or used to the minimum extent you can get away with and still technically be a "web service".

XML-RPC and SOAP services are at level zero. They don't take any of the web technologies seriously. They use one URI, one HTTP method, none of the interesting features of HTTP, and they have no notion of hypermedia.
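To make level zero concrete: every operation, whatever it does, becomes a POST of an XML document to a single endpoint. Python's standard library can show what that looks like on the wire; the method name `getUserData` and the parameter are made up for illustration.

```python
import xmlrpc.client

# A level-zero request body: the URI and HTTP method carry no
# information. Everything interesting is in the method name buried
# inside the document, which gets POSTed to one endpoint.
body = xmlrpc.client.dumps(("leonard",), methodname="getUserData")
print(body)
```

The output is a `<methodCall>` document; a hundred different operations would all look like this to HTTP, differing only in the body.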

URI

The web services I mentioned at the beginning of this entry are at level one. They take URIs seriously and assign a URI to every aspect of the system. But they only use one HTTP method (GET) and don't use any of the interesting features of HTTP. Nonetheless, people love these web services, because people love URIs.

HTTP
URI

Amazon S3 is at level two. It has problems (or so I've heard) with its use of HTTP, but it takes HTTP seriously. It uses URIs to designate objects in the system that can be operated on with different HTTP methods. It takes advantage of HTTP's features like conditional requests. But there's no hypermedia. All the information about how to manipulate the resources, and how to detect the connections between them, is in English documentation that you have to read when writing your custom S3 client.

Hypermedia
HTTP
URI

Most of my current thinking is hypermedia-related. Useful here is the formal definition of hypermedia, from section 4.1.3 of the Fielding dissertation: "Hypermedia is defined by the presence of application control information embedded within, or as a layer above, the presentation of information."

Services like the Web, AtomPub, the Netflix web service, and the Launchpad web service are at level three. They serve documents with embedded "application control information": links and forms that give a more or less generic client hints about how to manipulate this particular web service.
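As a sketch of what "application control information" means in practice: the document itself tells the client where to go next. Here's a minimal, made-up Atom-flavored entry (the URI is hypothetical), parsed with the standard library.

```python
import xml.etree.ElementTree as ET

# A hypermedia response: the edit link is embedded in the document,
# not described in out-of-band English documentation. A generic
# client can find and follow it without service-specific knowledge.
doc = """<entry>
  <title>An item</title>
  <link rel="edit" href="http://example.org/items/1"/>
</entry>"""
entry = ET.fromstring(doc)
edit_uri = entry.find("link[@rel='edit']").get("href")
print(edit_uri)  # http://example.org/items/1
```

A level-two service would make you read the docs to learn that URI's structure; here the connection between resources is in the representation itself.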

There are degrees of quality within these levels. For instance, big parts of the Web use URIs to name operations, rather than the objects those operations can act on. I think that's bad design, but that's a problem on level one. It doesn't negate the value of HTTP or hypermedia. Similarly, when we have the well-worn argument about which HTTP methods a web service should expose, we're having an argument on level two, not an argument about who's more RESTful. In fact, almost all the currently raging arguments are level one or level two arguments, which is unfortunate as there are some really great flamewars to be had on level three.

I don't intend to defend this classification technique in detail. It's a set of heuristics. In theory you could take URI and HTML seriously but not HTTP; or HTML and HTTP but not URI. But the heuristics are useful in that they 1) encapsulate a fact about how people tend to design web services, 2) make it easy to classify an argument or problem, 3) let you make a snap judgement of how much Web knowledge went into a web service, 4) make it easy to think about RESTfulness (an abstract meta-architectural concept) in terms of specific technologies you should know about already, technologies that are probably the reason you care about REST in the first place.

: At writing group it was proposed that someone remake Dogma as a Dogme 95 film.

[Comments] (4) Beautiful Soup Future: I've got a chunk of time off at the end of the year, not having used it earlier in the year. Among my other projects I'm going to redo Beautiful Soup. This entry is an early spelling out of my rationales and my plans.

Earlier this year I quietly retired Rubyful Soup because I think _why's hpricot does a better job of being a Rubyish screen-scraping parser than RS can be. But nothing similar has happened in Python, mainly because BS is the market leader. I want to keep that going, but I also want to take advantage of the work that's gone on in this field since 2004.

So, what are the useful features of Beautiful Soup?

  1. It can build an object model out of bad HTML.
  2. It can build an object model out of bad XML, if you tell it the rules of your XML vocabulary. (This is just the general case of #1.)
  3. It can convert almost any encoding into Unicode, usually with no explicit encoding marker to go on, or despite an incorrect one.
  4. It exposes a useful API. It's easy to learn, more Pythonic than CSS selectors or XPath, and it includes most common ways of traversing the tree.

Of these, the only one I really care about is #4. If I could rid myself of the need to handle all the edge cases in #1-#3, edge cases that have often outstripped my ability to solve them with my current tools and sanity, I'd be happy.

Fortunately, there's html5lib, which is supposed to be as good at parsing HTML as a web browser.

My current plan is to write something that goes on top of html5lib and gives the BS API to whatever DOM you've built. This would take care of #1 and #3. It's not clear to me how you tell html5lib the rules of your XML vocabulary; maybe it only parses valid XML. But BS is relatively rarely used to parse invalid XML, and if outsourcing all the HTML and Unicode crap to html5lib makes me much more inclined to hack randomly on BS, I think it's a fair trade.

html5lib already has a "beautifulsoup" tree builder, which creates a tree of Beautiful Soup objects. So in theory I would just need to maintain those objects? I'll find out soon enough.

Best Cheese Name: "Lamb Chopper"

Interestingly enough, made by the same company that makes the best cheese, Humboldt Fog.

If you believe in the superiority of some other cheese or cheese name, you know what to do.

[Comments] (2) Roy Richardson's Computer Buttons: I spent the evening taking pictures of and writing descriptions of the buttons my father got at computer trade shows in the early 1980s. Check them out for a glimpse of a time when you could advertise mainframe software with the slogan "Do It With Frequency." I've mentioned these before, but now they're available for your perusal, with explanations of what software packages they're advertising.

[Comments] (1) I Know What You Crave: More pictures! These are post-QCon pictures from San Francisco, mostly covering my afternoon at the Exploratorium with Susan McCarthy. Guest star: Rohit Khare.

By popular demand (from Susanna), I also put up pictures from Thanksgiving, including a bonus video where my niece Maggie destroys a truck.

: Sumana bought me an issue of G-Fan magazine, the periodical devoted to Godzilla and Japanese special-effects movies in general. I don't think I would subscribe to it, but one issue is fun to go through. For instance, I discovered "A Dinosaur Paleontologist's View of Godzilla." Which, in turn, told me about Mononykus, a dinosaur with one finger on each hand. Creepy!

[Comments] (2) : I can conceivably see how you might think I'd be interested in your stupid "SOA" emails, but if you put any thought at all into putting me on your mailing list it might have also occurred to you that I can't read Chinese.

How do I know your emails are stupid if I can't read Chinese? Well, 1) you're spamming me with them, and 2) they say "SOA." Yeah, I'm a regular Chinese Room.

: There are four Beautiful Soup-related tasks that ought to happen in the near future.

  1. Convert the codebase to Python 3; or rather, convert it to Python 2 that can be automatically converted to Python 3.
  2. Aaron DeVore has a number of interesting additions to the API. They need to be added.
  3. Separate the tree-walker from the tree-builder. Start getting out of the business of writing tree-builders.
  4. Simplify the API.

Aaron is doing the integration work. I'm doing the conversion. Once that's done we'll have a big soup that people can use into the distant future if they don't like what I do in steps 3 and 4, or if I give the whole thing up, which is a distant possibility.

Step 3 is still a mystery. Apparently html5lib is even slower than SGMLParser, so a generic, parser-agnostic interface is probably the way to go. Maybe something with bindings to lxml and html5lib (this is why you may not like what I do in this step and may want to stick with the all-Python, not-as-slow-as-html5lib version).

What's going to happen in step 4? Ian Bicking wrote an appreciation of lxml that ties in with my Beautiful Soup angst. Mostly to do with the tree-builder, but also rightly bashing BS's primitive CSS matcher. Implementing CSS selectors or XPath is another business I don't want to be in, but I wouldn't mind bundling with someone else's strategies for walking the tree according to CSS selectors or XPath.

I'm not terribly motivated to make these changes because I don't really use Beautiful Soup anymore. Partly because I don't do as much non-work programming as I did pre-Canonical, and partly because the sites I used to screen-scrape back in 2004 have wised up and developed web services or syndication feeds. Redoing the library doesn't feel like a fun use of my end-of-year vacation; I'd rather write stories.

But here's where I start when I think about the changes. For me, the core users of Beautiful Soup are the total newbies, people for whom BS is their first or second Python library. I've never made a secret of the fact that BS is slower than other parsers, and although coupling it with a C tree-builder would speed it up a lot, I'm totally serious when I say the point of BS is to save programmer time at the expense of processor time. If you need speed, you've got options. My overriding concern is people who've just realized they can get the information they need off a web site by writing a computer program.

Unfortunately, about 30% of those people have some specific need that goes beyond the simple API you see in Beautiful Soup 1.x, and over time those additional ways of searching got stuck into the API, and you get the Microsoft Word problem. The complexity of the API is now itself costing programmer time. It needs to stop. (But after I get Aaron's additions in so that I'll have more raw data to do my redesign with.)

So what I'd like to try is a stripped-down Beautiful Soup based on list comprehensions, with a little bit of syntactic sugar for total newbies. Matching is done by calling a factory function that returns the equivalent of a SoupStrainer. The factory functions contain most of what's currently in searchTag. Additional factory functions can implement CSS selectors or XPath, except at this point you should be able to just use a parser that has its own support for CSS selectors or XPath.
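A rough sketch of the idea (all names here are hypothetical, not the real Beautiful Soup API): factory functions return predicates, the SoupStrainer-equivalents, and searching is just a list comprehension over tags.

```python
class Tag:
    """Stand-in for a parse-tree node; the real tree would come
    from a parser."""
    def __init__(self, name, attrs=None):
        self.name = name
        self.attrs = attrs or {}

# Factory functions containing the matching logic that currently
# lives in searchTag. Each returns a predicate.
def named(name):
    return lambda tag: tag.name == name

def with_attr(key, value):
    return lambda tag: tag.attrs.get(key) == value

tags = [Tag("a", {"href": "/home"}), Tag("p"), Tag("a", {"href": "/away"})]
links = [t for t in tags if named("a")(t)]
home_links = [t for t in tags if named("a")(t) and with_attr("href", "/home")(t)]
```

The syntactic sugar for newbies would wrap exactly this; a CSS-selector or XPath factory would just be another predicate source.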

Don't even get me started on what I need to do to NewsBruiser eventually...

: I'm someone who's always felt that the mediocre Coen Bros. movie "Burn After Reading" should have been called "Burn Before Reading." At least that way there'd be a joke in it. But I'm pleased to discover that the "burn before reading" joke shows up in Monday Begins on Saturday by another set of brothers, the brothers Strugatski.

I looked over the bare junk-laden room with shards of strange models and fragments of unprofessional drawings, paused by the door to poke my shoe at the folder bearing the smudged legend Absolutely Secret. Burn Before Reading, and went on.

It's an excellent book full of the deadpan fantastic I love so much. I got it from Susan McCarthy, who keeps books in her attic and puts corks from wine bottles on the deadly nails sticking through the attic roof. "One day anthropologists will come up here," she says, "and they'll say 'I guess she came up here and got drunk and read.'"

[Comments] (1) : When we were in college Kris introduced me to the work of Jack Horkheimer, the Star Hustler. This was before online video, so I didn't see any of Horkheimer's actual videos. But I learned enough that we could come up with a skit in which Jack Horkheimer has to repeatedly rename his "Star Hustler" show because an interlocutor insists on interpreting the name of his show as being prostitution-related.

"Star Hustler" did in fact change to "Star Gazer" in the late 90s, for pretty much the reason given in the skit. Anyway, Jack Horkheimer was a stock character for us along the lines of this 1997 Onion story, but the other day I saw on clickolinko "The Many Phases of Jack Horkheimer", a riveting story from the 1982 Miami Herald that will, if not shaming you into stopping making dumb jokes about Jack Horkheimer, at least give his character incredible new depth.

I can't improve on what Kris says about that article: "[T]hat is urban legend stuff. I thought it was a Chuck Norris-type gag at first. The dude kept looking up longer than I would have."

[Comments] (2) Florilegium:

A single proposition isn't a theory, it's a slogan; and what some philosophers do isn't theorizing, it's slogan-honing. What is this labor for? What confusion would be dissipated, what advances in outlook would be created, by success in this endeavor? Do you really need something to print on your T-shirt?

--Daniel Dennett, Consciousness Explained

[Comments] (5) Where Is It? A Continuing Series: We watched a Sesame Street video in which the Sesame Street subway station was visible. The lines accessible from that station are the 1, 2, A, and B. I say this puts Sesame Street in the West Village around 4th street. The Muppet wiki has more.

[Comments] (4) How Strange Is The Loop?: I borrowed I Am a Strange Loop from Evan and read it after reading Consciousness Explained. As you might expect I agree with most of what Hofstadter says in the book, but there's one big thing I don't think he ever makes an argument for.

Nonlocalized consciousness (not the real term) makes it legitimate to think of other people as existing inside you to the extent that you've internalized their mental processes. Which is always really not very much at all, but better than nothing. One implication of this is that fictional characters can exist inside you in the same sense. That seems reasonable to me. Fictional thoughts are thoughts.

But Hofstadter says that you can (and do) actually internalize another person's consciousness on a coarse-grained level: the "strange loop" that drives that person's consciousness. And so not only do (your copies of) other people and fictional characters exist inside you, they are actually conscious inside you, because their strange loop is running on your hardware.

Stipulating the strange loop thing, I guess fictional characters could be called conscious if you simulated their thought processes in enough detail, but is that really what's happening? Do you ever know enough about someone else's thought process to internalize their consciousness, a thing so inaccessible that the person themself can't describe it? When I make up a character I have, in theory, complete control over their mental states, but I don't create a strange loop for them. I use my own, preexisting consciousness to simulate them. That's different. I'm pretending to be them, with greater or lesser success.

I'm not as old as Hofstadter, so I don't have as much practice, but I've known Sumana for about eight years and I do have an internalized Sumana that acts kind of like the real Sumana. But I wouldn't say I've internalized a copy of Sumana's consciousness, her sleep number strange loop. I'm using my own strange loop as the simulation engine.

So my intuition is that my Sumana-symbol, my symbols for dead people I used to know, and my symbols for fictional characters are the same kind of symbols as I keep for other complex entities like the World Wide Web. Not the kind that forms an "I". I can be surprised by something one of my fictional characters does, but it's the same kind of surprise as when I come up with an idea some other way. I don't see where Hofstadter argues that our representations of other people have this unusual property, but a lot of the book assumes it.

I'm explicitly not saying that mental simulations of consciousness would not be conscious. They would be. And I could believe that someone with, e.g., multiple personality disorder had multiple strange loops in their mind. I just don't think that's what happens when we think about a dead person we used to know.

Everybody Loves Dirt Candy: A few days ago we had dinner at Dirt Candy, a tiny fancy restaurant in the East Village. It was great! And instead of being a horrible Flash monstrosity, the restaurant's website is a weblog! Where owner Amanda Cohen talks about the dishes she invented and links to BoogaBooga.

We had a great time with the spinach soup, mushroom cube, crispy tofu, popcorn pudding, etc. And it'll probably get even better once they have gas. I guess waiting for the gas to get turned on is the restaurant equivalent of waiting for Internet access after you move.

Beautiful Soup Progress: I spent some time today trying to get BS in shape to run under Python 3. Here's the branch I'm working on.

sgmllib doesn't exist in Python 3, so I switched to HTMLParser, which has gotten a lot better at parsing bad HTML. With my hacks in place, only 3 of my unit tests pass under sgmllib but fail under HTMLParser. That's acceptable given that my switching to HTMLParser creates part of the framework I'll use so that you can write a plugin for lxml, html5lib (not as slow as I'd thought), or some other parser. Eventually I'll get rid of the HTMLParser plugin, or at least strip it down so that it doesn't know anything about HTML, making my life easier.

What's left is some minor syntax problems and some huge problems dealing with the way strings work in Python 3 as they go in and out of encodings. At this point I need to stop hacking on BS and do some experiments to get a good understanding of the string changes.

Best Of Bookmarks, January-February: You may or may not know that Sumana and I keep a pile o' bookmarks on del.icio.us (actually in my personal notebook and mirrored to del.icio.us, but close enough). I don't put them on the front page of Crummy with a fancy Javascript widget or anything, because it is part of my personal notebook and I often use it to talk to myself in an invented language while doing research for something I'm writing.

But I don't mind you looking at our bookmarks, so long as you don't consider it a "publication" of mine, and there's so much interesting stuff that didn't make it into NYCB that I thought I'd do a series of posts highlighting the most interesting bookmarks from 2008 that are still interesting and whose links still work. Here are the best links from January and February of 2008. I'll post one of these a day.

Best of Bookmarks, March-April: The image is from the Myspace page of Mike Howard (left), a friend of mine from Arvin High, taken during his tour of duty in Iraq. There were other, less ridiculous photos, but why choose one of those? I'd like to get back in touch with Mike but apparently not badly enough to create a Myspace account. Mike, if you see this, send me email.

[Comments] (1) Leonard Nitpicks The Christmas Songs:

A child, a child shivers in the cold
Let us bring him silver and gold

How about bringing him a blanket?

Beautiful Soup Progress #2: Another glorious vacation day squandered porting Beautiful Soup to Python 3 for you ungrateful sods! I have a script that runs 2to3-3.0 on the core codebase and applies a little patch of my own, and I've used it to fix almost all of the Unicode problems. We've still got some kind of problem with the search mechanism, and some problems with HTMLParser (?) differences involving how HTML entities and self-closing tags are handled between Python 2 and Python 3. I'm down to 15 failing tests in the converted code, without breaking any tests in the Python 2 version.

I think a couple people were confused by my earlier statement that you'd be able to "write a plugin for lxml [or] html5lib." I'm talking about using another parser to drive Beautiful Soup tree generation. Turning events generated by some other parser into a generic set of "start tag", "end tag" type events. Thus giving you an alternative to the okay-for-2004-but-not-for-2008 Beautiful Soup rules about parsing bad HTML, and eventually getting rid of those rules altogether, because I don't want to be in that business.
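In modern-Python terms, the adapter idea might look like this sketch: one concrete parser (here the standard library's HTMLParser) gets reduced to a generic event stream that a tree-builder could consume. An lxml or html5lib plugin would be another adapter emitting the same tuples.

```python
from html.parser import HTMLParser

class EventCollector(HTMLParser):
    """Adapter: turn html.parser's callbacks into a generic stream of
    ('start', ...), ('end', ...), and ('data', ...) events."""
    def __init__(self):
        super().__init__()
        self.events = []

    def handle_starttag(self, tag, attrs):
        self.events.append(("start", tag, dict(attrs)))

    def handle_endtag(self, tag):
        self.events.append(("end", tag))

    def handle_data(self, data):
        self.events.append(("data", data))

collector = EventCollector()
collector.feed("<p class='x'>Hi <b>there</b></p>")
```

A tree-builder written against the event tuples never has to know which parser produced them, which is the point: the bad-HTML heuristics live in the parser, not in Beautiful Soup.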

: I just found out that Peter Hodgson, who's been showing up in NYCB for over ten years, is actually Peter Hodgson the Younger, son of the man who made Silly Putty into a product. Peter the Younger went around propagandizing it.

In 1961, he introduced Silly Putty to thousands of excited Soviets in Gorky Park. "They went absolutely nuts when they saw it," said Peter.

: Don't forget about those last-minute gifts. (originally from 2004)

Christmas Chiptunes: This year everyone is a-twitter (and a-Twitter) about 8-Bit Jesus, the excellent album of Christmas carols done in the style of NES games, an album that has doubled in size since the last time I looked at its webpage. But if you can't bear to listen to music not synthesized by a 6502, it's not your only option.

A few years ago the twittering was about The 8bits of Christmas (that really does seem to be the best way to link to it; scroll down to "8bp038"). And this guy puts out a Christmas album every year. There's also this "8 Bit XMAS 2008" which actually comes on an NES cart. If you ask me the first two albums I linked to are the best, but nobody asks me these things.

Sumana listened to some of the music and said, "You won Christmas!"

[Comments] (7) Thoughtcrime Experiments:

As my Christmas present to the Internet I'm soliciting submissions for a new speculative fiction anthology, Thoughtcrime Experiments. Sumana will read slush, I'll select and edit the stories, and we'll publish online under a Creative Commons license.

If you're a spec-fic writer, and there's some story that people you've shown it to have liked, but that you've been unable to find a publisher for, send it to me. If I like the story enough to spend $200 on it, I'll buy it for $200. I hope to buy five stories. Details at the anthology page. Rationale follows.

I've been in a writing group for over a year, and read about thirty stories and a novel because of it. I've mentioned before that half the stories are "a rewrite away from publication quality". What does that mean, really? It means I could easily imagine reading those fifteen stories in a magazine or anthology. But of those fifteen stories, there are only two I want to buy for three cents a word and publish myself.[0] It's different when it's your own money.

Thoughtcrime Experiments is an experiment with my own money. How hard is it to find five stories that I really want to buy? Given those stories, how hard is it to put together an anthology people will want to read? At the tor.com holiday meetup I cornered Patrick Nielsen Hayden and John Joseph Adams, and asked them whether my plan was feasible.

My takeaway from those conversations is that the toughest part of putting together an anthology is A) getting some "name" writers to promise to contribute stories to anchor the anthology[1], and B) getting a publishing deal.[2] Well, I don't need either of those to consider this project a success, so let's do this. Send your best unpublished story to thoughtcrime.experiments@gmail.com. I'll start promoting this anthology more in the new year, when people come back from vacation and start paying attention again, but it's open for submissions now.

I'm also going to try to write a story for John's current anthology, "Federations", and a pro market wants me to do a rewrite on a different story (yes!). So, busy vacation.

[0] I think one of those stories has sold, as has the novel.

[1] At VPXI Elizabeth Bear told the story of how she broke into the relatively-big time by filling in for a name writer who'd flaked out on an anthology promise. And then eventually found herself being the name writer and flaking out on anthology promises of her own, leaving room for other up-and-coming writers.

[2] Anthology creation, at least the way John does it, seems to go like this: you nail down the name contributors, pitch the book to a publisher using the name writers as collateral, put everything together, send out the checks, and submit the manuscript. For some reason I always imagined someone at the publisher coming up with the anthology idea and gathering the stories. But it's more like a novel that you subcontract out.

Best of Bookmarks, May-June:

[Comments] (1) Per Se: In an unprecedented splurge, Sumana and I are eating at Per Se on Sunday. Or as the domain name calls it, Perseny. Given what happened to Ed Levine when he ate at Per Se after comparing its cost on his weblog to Gray's Papaya (follow-up), I can only hope that I'll be presented with a genius dish based on Pac-Man, or a Beautiful Soup-themed soup. I'll take pictures.

Beautiful Soup Progress Report #3: OK, phase one is almost complete. There's just one test failure left in the generated Python 3 version, and I don't think it can be fixed; HTMLParser is just different between 2 and 3.

I haven't found a way in Python 2 code to indicate that a string should be converted into a byte string when the conversion script runs. In some cases I can stick .encode() onto the end and it works in both 2 and 3, but some of my tests have random binary data that's not in any particular encoding. And in some cases calling .encode() is just ugly. Kind of frustrating because about 40% of my test failures ultimately boiled down to marking such-and-such a string as a byte string. So I'd appreciate any ideas.
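One idea, assuming the codebase can require Python 2.6 or later: b"" literals exist there (as a synonym for str), and 2to3 passes them through unchanged, so the same source means a byte string in Python 3 with no .encode() noise. That works even for random binary data that isn't in any particular encoding:

```python
# Valid syntax in Python 2.6+ and in Python 3, and 2to3 leaves it
# alone: a str in 2.x, a bytes object in 3.x.
raw = b"\x00\xfe\xff"

# Slice rather than index: in Python 3, raw[0] is an int, but a
# one-element slice is bytes in both versions.
first = raw[0:1]
```

Tests with embedded binary data could then be written once and converted mechanically, instead of hand-marking strings after each conversion run.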

[Comments] (1) Best of Bookmarks, July-August:

Beautiful Soup 3.1.0: It's out. It's not useful unless you need to use Beautiful Soup with Python 3. But now I'm free to try my parser replacement experiments, which are at least more interesting than screwing around with bytestrings.

: While I'm at it I should mention there's now a Squeak port of Beautiful Soup.

[Comments] (1) Best of Bookmarks: September-October: I didn't really post any links for September due to travel, so I've made up for it by explaining these best-of links in more detail than usual.

[Comments] (1) Best of Bookmarks: November-December: At this point we're close enough to the present that I could just write weblog entries about these, but I press on. Happy 2008!

[Comments] (2) Maraudering Beatnicks: I put up a text dump from one of my favorite old PC games, Flightmare. Just because I felt like doing it. It was a fun game and its writing covers the spectrum of game humor: intentionally funny jokes and puns, over-the-top writing that you're not sure if it's supposed to be funny, and hilarious spelling errors.

[Comments] (6) Best Of Weblogs 2008: Keep looking back! This will probably be the last 2008 retrospective unless I decide to go through the books I read in 2008 or whatever. These are the syndication feeds I'm happiest I subscribed to in 2008. Caution: may contain nepotism.

[Comments] (2) : Sumana has decided that the opposite of fanfic is "slamfic".

[Comments] (2) Squeezing Out The Tube Of 2008: I read 44 books in 2008, about half of what I read in 2007. Kind of depressing given I've got about 130 unread books, enough to keep me busy for the next three years at this rate. On the other hand, I wrote six short stories, compared to two in 2007.

I'm generally dissatisfied with the quality of the books I read in 2008, so I'm only going to mention a few that stood out: The Making of the Atomic Bomb (crummy.com Book of the Year), Anathem (the only fiction on this list), Consciousness Explained, Jimmy Carter's presidential memoir, and Sumana's friend John Morearty's self-published autobiography. Runner-up award to George Saunders, I guess.

A question, a conundrum if you will, about the quality issue. I've got a number of excellent books that I haven't read because I know I'm going to want to keep them. Instead I tend to read huge books that I know I'll discard after reading, to free up the maximum space on my bookshelf. A good strategy if I had infinite time, but I'll only live so long and I want to read books with a higher expected value. So I ask you, how do I force myself to read the books I think will be super good keepers in preference to the ones I just think might be good?

One idea I just thought of is to pick out the 30 books with the lowest expected values and hide them in a box. More room on the unread-book bookshelf = less pressure. And maybe one day I can just get rid of that box and not feel like I lost anything. Any other ideas?

: Man, I can't get enough of that "2008" on the dates of these weblog entries!



Unless otherwise noted, all content licensed by Leonard Richardson
under a Creative Commons License.