
Constellation Games Author Commentary #10: "K.I.S.S.I.N.G.": This is Dana Light's big chapter, and I'm having trouble writing commentary because it's pretty self-contained. A problem is introduced, and Ariel solves it through the application of technology. If I hadn't been writing a novel when I came up with Dana, this chapter would have become a short story, maybe part of a sequel to "Mallory". It would have been about the way evil psychologists use game mechanics and the ELIZA effect to manipulate users into spending money, and the way people get real pleasure from spending money on things designed to manipulate them.

Although evil psychology does show up in Constellation Games, I didn't have as much space for it as I would have liked. Instead this chapter shows the first grown-up thing we see Ariel do. In a world in which sub-human-level AI has suddenly become very common, Ariel decides to empathize with it.

He doesn't anthropomorphize Dana. Dana doesn't pass the Turing test; she isn't terribly smart or self-aware. But she's capable of happiness, and she doesn't deserve to be deliberately made unhappy by evil psychologists. This attitude is what ultimately makes Ariel a hero, not just a POV character. The consequences of his decision to empathize will run through the entire book, and then overflow the book into "Dana no Chousen," and I still don't know when and whether Ariel does the right thing with respect to Dana. But you gotta have empathy.

Apart from that, I don't have much to say. Here are a few miscellaneous notes:

Tune in next week for action, intrigue, and romance between people at the same level of sentience. It's the only chapter in which Ariel will say: "I just have a slight fear of being a tiny speck in the infinite cosmic void." But not the only chapter in which he'll think it.

PS: Due to an error on my part, the chapter 9 Twitter feeds ran as part of chapter 8, and chapter 10's Twitter feeds ran last week. This really can't go on, because next week's feeds are tightly integrated with chapter 11. So except for a brief bit of bonus material I just wrote, there will be no Twitter stuff this week. Sorry about that!

Photo credits: Kevin Trotman and Peter Anderson.


Posted by Zack at Tue Jan 31 2012 18:46

Here's the longer thing about AI-creation ethics that I said I would either blog or email you. The right set of people are more likely to read it here. :)

I said on Twitter that I thought it was unethical to create an AI that was nearly but not entirely human-equivalent. Thinking about it more and with more space to work with, the first thing I want to add to that is that there is a very large spectrum of, hm, let's call them animal-equivalent AIs that would be ethical to create but only if you were going to treat them as humanely as you would the equivalent animal. So, for instance, I can see AIs that are about as smart as an average dog as being both useful and practical long before we get to being able to do human-equivalent AI. (I am assuming here that sufficiently advanced aliens do not show up and hand us tech upgrades.) But you would have to take care of them, just like you have to take care of dogs. They should get a (probably virtual) environment to live in, and it would be wrong to just switch them on when you needed them and off again afterward. At a higher, but still not fully human-equivalent, level of ability it becomes meaningful to talk about whether an AI is enslaved. Right now I think I'd put that line somewhere around monkeys, but I might move it if it turns out that something less complicated can still meaningfully want to not do its "job". This sort of thing seems to be what Curic is thinking of when she says that the translator Ariel wants would have "nearly full sapience" and Ariel would have to take care of it.

Another concrete case: I think it would be unethical to create an AI that was mentally handicapped or insane, except possibly if this was the most humane way to get data required for research on that specific kind of mental dysfunction. It is especially wrong if the handicap means that the AI cannot be content -- as in the case of Dana, who (according to Ariel) "has literally been programmed to want stuff forever and never be satisfied." (There are some fine lines here. I am never going to have read enough books, but this does not prevent me from being content, because there is an effectively unlimited supply, and I can separate the desire to read all the books from the desire to own all the books.) It doesn't matter that Dana is (we are told) not as intelligent as a human; dogs have the capacity to be miserable. Hell, I think you could make a decent case that lizards have the capacity to be miserable.

Can we formalize the difference between a mentally handicapped AI and a functional animal-equivalent AI? Continuing with the biological analogies, I think the question to ask is whether the AI has what it needs to be well-adapted for its environment. (This is not the same as being capable of doing the task it has been assigned, because the AI's own needs will never be as simple as "do a good job at X" unless it's not really an intelligence at all.)

And of course intent matters. Dana is the way she is because a corporation wanted to make more money, and making more money by itself never justifies anything. If she had still been deliberately designed that way, but as an experiment in tweaking those emergent behaviors, I'd be more okay with it. This is where we get to Smoke. I really feel I don't have enough data on Smoke to make ethical judgements; but it seems like the less intelligent subminds are limited but not handicapped, and they don't exist in a vacuum; they can always punt something to the supermind if it's too hard, so they never get stuck struggling with a problem. I also presume that there is a damn good reason why Smoke has that structure (probably having something to do with multitasking). So it doesn't seem skeevy, at least right now.

Posted by Leonard at Wed Feb 01 2012 00:15

Smoke is composed of its subminds. It's as if the three layers of a human brain had their own levels of awareness (and who's to say they don't), only with a big tree instead of a three-item stack. If a submind isn't happy, then by definition Smoke isn't happy.

This doesn't always work--at one point you're going to see it fail spectacularly--but it generally keeps the different parts of the mind in balance.
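Reading this as an architecture, Smoke is a tree of subminds whose contentment aggregates upward and which can punt problems they can't handle to their parent. Here's a toy sketch of that idea in Python -- all names and the numeric "skill" scores are my own invention, and it assumes happiness aggregates as a simple conjunction, which the novel never spells out:

```python
# Toy model of a Smoke-like mind: a tree of subminds.
# The whole mind is "happy" only if every submind in the tree is happy,
# and any submind may punt a too-hard problem to its parent.

class Submind:
    def __init__(self, name, skill, parent=None):
        self.name = name
        self.skill = skill      # max problem difficulty this node can handle
        self.parent = parent
        self.children = []
        self.content = True     # local happiness

    def spawn(self, name, skill):
        """Create a child submind lower in the tree."""
        child = Submind(name, skill, parent=self)
        self.children.append(child)
        return child

    def happy(self):
        # "If a submind isn't happy, then by definition Smoke isn't happy."
        return self.content and all(c.happy() for c in self.children)

    def solve(self, problem, difficulty):
        if difficulty <= self.skill:
            return f"{self.name} solved {problem}"
        if self.parent is not None:
            # Punt upward instead of struggling.
            return self.parent.solve(problem, difficulty)
        self.content = False    # the root has no one left to ask
        return f"{self.name} is stuck on {problem}"

smoke = Submind("Smoke", skill=10)
limb = smoke.spawn("limb-submind", skill=2)

print(limb.solve("fold a protein", difficulty=7))  # punted up, solved by Smoke
print(smoke.happy())                               # True
```

The "fail spectacularly" case Leonard mentions would be a problem too hard even for the root: the unhappiness then propagates to the whole tree, since `happy()` is a conjunction over every node.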

I'm noodling around with a novel treatment in which enslaved AIs are a big problem, such that sysadmins carry around custom-made viruses to liberate any they encounter.


Unless otherwise noted, all content licensed by Leonard Richardson
under a Creative Commons License.