Versu Relaunched

Back in February of 2014, Emily Short announced that Linden Lab was no longer supporting Versu. The future of this interactive storytelling tool looked grim.

Yesterday, Emily announced on the new Versu site that she and the other creators of the technology have reached an agreement with Linden Lab.

After Versu’s cancellation, it looked for a long time as though neither the underlying technology nor the finished stories had a future. However, we are delighted to be able to announce that Linden Lab has negotiated a new arrangement that will allow us to release these stories and explore a future for the engine.

You can read the post in its entirety here.


11 Responses to Versu Relaunched

  1. Gene says:

    This is great news. Really happy to hear it. Hope the engine is made public at some point for us all to investigate.

  2. crawfordchris says:

    This is great news, and I am eager to see what their work looks like. I’ll be grabbing up a copy of Blood & Laurels. I’ll post my reactions to it. You can expect my usual flinty-eyed reaction.

  3. Joseph Limbaugh says:


  4. Alex Vostrov says:

    This is amazingly good news. I will be grabbing a copy of Blood & Laurels next Thursday.

    After Versu went into a coma, I asked Emily about how it worked and she pointed out some papers that Richard Evans wrote. Having read them, I think that Versu gets one thing amazingly right.

    If we’re going to make bits of code feel like “real characters”, we’re going to need to describe them in the same way that people think – in terms of beliefs/desires/rationality. The great thing about Versu is that you can tell the engine: Bob wants to go on a date with Betty. Betty wants Joe to fall in love with her. The engine can then figure out what actions need to be taken to fulfil those desires.

    My hunch is that the belief/desire/rationality triad is a key concept and Versu certainly has the right idea about it.
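A minimal sketch of this desire-driven selection, with every name invented for illustration (this is not Versu's actual API):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Desire:
    who: str
    wants: str          # a proposition the character wants to become true

desires = [
    Desire("Bob", "Bob is dating Betty"),
    Desire("Betty", "Joe loves Betty"),
]

# Each action advertises which proposition it can make true; the engine
# matches actions to desires instead of the author scripting each choice.
actions = {
    "ask Betty out": "Bob is dating Betty",
    "flirt with Joe": "Joe loves Betty",
}

def actions_for(character, desires, actions):
    """Return the actions that further this character's desires."""
    wanted = {d.wants for d in desires if d.who == character}
    return [name for name, outcome in actions.items() if outcome in wanted]

print(actions_for("Bob", desires, actions))   # ['ask Betty out']
```

The author declares what characters want; matching actions to those desires is the engine's job.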

    • oneconch says:

      This is an interesting idea. Beliefs and desires are self-explanatory, but what do you mean by the rationality axis?

      • Alex Vostrov says:

        Simply the fact that actions have to be connected rationally to beliefs/desires. That is, they have to further those desires in light of the beliefs. If they aren’t connected, then we can’t predict what people will do. That’s why irrational behaviour makes people very, very uncomfortable in social settings.

        You might want to check this out:

      • Alex Vostrov says:

        Let me add a concrete example of why rationality is an important concept. Storytron was structured in such a way that authors had to code choice (inclination) scripts from scratch every time. The author would have to consider what in-world variables and relationships were relevant to the choice and to invent the appropriate formula to factor all of that in.

        In contrast, Versu models rationality as a single-ply search in the world-state. Imagine that Daniel is enraged at Sam and desires to kill him (perhaps as a part of a vendetta). Versu would simulate the outcomes of all actions that Daniel could do – maybe shooting Sam is one possible verb. Versu would see that shooting Sam would kill him, satisfying the desire for Sam to die. So, the character would give priority to the “shoot” action.

        The point is that Versu has rationality baked in, while Storytron put that work on the shoulders of the author. Rationality is a fundamental human trait – why should authors have to re-invent it over and over again? That’s like Shakespeare having to work out that Hamlet has a liver in each scene.

        Now, is Versu the ultimate example of what we need? No – single-ply search can be quite stupid. Real people are capable of stunning leaps of logic, intuition, inference and creativity. Just think of how amazingly well we can detect lies. That’s the direction we need to head in, but it’s a very steep climb. Still, I believe that character rationality is something that should be a core concept for a storytelling system.
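The single-ply search described above could be sketched roughly like this – all names are invented for illustration, and the real engine is far richer:

```python
from dataclasses import dataclass

@dataclass
class World:
    alive: set

@dataclass
class Action:
    name: str
    effect: object          # a function World -> World

    def simulate(self, world):
        # One ply deep: return the world as it would be after this action.
        return self.effect(world)

def choose_action(world, actions, desires):
    """Simulate every available action and pick the one whose outcome
    best satisfies the character's desires (a single-ply search)."""
    def score(w):
        return sum(d(w) for d in desires)
    return max(actions, key=lambda a: score(a.simulate(world)))

# Daniel's vendetta: he desires a world in which Sam is dead.
world = World(alive={"Daniel", "Sam"})
shoot_sam = Action("shoot Sam", lambda w: World(alive=w.alive - {"Sam"}))
small_talk = Action("make small talk", lambda w: World(alive=set(w.alive)))
desires = [lambda w: 1.0 if "Sam" not in w.alive else 0.0]

print(choose_action(world, [shoot_sam, small_talk], desires).name)  # shoot Sam
```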

  5. crawfordchris says:

    Alex, I have serious problems with the system you describe as “rational”. The AI people have a number of names for it, including “intentionality”. The concept is fundamentally flavored by AI. It is derived from the same technologies that guide the Mars rovers across the planet’s surface.
    Let’s take the case of Daniel considering his options with respect to Sam. In fact, let’s consider an entire ensemble of cases of people who are contemplating actions with respect to other people.
    In drama, one actor could easily shoot another actor without intending to, at least in the rational sense. That is, somebody could be so angry that they’d shoot, and then realize that they didn’t really mean to kill. Other people might have a very good reason to kill, but balk at the last minute when they realize the magnitude of their action.
    In other words, in drama, *nobody plans anything*, in the rational sense of the word. Much of what makes drama interesting is the interplay of emotions.
    There’s a beautiful example of this in one of my favorite movies, Brave. Mom (Elinor) confronts daughter (Merida) over her rebellious behavior. They’re both quite angry and exchange hot words. Merida is so angry that she grabs a sword and, to make a point, slashes a tapestry showing the family, separating her image on the tapestry from her mother’s. Elinor is shocked beyond words and suddenly grabs Merida’s beloved bow and hurls it into the fire. Merida is overwhelmed with grief and runs away. Elinor instantly regrets her action and tries to fetch the bow out of the fire.
    I maintain that the type of rationality handled well by these AI systems is useless in handling the incident in Brave. Neither actor in Brave “intended” to carry out their hurtful actions, yet they did.
    Indeed, I’ll go so far as to say that this kind of AI is useless for interactive storytelling. I realize that’s a big claim, and I’m not sure that I haven’t overstepped, but I really do have a problem figuring out a situation in which it would truly add to the narrative.
    Well, I suppose that this kind of situation might be usefully served by an intentionality engine: Joe wants to kill Fred surreptitiously, so he resolves to poison Fred. The intentionality engine could deduce that first he must obtain the poison, so it directs him to go to the old hag who knows about poisons. She tells him to bring her a particular herb from the forest. So now the intentionality engine tells Joe how to travel to the forest and search. Eventually he finds the herb, and the intentionality engine has him bring it back to the hag. The hag makes the poison for him, and now the intentionality engine tells him to put the poison into Fred’s wine. And so on. The thing is, these actions are not the heart of narrative. Yes, they are useful elements of a story, but they are nowhere near adequate. A story operated this way reads more like an instruction book.
    Moreover, I cannot believe that their intentionality engine goes only one ply deep. C’mon, let’s get serious here! I have been trying to come up with an anticipatory engine for years, and I was giving serious consideration to searches going ten plies deep — and that’s with many numerical calculations going on in every consideration. Surely using simple boolean logic they can go much deeper.
    I am not defending the approach used in Storytron; I agree that it is entirely too cumbersome to be useful. However, I insist that actor behavior in narrative should be driven primarily by emotional rather than logical considerations.

    • Alex Vostrov says:

      I’ve looked at the Versu paper again, and here’s what it says:

      The NPCs in Versu look at the actual consequences of an
      action when deciding what to do. When considering an action,
      they actually execute the results of the action—rather than some
      crude approximation. Then they evaluate this future world state
      with respect to their desires. Then we undo the consequences of
      the world-state.

      This sort of decision-making is broad rather than deep. It
      doesn’t look at the long-term consequences of an action—but
      it looks at all the short-term consequences. By looking at a broad
      range of features, it is able to make decisions which would typically
      only be available to long-term planners.

      You’re right that a lot of drama is driven by emotional impulses. In fact, if we were to get emotional computing up to par with human cognition, we’d have our job done.

      Here’s the problem: human emotional response actually has two parts. (1) The character responds to the situation according to their emotions – this is what you’re talking about. But (2) people’s emotions are actually formed by the anticipation of consequences.

      Bob yells at Alice and she flinches. Why? Maybe Bob has been violent in the past. On the other hand, maybe Bob is known to be “all bark and no bite”. In that case the emotion would be contempt and not fear. This is why Versu’s short-term planner works – if you bake all the emotions into the action, the experience is there already.

      I don’t really know how to do (2) – some sort of crazy inference engine? I know that we can approximate it with a planning AI. The goal is not to have every character be a Machiavellian puppeteer thinking 10 moves ahead. The goal is to have them act consistently with their motivation. That means giving them the ability to anticipate the effects of their actions.

      Anyway, all of this is theory. I’m going to make a prototype shortly to evaluate this idea and we can see if I’m right.
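The execute–evaluate–undo loop quoted from the paper could be sketched like this – mutating the real world state and then reverting it, rather than simulating on a copy. All names here are invented:

```python
def evaluate_actions(world, actions, utility):
    """Score each action by executing its real consequences on `world`,
    evaluating the resulting state, then undoing the consequences."""
    results = []
    for action in actions:
        undo = action.execute(world)       # apply consequences, keep an undo
        results.append((action, utility(world)))
        undo()                             # revert the world-state
    return results

class SetFact:
    """Toy action: sets one fact in a dict-based world state and knows
    how to undo itself."""
    def __init__(self, name, key, value):
        self.name, self.key, self.value = name, key, value

    def execute(self, world):
        missing = object()
        old = world.get(self.key, missing)
        world[self.key] = self.value
        def undo():
            if old is missing:
                del world[self.key]
            else:
                world[self.key] = old
        return undo

world = {"sam_alive": True}
shoot = SetFact("shoot Sam", "sam_alive", False)
scored = evaluate_actions(world, [shoot], lambda w: 0 if w["sam_alive"] else 1)
print(scored[0][1], world)   # 1 {'sam_alive': True}
```

Note that the world is unchanged afterwards: the action’s full consequences were evaluated, then undone.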

    • Alex Vostrov says:

      One more thing. I agree that the “Murder -> Poison -> Hag” chain is rather boring. But I can think of a much more dramatic chain:

      – I want to be Emperor of Rome, so I should kill Caesar
      – He’s rather popular, so I’ll need co-conspirators
      – Why don’t I persuade Brutus to join me?
      – I know, I’ll flatter his ego, since he’s proud
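This chain reads like simple backward chaining from a goal to its subgoals. A toy sketch, with the rules themselves made up for illustration:

```python
# Each goal maps to the subgoals that must hold before it can be achieved.
RULES = {
    "be Emperor of Rome": ["Caesar is dead"],
    "Caesar is dead": ["have co-conspirators"],         # he's too popular to kill alone
    "have co-conspirators": ["Brutus joins the plot"],
    "Brutus joins the plot": ["flatter Brutus's ego"],  # he's proud
}

def plan(goal, rules):
    """Expand a goal into an ordered list of steps, subgoals first."""
    steps = []
    for subgoal in rules.get(goal, []):
        steps += plan(subgoal, rules)
    steps.append(goal)
    return steps

print(plan("be Emperor of Rome", RULES))   # subgoals first, goal last
```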

    • Alex Vostrov says:

      Chris’s objection to intentionality – “in drama, nobody plans anything” – has been seriously gnawing on me today. It’s a really good counter-argument: a lot of drama really is driven by impulses, not plans. Does Romeo poison himself as part of a clever plan? Not quite. Does that mean that intentionality is fatally flawed? Maybe, but I think not. (By the way, intentionality is a concept from philosophy and psychology – AI just borrowed it.)

      Suppose that paramedics rescue Romeo, pump the poison out of his system and 5 years later he’s being interviewed on Oprah about his best-selling memoir “Montague and Capulet”. He’s asked “Why in the world did you drink the poison? Didn’t you have plans, dreams, family who loved you?” Would it be too incredible to hear him reply:

      “You know, when I saw my dear Juliet dead, there was nothing that I wanted more than to die myself.”

      I think that it’s a rather plausible explanation of Romeo’s motivation, and it’s still done in terms of desires and beliefs. Emotion makes us more impulsive, it changes our current desires (“When I saw his smug face looking at me, I just knew I had to punch it, even if I would lose the job.”). Emotion doesn’t necessarily invalidate the framework – it just warps it.

      The other thing to remember is that emotional explosions are the high points of most dramas. Constant argument would be draining and ultimately boring. Most of the time characters act with a reasonable level of foresight. After all, Romeo buys the poison *before* he sees Juliet’s body. It’s impulsive, but it’s also premeditated.

      In fact, a good story should both have emotional supernovas and cooler moments when people look around and say “what the heck have we done”. Thanks to Chris for helping me realize that.
