The Thing Growing:

Autonomous Characters in Virtual Reality Interactive Fiction


Josephine Anstey, Dave Pape, Dan Sandin
Electronic Visualization Laboratory
University of Illinois at Chicago, Chicago, IL 60607
{anstey, pape, dan}@evl.uic.edu
 

Abstract

This paper describes "The Thing Growing", a work of Interactive Fiction implemented in virtual reality, in which the user is the main protagonist and interacts with computer controlled characters. This work of fiction depends on the user's emotional investment in the story and on her relationship to a central character, the Thing.
 
 

1. Introduction

In "Disney's Aladdin: First Steps Toward Storytelling in Virtual Reality," the authors say, "We believe that the content questions are the really hard ones." [1] In early 1996, when we first contemplated creating an Interactive Fiction work in VR, the contemporary literature focused heavily on the potential of hypertext and branching narratives. Laurel's proposed Interactive Fantasy system [2] used intelligent agents to oversee plot development and interact with the user as characters. However, there seemed to be a preponderance of theory over practice. We concluded that different groups needed to experiment with specific solutions as they dealt with particular stories in a variety of computer environments, before a really useful lexicon of Interactive Fiction strategies could be developed.

"The Thing Growing" is one such experiment: an Interactive Fiction developed for VR. It has two main goals: the deployment of a virtual character with a developed personality and enough presence to engage the user believably, and the development of a story that establishes the user at its center.

In section 2, we will briefly describe other Interactive Fiction projects. In section 3 we will describe the organizing principles and implementation of "The Thing Growing" project. In section 4 we discuss evaluation, implications and future work.

2. Interactive Fiction

Interactive fiction exists in VR, in immersive installations, in computer games and on the web, but in this section we will only consider the first three. In every area a central issue is the creation of fictional characters. One strategy is to use other humans to supply this element - whether as actors or co-users. A second strategy is to create computer-controlled characters.

2.1 Immersive Installations and VR

"PLACEHOLDER" by Brenda Laurel, Rachel Strickland and Rob Tow was a "research project that explored a new paradigm for narrative action in virtual environments." [3] Two people in head-mounted displays could move about the environment interacting with each other and with four animated critters (Spider, Snake, Fish, and Crow); they could also become one of these characters. Both the critters and sound sources located in the environment told stories about themselves as the user approached. In addition, the character of a Goddess was improvised live either by Laurel or another collaborator.

At DisneyQuest two current narrative-based VR attractions are "Aladdin's Magic Carpet Ride", a head-mounted display experience, and "Hercules in the Underworld" which can be viewed in a CAVE-like system. In Aladdin you share the virtual environment with three other users whose avatars appear as monkeys on flying carpets. In Hercules four people share the CAVE-like space, each with their own joystick. There is no head tracking and each user is represented by an animated character projected in front of them. The stories of these two attractions are similar. At the beginning you collect objects - jewels and thunderbolts respectively - which you use in your quests - freeing a genie and defeating Hades.

Built in September 1996, MIT's KidsRoom [4] was an installation that immersed children in an interactive adventure. The initial space was designed like a bedroom. The children were tracked with computer vision and prompted to move about the space and accomplish tasks that moved them along a narrative path. The narrative was conveyed through sound and imagery on two video walls. The experience culminated with the children finding monsters and dancing with them.

It/I, created in 1997 by Claudio Pinhanez and Aaron Bobick, is a play with one human actor and one virtual actor controlled automatically by a computer. The play is about the "relations between mankind and technology.... It represents the technology surrounding and many times controlling us; that is, in 'It/I', the computer plays itself." [5] Although the main performances of It/I used a rehearsed human actor, at the end of the play audience members were invited to take the part of "I" and reenact a scene.

In "PLACEHOLDER" the user interacts with the environment and is told or tells stories, rather than interacting with a narrative. In the Disney productions, the narrative provides a context which effectively disguises how little control you have as the story pulls you pell-mell to its conclusion. In KidsRoom the interaction is controlled by the simple, almost linear narrative rather than controlling it - a technique which we also use in "The Thing Growing." However, character development is very limited.

Both PLACEHOLDER and It/I use human actors. A very interesting area is evolving, especially in interactive art, in which networked participants or live performers involve the audience in an event. However, this approach seems more suitable for performance than for Interactive Fiction. Unless trained as improvisers, unrehearsed participants are unlikely to create interesting fiction on the fly. Also, if you have human actors presenting the fiction - even interactively - why do you need a computer?

2.2 Narrative Based Games

In the LucasArts computer games "The Curse of Monkey Island" and "Grim Fandango", as in any plot driven fiction, a goal is established, then barriers are placed in its way. The games are divided into distinct scenes, within each one the user/protagonist has to solve problems that are, at best, completely contextualized by the narrative. Talking to other characters is a major element in both games. The user "speaks" by picking from a list of sentences. Although this interactive device is rather inelegant, there is artistry in putting hints and clues into these conversations, which put the user on the trail of both the problems and the solutions.

In both experiences the user plays the central character. As in early text-based games, the user/protagonist can pick objects up as he goes along. These objects, singly or in combination, can be used to solve problems. One criticism of these games is that they can be reduced to simply trying to use everything in your inventory on the situation/object/problem facing you. However, as objects in the inventory and situations proliferate it seems wiser to think out a possible solution. Another criticism is that sometimes the obstacles are not convincingly embedded in the plot and overcoming them merely becomes an exercise in puzzle solving.

These narrative-based games benefit from an informed audience and a lexicon of game norms which they can experiment with. They have characteristics which are applicable to immersive VR, such as the contextualization of interaction; the use of other characters to guide, provide hints and companionship; and intelligence engines running underneath that keep track of time passing, the state of the world, and what the user has done. Some of the game norms could be directly imported to VR fiction, for example the carrying around of objects in an inventory.

2.3 Computer Controlled Characters

In 1996 Pausch et al. said: "Artificially intelligent characters are an interesting concept, but it will be a long time before they are believable in any but the simplest background role." [1]

Several research teams have focused on using intelligent agents for interactive, narrative experiences. Barbara Hayes-Roth's "Virtual Theater for Children" uses complex computer characters that improvise a story under the direction of children. The characters have attitudes, moods and behaviors and their success is measured in part by "the creative surprises in a character's behavior."[6] The system is developed so that it is simple to insert different "minds" and "bodies".

Blumberg and Galyean have also built "autonomous animated creatures for interactive environments which are capable of being directed at multiple levels."[7] For the Alive project they developed an autonomous dog, Silas T. Dog, which believably interacts with the user in a 3D virtual world.

The Oz Project [8] focuses on both believable agents and interactive drama. They aim at creating highly interactive work, where the user is continuously deciding what to do and say rather than being given a small number of fixed choices. Their system also includes intelligent agents, and a story manager which subtly shapes the experience along dramatic lines. Oz worlds include "The Woggles" and "Lyotard".

In these works the user is often addressed cerebrally - invited to create fiction with the agents, to direct them, or to marvel at simulations of autonomy and personality. These systems are impressive, the agents complex and believable. However, the kinds of experiences that they have produced, for example interacting with Silas T. Dog or Lyotard the cat, do not necessarily produce nail-biting, fictional experiences for a user.

We feel that intelligent characters or systems can be used in ways that build more substantive fictional content. One key seems to be to care less about the absolute "smartness" of the agents and more about the relationship between the user and the agent. A second key may be to give the agent some power over the user!

3. The Thing Growing

"The Thing Growing" is a virtual reality Interactive Fiction. The term "fiction" covers a very large and fractious territory. However, one major trope, whatever the story elements, is a protagonist making an emotional journey. Our goal is to create a story in which the user is the main protagonist in such a journey. Our focus is the construction of the "Thing", a virtual character. The user engages at an emotional level with the Thing and its world. The project was developed in CAVE® VR at the Electronic Visualization Laboratory starting in 1997.

The impetus for "The Thing Growing" was a short story. In the story one of us (Anstey) wanted to describe a relationship that was cloying and claustrophobic but emotionally hard to escape. An immersive, interactive VR environment seemed an ideal medium to recreate the tensions and emotions of such a relationship. Someone reading a book or viewing a film or video may identify with the protagonist but in VR the relationship is more direct; the user is the protagonist.

3.1 Organizing Principles

In their 1993 article, "Dramatic Presence," Kelso, Weyhrauch and Bates imagine the beginning of an interactive fiction and say:

"This is a description of what a short segment of Interactive Drama should be like. You find yourself immersed in a fantasy world with exciting characters and the possibility of many adventures. Although you control your own direction by choosing each action you take, you are confident that your experience will be good, because a master interactive story-teller subtly controls your destiny." [9]
 
 
A fantasy world with the possibility of many adventures assumes a narrative that will branch in many directions, with a proliferation of plots, characters and scenes. Although this may one day be possible, when we were planning "The Thing Growing" we dismissed this type of branching narrative for two reasons.

First, we considered that it was too time-consuming and expensive to create a multiplicity of complex scenes in VR. Talin also makes this point in relation to the construction of "high-production-value graphical adventure" games [10]. Second, we wanted to take advantage of traditional narrative devices which garner much of their power from the control of pacing, surprise, and the construction of a rising curve of interest. Our thinking received support at the SIGGRAPH 99 panel, Fiction 2000, [11] where Andrew Glassner blasted the branching narrative strategy suggesting that too often it gives the user shallow choices which she is unlikely to care about. He suggested that, for the construction of interesting computer fiction, a contract has to be forged between the author and user. One point on that contract is that the author will control the sequencing of events, and the creation of a causal chain of action.

Therefore the narrative in "The Thing Growing" has the classical bridge structure of plays and films: act one introduces the protagonists and the goal; act two revolves around struggles to reach the goal; act three resolves those struggles. The difference in our case is that the user is one of the protagonists and in each act she is involved in interaction. The narrative as a whole is moved on either as a result of the user's actions, or by time.

Another point in Glassner's contract is that the author should have control over the psychological development of the main protagonists. Our model sharply diverged from this because we insisted that the user, whose psychological development we do not control, be one of the main characters. Our thinking here is much more in line with Jesse Schell who, on the same panel, argued that one element that keeps the user interested in a story is psychological proximity. By psychological proximity he means that our interest in an event - such as a stone hitting someone - varies incrementally if it happens to a stranger, if it happens to a friend, or if it happens to 'me'. We wanted our fiction to be something that happens to the user; we didn't want the application to simply tell her a story, we wanted it to implicate her in a chain of events.

To effect this we created a computer-controlled character, The Thing, programmed with a multiplicity of reactions, to play opposite the user. Here, in effect, we reintroduce the concept of branching - but instead of branching narrative trails, we put all that complexity into one character who has branching behaviors.

The need for such a character was also content-driven. The theme of our story is dysfunctional relationships. In the Thing we created a character who is initially lovable and loving, but becomes clinging and bossy over time. We program it to be in the user's space both physically and mentally (through the use of speech). It has more power in the VE than the user and she is forced to deal with it (or step out of the experience). Although interactive experiences are provided in each scene of the narrative, they are there mainly to facilitate the development of an emotional interaction between the user and the Thing. Human beings react emotionally to cars and computers. We assumed that they would be equally willing to react to a computer creature that itself appears emotional and directly solicits an emotional response.

Of course it would be impossible to program the Thing with a response to any possible action of the user. Nor did we believe that the experience would be more interesting if the user was completely in control. In Computers as Theater Brenda Laurel comments:

"A system in which people are encouraged to do whatever they want will probably not produce pleasant experiences." [2]
Laurel introduces the idea of constraints that can be built invisibly into the activity or experience.
"If the escape key is defined as a self-destruct mechanism, for instance, the constraint against pressing it in the course of flying a mimetic spaceship is intrinsic to the action." [2]
We used our narrative as a constraint. The construction of the Thing's personality within the narrative is one way in which we control the user. The Thing is dominating because of the exigencies of the story and constantly tells the user what to do. The user's choice at the simplest level is to do what it says or to disobey. Given that we have set up the parameters of either, it is far easier to program a response. Although the user may become annoyed at the bossiness of the Thing, she is less likely to attribute her frustration to the limits of the program than to the character she is dealing with. Similarly the narrative context limits the number of responses we need to create for the Thing. We discuss this in section 3.4.

We had seen many CAVE VR projects where a human guide was necessary to explain the interactivity and even the best way of navigating a virtual environment. We did not want the user's connection to our environment broken by any outside advice. Therefore the application has to be smart enough to give any necessary instructions. We used voice-over and speeches by the Thing or other objects in the experience to suggest courses of action for the user and to give instruction on how to manipulate the environment. For example a key that can unlock a box prompts the user by prissily saying, "You'll have to click on me, won't you."

3.2 The Narrative and Interaction

In the first act of "The Thing Growing", the user finds herself on a large plain. A voice-over prompts her to go to a shed. Inside is a box. If the user opens the box, it bursts open and the Thing leaps out. It dances around and shouts, "I'm free! You freed me! I love you!" The two protagonists, the user and the Thing are introduced and the Thing declares its interest in the user.

In the second act the Thing tells the user that it is going to teach her a special dance; a dance for the two of them. In this act the interaction is designed to be so natural as to be invisible, and involves the user's whole body rather than any interface device. The Thing demonstrates a dance step and asks the user to copy it. The Thing praises or criticizes the dance. If the user gets fed up and navigates away the Thing runs after her and coaxes, whines or threatens her into continuing to dance.

Figure 1: A user dances with the Thing in the CAVE

Act One introduces the user to the environment and familiarizes her with the wand. It also introduces the Thing. Because the user frees the Thing and is loved for doing so, the ideal user also feels warmth and a sense of satisfaction for doing good. However, in Act Two the Thing progressively reveals that it is dominating and controlling. At first it praises the user's dancing, later it begins to nit-pick and complain that the user isn't really trying. The user feels increasingly invaded by the Thing, which is always a little too close for comfort, and grows sick of it. When it finally flies into a temper and runs off, the user is relieved.

However, the relief is short-lived. Once the Thing has gone, rocks on the plain come alive and herd and stalk the user. One of them rears up and traps her. Seconds later the Thing arrives to tell the user that it will get her out from under the rock if she is nice to it. If the user shows a willingness to dance, she is released. The Thing brightly announces that now they can begin the whole dance again. Almost universally users groan when they hear this. However, as an added incentive, the Thing will now copy the user's movements and let her create some of the dance steps.

Act Three begins as lightning crackles across the plain and a god-like voice asks what is happening. A bolt of lightning cracks at the user's feet and the earth opens. She and the Thing fall into a new, darker environment and the user is immediately caged. The two are welcomed by the Thing's four cousins, but the Thing is frightened and whispers that the cousins are fanatics, furious that it and the user have been dancing a sacred dance together. Our goal in this Act is to put the Thing and the user on one side against a common enemy.

Figure 2: The cousins welcome the user and the Thing

The Thing is right to be frightened of the cousins. They beat it up and denounce it for engaging in a relationship with a meat object - the user. They toss the Thing into the cage with the user and exit mouthing dark threats. The Thing produces a gun, and it becomes the user's job to blast them out of prison and then to kill the cousins. The user is usually only too willing to run about and shoot at the cousins. They are evidently "baddies", besides which it is a moment of agency for the user who, up to now, has merely been trapped and bullied.

Finally all the cousins are killed or have escaped. The Thing and the user are alone again. But now the user has a gun. The entire piece is designed for this moment. The Thing suddenly realizes that the user could turn the gun on it. The question for the user is should she kill the Thing or not?

There are two endings, one for each alternative. However, neither allows the user to ultimately escape the trap of this clinging relationship.

3.3 Basic System

The Thing Growing was constructed for CAVE VR, and has been shown to date in CAVEs, on ImmersaDesks™, and in a Panoram curved screen display; it could also be experienced in an HMD. The user interacts with the application using a wand. The user's head and two hands and sometimes body are tracked. The user is required both to navigate through the world using the joystick on the wand, and to move arms, head and body physically.

The virtual environment was built using XP, a VR authoring toolkit based on C++ and IRIS Performer, which was designed to facilitate the construction of art applications in the CAVE by both programmers and non-programmers [12]. The toolkit handles a number of activities common to VR environments, such as assembling objects into a world, collision detection, navigation, and detecting events and passing messages in response to them. It provides a framework for extension; special, application-specific classes may be added to define new, more advanced behaviors for objects or characters. Non-programmers can use a hierarchical text file and scripting system to rapidly develop virtual scenes. These files lay out the scene, and define messages to be passed between objects.

The narrative structure was created with the text files and scripts. Timed sequences were intercut with the interactive episodes. The narrative flow as a whole was structured using triggers based on time, user proximity, or the completion of specific events. The text file serves as production manager for the story, which can therefore be easily edited and changed.
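The three trigger types that drive the narrative flow can be sketched as a simple update loop. The following Python sketch is our own illustration of the idea, not XP's actual API; all class, field, and scene names here are invented:

```python
class Trigger:
    """Fires once when its condition is met; drives the narrative forward."""
    def __init__(self, condition, on_fire):
        self.condition = condition   # callable examining the world state
        self.on_fire = on_fire       # message/action to run when it fires
        self.fired = False

    def check(self, world):
        if not self.fired and self.condition(world):
            self.fired = True
            self.on_fire(world)

# Example: the three trigger types used to structure the narrative.
world = {"clock": 0.0, "user_pos": (0.0, 0.0), "box_opened": False, "log": []}

triggers = [
    # time-based: a voice-over plays after 5 simulated seconds
    Trigger(lambda w: w["clock"] > 5.0,
            lambda w: w["log"].append("voice-over: go to the shed")),
    # proximity-based: fires when the user nears the shed, placed at (10, 10)
    Trigger(lambda w: abs(w["user_pos"][0] - 10) < 2 and abs(w["user_pos"][1] - 10) < 2,
            lambda w: w["log"].append("shed door creaks open")),
    # event-completion: fires once the box has been opened
    Trigger(lambda w: w["box_opened"],
            lambda w: w["log"].append("the Thing leaps out")),
]

def step(world, dt):
    world["clock"] += dt
    for t in triggers:
        t.check(world)

step(world, 6.0)                  # time trigger fires
world["user_pos"] = (10.5, 9.5)
step(world, 0.1)                  # proximity trigger fires
world["box_opened"] = True
step(world, 0.1)                  # event trigger fires
```

Because each trigger fires at most once, the sequence of log entries mirrors the causal chain of the story regardless of how often the loop runs.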

We extended the basic system to build the intelligence of the Thing, the main virtual character, and also to program special behavior for objects such as the rocks that chase the user.

3.4 Constructing the Virtual Character

The Thing has a body (motor component) and a brain (cognitive/perceptual component). In this case the body is composed of multi-colored translucent pyramids, however, other, more complex, body-part models could be substituted. Arms, head, body and tail are animated with motion tracking. In this case the pyramids do not connect - the life-like movement that results from the motion tracking creates a strong illusion of an autonomous being formed from a collection of primitive shapes - the illusion is not broken by parts of the body joining badly.

The Thing's voice is pre-recorded. Based on the storyboard, we recorded hundreds of phrases for its voice. Sometimes its speeches are scripted and do not vary - for example when it is freed from the box it is trapped in. But mostly it speaks in response to the user - for example when it is teaching the user to dance. For this section we recorded different versions of words of praise, encouragement, criticism, and explanation. We also recorded the different types of utterance in different moods: happy, manic, sad, and angry. Each phrase lasts a few seconds.

Figure 3: The Thing

Body movements are captured while the phrases play, building up a library of actions (Action = Movement + Phrase). In addition to the motion-captured movement for each body part, we also need to determine a movement for the body as a whole. Depending on the circumstances the Thing may move relative to the user or relative to the environment. Therefore each action also contains information about what global body movement goes with the specific body-part movement and phrase (Action = Body Part Movement + Global Movement + Phrase). If appropriate, the mood of the action is also contained in the action (Action = Body Part Movement + Global Movement + Phrase + Mood). All the actions are stored and can be accessed by the brain. The actions are added in the text file. The different elements of the actions can be assembled there, so it is very simple to modify, add or remove actions and essentially edit the Thing's behavior.
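The action tuple described above maps naturally onto a simple record type keyed by the situation it responds to. A minimal Python sketch, with field values and store names of our own invention (the actual system assembles these records from the XP text file):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    """One unit of the Thing's behavior, as assembled in the text file."""
    body_movement: str           # motion-captured clip for arms/head/body/tail
    global_movement: str         # whole-body motion, relative to user or world
    phrase: str                  # pre-recorded speech sample, a few seconds long
    mood: Optional[str] = None   # happy / manic / sad / angry, if relevant

# A store of actions keyed by the situation they respond to.
stores = {
    "user_danced_well": [
        Action("nod_clip", "stay_near_user", "That's it! Wonderful!", "happy"),
        Action("wiggle_clip", "circle_user", "You're a natural!", "manic"),
    ],
    "user_ran_away": [
        Action("slump_clip", "chase_user", "Where are you going?", "sad"),
        Action("shake_clip", "chase_user", "Come back here!", "angry"),
    ],
}

def actions_for(situation, mood=None):
    """The brain pulls candidate actions for a situation, optionally by mood."""
    acts = stores.get(situation, [])
    if mood is not None:
        acts = [a for a in acts if a.mood == mood]
    return acts

angry_chase = actions_for("user_ran_away", mood="angry")
```

Keeping the four elements in one record is what makes the text-file editing described above cheap: adding or removing a record changes the Thing's repertoire without touching the brain's selection code.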

The brain's main perceptual input is information from the tracking system about the user's body movements and use of the wand. It uses this information in conjunction with information about the state of the world and the passing of time. The main job of the brain is to select an appropriate action from its stores, according to the point in the narrative; the user's actions; and the Thing's internal state. As the program runs, the body interpolates between the end of one action and the beginning of the next, so that the movement between actions is fluid.

In order to respond quickly to changing situations, the brain has several basic strategies. Certain state changes will send it a message to interrupt its current action - the specific state change will also send an additional message to indicate which kind of action should now be picked. Otherwise the brain will complete its action, go through a series of checks on the state of the world and the user, and if none of these trigger alternative actions, follow an internal set of rules for selecting the next action.
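This two-level strategy (interrupt messages versus end-of-action checks, falling through to default rules) can be sketched as an update loop. A hedged Python sketch; the state names, rule ordering, and store contents are illustrative, not the actual implementation:

```python
import random

class Brain:
    def __init__(self):
        self.current = None          # action currently playing
        self.pending_store = None    # set by an interrupt message
        self.time_left = 0.0         # seconds remaining in the current action

    def interrupt(self, store_name):
        """A state change (e.g. the user runs away) cuts the action short
        and names the store the next action must come from."""
        self.pending_store = store_name
        self.time_left = 0.0

    def checks(self, world):
        """End-of-action checks on the state of the world and the user."""
        if world.get("user_ran_away"):
            return "user_ran_away"
        if world.get("user_danced_well"):
            return "user_danced_well"
        return None

    def update(self, world, dt, stores):
        self.time_left -= dt
        if self.time_left > 0:
            return self.current       # let the current action finish
        # interrupt takes priority, then checks, then the default rules
        store = self.pending_store or self.checks(world) or "new_dance_step"
        self.pending_store = None
        self.current = random.choice(stores[store])
        self.time_left = self.current["duration"]
        return self.current

stores = {
    "new_dance_step": [{"name": "demonstrate_step", "duration": 3.0}],
    "user_ran_away": [{"name": "chase_and_scold", "duration": 2.0}],
}
brain = Brain()
a1 = brain.update({}, 0.1, stores)   # no messages: defaults to teaching a step
brain.interrupt("user_ran_away")     # user bolts mid-demonstration
a2 = brain.update({}, 0.1, stores)   # chase behavior is picked immediately
```

The key property is that an interrupt zeroes the remaining time, so the very next update abandons the dance demonstration, while ordinary state changes wait politely for the current action to end.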

The narrative becomes a very useful tool for constraining the kind of action the brain can pick, thus simplifying the rule structure. For example when the Thing is attempting to teach the user to dance, it has a basic routine to follow. It demonstrates each part of the dance, then observes or joins the user as she copies the movement. Information on whether the user is dancing correctly is recorded so it can be accessed by the brain's checking system. The Thing may admonish, encourage or praise the user according to her behavior and its own mood. It may repeat a part of the dance that the user is doing incorrectly or it may teach another step. This routine is interrupted if the user tries to run away, and behavior is triggered to make the Thing run after the user and plead with or scold her to continue the dance. Each type of response - "user_danced_well, user_ran_away, new_dance_step" - corresponds to a store filled with possible actions. The brain can pull an action out of the store sequentially (for scripted moments in the story), randomly, or by mood.

3.5 Networked Thing and Autonomous Thing

Our intention had always been to make the Thing entirely autonomous. However, we built the Thing's body and the basic routine to teach the dance before writing the checking system that would use the tracking data from the user to judge how well they were dancing. We were also unsure how to proceed with changing the Thing's moods. Therefore for SIGGRAPH 98, as an interim step, we built a networked version of the project, which effectively gave us a "Wizard of Oz" brain. A networked user was an invisible voyeur on the scene between the Thing and an avatar of the participant. They used a menu to tell the Thing if the participant was dancing well or not, and also to control its moods (the Thing can be happy, angry, sad, or manic).

In this scenario, although the Thing had its in-built routines of behavior it was also getting help from a much more flexible intelligence system, with a wealth of fine-tuned interactive experience! More importantly, the task of building its intelligence later was greatly simplified by the observations we made during the shows. We observed both the users' reactions to the character, and our own behavior when we played Wizard of Oz.

First, we fell into a fairly standard way of altering the Thing's moods. The dancing interaction lasts for 2-3 minutes. The finale is the Thing running off in a huff. Essentially the mood changes from good to bad over time. If the user is uncooperative - refuses to dance, runs away a lot - the Thing becomes whiny or angry more quickly. Second, users had a fairly standard way of reacting to the Thing. They either tried to obey it, or refused to dance and tried to get away from it. Those that tried to dance varied widely, from people who would copy exactly to those too shy to move very freely - as the Wizard of Oz we tended to treat these alike to encourage the timid.

We built an autonomous dancing Thing based on these observations. Its mood changes based on time and the level of user co-operation. We assume any arm movement that travels more than an arbitrary minimum distance indicates an attempt to dance. We do not bother to check each dance movement separately and precisely to make sure that the user is doing a specific move. Over time the Thing becomes randomly pickier, criticizing and rejecting movements that earlier it would have praised. In response the users watch more carefully and refine their dancing. The completely autonomous dancing Thing has run successfully at the Virtuality and Interactivity show in Florence, May 1999, at SIGGRAPH 99 and at the Ars Electronica Festival in Linz, September 1999.
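The cooperation heuristic described here, counting any arm movement beyond a minimum displacement as an attempt to dance while the Thing grows randomly pickier over time, can be sketched as follows. The threshold, time scale, and function names are invented for illustration; the real system reads positions from the CAVE tracker:

```python
import random

MIN_TRAVEL = 0.3   # arbitrary minimum arm travel (meters) to count as dancing

def is_dancing(arm_positions):
    """Total distance the tracked hand travels over the sample window."""
    travel = 0.0
    for (x0, y0, z0), (x1, y1, z1) in zip(arm_positions, arm_positions[1:]):
        travel += ((x1 - x0) ** 2 + (y1 - y0) ** 2 + (z1 - z0) ** 2) ** 0.5
    return travel > MIN_TRAVEL

def judge(arm_positions, elapsed, rng=random.random):
    """Over time the Thing gets randomly pickier, rejecting movements
    it would earlier have praised."""
    if not is_dancing(arm_positions):
        return "criticize"
    pickiness = min(elapsed / 180.0, 0.5)   # rises over the ~3 minute scene
    return "criticize" if rng() < pickiness else "praise"

still = [(0.0, 1.0, 0.0)] * 5                                  # hand barely moves
waving = [(0.0, 1.0, 0.0), (0.3, 1.2, 0.0), (0.6, 1.0, 0.0)]   # broad arm sweep
```

Note that the check is deliberately coarse: it never verifies a specific step, which is exactly why treating timid and exact dancers alike (as the Wizard of Oz operators did) falls out of the design for free.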

This process of faking the intelligence of the virtual actor by substituting a human for its brain and running user tests was very valuable. Our focus was not on how clever our intelligent agent was, but on how successfully it facilitated this particular story; manipulated the user; and was accepted by the user as a co-being in the VR. User testing was the only way to check and refine our progress. The insights we gained into typical user patterns of behavior enabled us to build up the Thing's responses where it mattered most. It also led us to only build intelligence where it was needed.

4 Evaluation, Implications and Future Work

4.1 Evaluation

Since the summer of 1998 we have shown the project in a variety of venues, with audiences that vary from very sophisticated to relatively naive. At each show we evaluate both the form and content of the application by directly observing the users in the VE, and by talking to them afterwards.

Our intention is that the application should be self-explanatory: after a brief introduction to the functionality of the wand, we tell the user to start the application and listen carefully, because it will tell them what to do. All English-speaking users have been able to navigate the application and interact appropriately. The greatest problem area has been persuading people that they must use their physical bodies to dance. Some users - especially the more expert - simply want to manipulate the VE with the joystick. Users with some familiarity with a joystick but fewer preconceptions about VR will quite fluidly switch between navigating with the joystick and moving their own bodies. We have added explicit instructions from the Thing to counter this problem.

Confirmation that the Thing is a believable character comes as the users talk back to it. We have also been told that it has "presence"; that it is "oddly feminine"; that it is like a "whiny child"; and "like people we know". We have been concerned that it might appear too unthreatening and child-like since it is a very simple creature visually - but users have confirmed that they feel uncomfortable with it, it gets too close, and that it is therefore threatening. At this point we are still fine-tuning the character in order to make more users feel more ambiguous about it; currently most people are too eager to kill it. Contrary to Pausch et al., who say "We suspect that the limited believability of our first system's characters is due to low fidelity." [1], our experience is that a creature that is visually very abstract and simple can be very convincing because of its natural movement and responsive behavior.

Responses from users have led us to clarify the story-line, specifically by making it clearer that the cousins are horrified at the relationship between the Thing and the user, and by altering the timing in the section where the cousins capture the user and judge the Thing. At this point we believe that the story is conveyed successfully, and that users consider themselves part of it with a reasonable degree of free will.

4.2 Implications

We are not convinced that giving the user more control and more choice is the way to create compelling interactive fiction. Often the power and excitement of traditional fiction lies in the lack of control the protagonist has: suddenly she is propelled out of her normal world and forced into an adventure.

We think the implication of "The Thing Growing" and the other projects mentioned in this paper is that a lexicon of strategies is emerging for a strand of Interactive Fiction in which the author's control is high.

Michael Mateas writes: "Believable agents are designed to strongly express a personality, not fool the viewer into thinking they are human." [8]

However, we contend that the purpose of our character is to "fool" the user into reacting to it as if to a human, in the interests of conveying the work of fiction. Similarly we believe this strand of IF should be more concerned with ways of giving the semblance of control and choice to the user than actual control. As Brenda Laurel says:

"There is another, more rudimentary measure of interactivity: You either feel yourself to be participating in the ongoing action of the representation or you don't." [2]

The kind of fictional experience we are proposing also requires a measure of willingness on the part of the users. Many art forms require the audience's suspension of disbelief. Interactive forms may sometimes require the participants to suspend their own will: they may have to be willing to act and interact in ways that further the piece - picking up hints about the interaction, the rules, the interface - for the richest experience of the piece. This echoes the third point in Andrew Glassner's contract between author and audience: the audience must let themselves be manipulated on an intellectual, emotional and spiritual level. If they don't, the story doesn't have a chance to move them.

As examples of VR Interactive Fiction increase, and VR reaches more people, not just in laboratories but in public spaces, we anticipate that a more sophisticated audience will be ready to let itself be manipulated.

4.3 Future Work

A common problem is how to get the user's input into the interactive fiction in any complex way. The only input in our system is tracking information. In VIDEOPLACE [13], Myron Krueger explored virtually the whole range of physical interaction that can be stimulated between a user and the computer based on information about the user's position. By combining tracking information with the narrative context, we attempted to infer more about our user's state of mind and respond to it. However, more ways of sensing the user would help us develop more complex interaction. As a first step we would like to include some voice input from the user - at first perhaps only the recognition of volume and tone.
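One way to combine tracking information with narrative context is to reduce the raw tracker stream to simple judgments the story logic can act on - for example, whether the user is actually moving their body during the dancing section. The sketch below is purely illustrative and is not the paper's implementation: the class name, window size, and motion threshold are all assumptions, and it supposes head-tracker positions arrive as (x, y, z) samples at a fixed rate.

```python
# Hypothetical sketch of inferring user state from tracker data.
# Not the actual code from "The Thing Growing"; names and thresholds
# are invented for illustration.
from collections import deque
import math

class MotionDetector:
    """Keeps a sliding window of head-tracker positions and reports
    whether the total path length in the window exceeds a threshold -
    a crude proxy for "the user is dancing" vs. "standing still"."""

    def __init__(self, window=30, threshold=0.5):
        # window: number of recent samples kept (e.g. ~1s at 30 Hz)
        # threshold: metres of accumulated motion per window (assumed unit)
        self.samples = deque(maxlen=window)
        self.threshold = threshold

    def add_sample(self, pos):
        """Record one (x, y, z) tracker sample."""
        self.samples.append(pos)

    def is_dancing(self):
        """True if the path travelled within the window exceeds the threshold."""
        pts = list(self.samples)
        travelled = sum(math.dist(a, b) for a, b in zip(pts, pts[1:]))
        return travelled > self.threshold
```

In the narrative, a judgment like this would be read at the moments where it matters (the Thing asking the user to dance), so the same tracker stream yields different inferences depending on story context.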

"The Thing Growing" has served as a simple prototype for developing VRIF with computer controlled characters. This application, the story and the virtual character, were very specific, but the process of production has indicated ways to develop the system and to make generic tools for creating such VRIFs. These tools will make it easier to create and edit virtual environments, narrative sequences, interaction and virtual characters, in order to make increasingly complex fictions with multiple characters, and maybe, one day, a multiplicity of branching narrative paths.

Acknowledgments

The virtual reality research, collaborations, and outreach programs at the Electronic Visualization Laboratory (EVL) at the University of Illinois at Chicago are made possible by major funding from the National Science Foundation (NSF), awards EIA-9802090, EIA-9871058, ANI-9712283, ANI-9730202, and ACI-9418068, as well as NSF Partnerships for Advanced Computational Infrastructure (PACI) cooperative agreement ACI-9619019 to the National Computational Science Alliance. EVL also receives major funding from the US Department of Energy (DOE), awards 99ER25388 and 99ER25405, as well as support from the DOE's Accelerated Strategic Computing Initiative (ASCI) Data and Visualization Corridor program. CAVE and ImmersaDesk are trademarks of the Board of Trustees of the University of Illinois.

References

[1] R. Pausch, J. Snoddy, R. Taylor, S. Watson, and E. Haseltine, "Disney's Aladdin: First Steps Toward Storytelling in Virtual Reality," In Proceedings of SIGGRAPH '96, ACM SIGGRAPH, New Orleans, LA, Aug 4-9, 1996, pp. 193-203

[2] B. Laurel, Computers as Theatre, Addison-Wesley, 1993

[3] B. Laurel, R. Strickland, and R. Tow, "PLACEHOLDER: Landscape and Narrative in Virtual Environments," Digital Illusion, editor Clark Dodsworth Jr, ACM Press, New York, NY, 1998, pp. 181-208

[4] A. Bobick, S. Intille, J. Davis, F. Baird, C. Pinhanez, L. Campbell, Y. Ivanov, A. Schütte, A. Wilson, "The KidsRoom: A Perceptually-Based Interactive and Immersive Story Environment," Presence, Vol 8, Number 4, August 1999, pp. 367-391

[5] C. Pinhanez and A. Bobick, "It/I: Theater with an Automatic and Reactive Computer Graphics Character," SIGGRAPH '98 Conference Abstracts and Applications, ACM SIGGRAPH, Orlando, FL, Aug 1998, p. 302

[6] B. Hayes-Roth, L. Brownston, E. Sincoff, "Directed Improvisation by Computer Characters," Stanford Knowledge Systems Laboratory Report KSL-95-04, 1995.

[7] B. Blumberg, and T. Galyean, "Multi-Level Direction of Autonomous Creatures for Real-Time Virtual Environments," In Proceedings of SIGGRAPH 95, ACM SIGGRAPH, Los Angeles, CA, August 1995. pp. 47-54.

[8] M. Mateas, "An Oz-Centric Review of Interactive Drama and Believable Agents," Technical Report CMU-CS-97-156, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA. June 1997.

[9] M. Kelso, P. Weyhrauch, J. Bates, "Dramatic Presence," Presence, Vol 2, Number 1, Winter 1993, pp. 1-15

[10] Talin, "Real Interactivity in Interactive Entertainment," Digital Illusion, editor Clark Dodsworth Jr, ACM Press, New York, NY, 1998, pp. 151-159

[11] A. Glassner, C. Wong, "Fiction 2000: Technology, Tradition, and the Essence of Story," SIGGRAPH '99 Conference Abstracts and Applications, ACM SIGGRAPH, Los Angeles, CA, Aug 1999, p. 161

[12] D. Pape, T. Imai, J. Anstey, M. Roussou, T. DeFanti, "XP: An Authoring System for Immersive Art Exhibitions," In Proceedings of Fourth International Conference on Virtual Systems and Multimedia, Gifu, Japan, Nov 18-20, 1998

[13] M. Krueger, Artificial Reality II, Addison-Wesley, 1991