The user at the idesk2 is attached to four trackers, so that the movements of her head, two arms, and body can be seen at the other desk; the tracking information is fed to the avatar displayed on the idesk3.
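
To make the link concrete, here is a minimal Python sketch of how the four tracker poses might be streamed from one desk to the other. The address, port, packet layout, and sensor names are assumptions for illustration; this is not the piece's actual networking code.

    # Illustrative sketch only: streams the four tracker poses (head, two
    # arms, body) from idesk2 to the avatar process on idesk3 over UDP.
    import socket
    import struct

    REMOTE_DESK = ("idesk3.example.net", 9876)   # hypothetical address/port
    SENSORS = ("head", "left_arm", "right_arm", "body")

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def send_tracker_frame(poses):
        """poses maps each sensor name to (x, y, z, azimuth, elevation, roll)."""
        payload = b"".join(struct.pack("<6f", *poses[name]) for name in SENSORS)
        sock.sendto(payload, REMOTE_DESK)

    def receive_tracker_frame(data):
        """On idesk3: unpack the 24 floats and hand each pose to the avatar."""
        floats = struct.unpack("<24f", data)
        return {name: floats[i * 6:(i + 1) * 6] for i, name in enumerate(SENSORS)}

In practice a frame like this would be sent on every display update, so the remote avatar mirrors the dancer with minimal lag.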

The application starts with a rotating cube displaying the title of the piece and indicating that it is "in process". The user has to click on the cube to start; this pre-app time was used to familiarize the user with navigating and with the wand buttons.
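
One plausible shape for this start-up gate is sketched below; the three callables are hypothetical stand-ins for whatever the real wand and scene-graph API provided.

    # Illustrative sketch of the pre-app gate: the world shows only the
    # rotating title cube until the user clicks it with a wand button.
    import time

    def wait_for_start(spin_cube, wand_button_pressed, wand_points_at_cube):
        """Block until the user clicks on the title cube, then return."""
        while True:
            spin_cube()                        # keep the title cube rotating
            if wand_button_pressed() and wand_points_at_cube():
                return                         # clicked: start the piece proper
            time.sleep(1 / 60)                 # roughly one frame at 60 Hz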

When the button is clicked, the user arrives on the plain, proceeds to the shed, and releases the Thing from the box. The Thing then demands that the user dance with it, and gives her feedback depending on whether she dances well, badly, or not at all. The feedback ranges from high praise to harsh criticism.

The user at the idesk3 is at a remote location. She is a fly on the wall, secretly monitoring the interaction between the Thing and the avatar of the other user. A menu on this screen gives the following dance-feedback options: well, badly, not dancing. The menu also has mood options: happy, sad, manic, mad.

The user watches the dancing avatar, clicks the appropriate dance-feedback button, and alters the Thing's moods. The Thing uses this information to pick from its library of 'action + soundbite' pairs and in turn gives feedback to the dancer. The application ends when the Thing abruptly decides that the user is not trying to please it and runs off growling.
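
Taken together, this is a Wizard-of-Oz dispatch: the hidden operator's two menu picks index the Thing's library. The sketch below shows that lookup; the enum values come from the menus described above, while the clip names and lines are invented and the real library was far larger.

    # Illustrative sketch: (mood, dance feedback) selects an
    # 'action + soundbite' pair for the Thing to perform.
    from enum import Enum
    import random

    class Dance(Enum):
        WELL = "well"
        BADLY = "badly"
        NOT_DANCING = "not dancing"

    class Mood(Enum):
        HAPPY = "happy"
        SAD = "sad"
        MANIC = "manic"
        MAD = "mad"

    # (mood, dance feedback) -> candidate (animation clip, soundbite) pairs
    LIBRARY = {
        (Mood.HAPPY, Dance.WELL): [("twirl", "You're wonderful!")],
        (Mood.MAD, Dance.BADLY): [("stomp", "You call that dancing?")],
        (Mood.SAD, Dance.NOT_DANCING): [("slump", "Why won't you dance with me?")],
        # ... one entry per mood/feedback combination
    }

    def react(mood, feedback):
        """Pick an action + soundbite, spanning high praise to harsh criticism."""
        candidates = LIBRARY.get((mood, feedback), [("idle", "Hmm.")])
        return random.choice(candidates)

    # e.g. react(Mood.MAD, Dance.BADLY) -> ("stomp", "You call that dancing?")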

The piece we showed at SIGGRAPH was a stage in the process of building a Thing with "full" intelligence, capable of tracking and interpreting the user's actions itself. The show acted as a testbed for the Thing and provided information on audience reaction: what worked and what didn't.