Sometimes the thing will do a string of actions, e.g. emerge from the box, do a freedom dance, cavort around the user thanking him/her for releasing it.
Sometimes the thing will act, then analyse the user's actions to determine what to do next. E.g. it is going to teach the user a dance: it must pause after each bit of the dance is taught, to watch the user and see if they are doing the dance. Sometimes it will dance together with the user.
Each action (dance, pause) of the thing is a discrete event, the duration of which can vary. A clock is started at the beginning of each event and the user is checked during each event. In general the user is checked to see if s/he is attentive (i.e. watching the thing), and to see if s/he is moving (how fast, away from the thing, towards the thing). Additionally the user may need checking to see if s/he is trying to do a specific step of the dance.
In this teachdance sequence, the thing will hum/sing as it dances. Counting into the dance, and establishing a tune and rhythm to set the movements to, should make it easier to monitor the user more exactly, since it will be possible to judge when they are starting and finishing (I hope).
The brain's app will have 4 stages:
get information | from the storystate, the thing's emotional state and user info
make decision | based on that info and on its own state (e.g. where it is in the teachdance sequence)
order action | movement of the entire thing; movement of the thing's body parts; sound; a specific user check
start event clock | tick tock
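The four stages above could be sketched as a loop, roughly like this Python sketch (all the stub names and return values are invented for illustration; the real brain would read tgStoryState, tgEmotions and tgCheckUser):

```python
import time

class Brain:
    """Minimal sketch of the four-stage brain loop; every name here is hypothetical."""

    def get_information(self):
        # Stage 1: gather story state, the thing's emotional state and user info (stubbed).
        return {"story": "TEACHDANCE", "emotion": 90, "user_attentive": True}

    def make_decision(self, info):
        # Stage 2: pick a next action based on the info and the brain's own state.
        return "dance1" if info["user_attentive"] else "sulkandhide"

    def order_action(self, action):
        # Stage 3: would drive global/local body moves and sound; here it just reports.
        return "ordering " + action

    def start_event_clock(self):
        # Stage 4: each event gets its own clock so the user can be checked during it.
        self.event_start = time.time()

    def run_event(self):
        info = self.get_information()
        action = self.make_decision(info)
        order = self.order_action(action)
        self.start_event_clock()
        return order

Brain().run_event()  # 'ordering dance1'
```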
The Storystate (tgStoryState)
This is where we are in the story. The storystate will keep a timer that tells how long we have been in a particular section of the story. If we need to bring about a climax, the weight of the storystate factor in the decision process will increase.
Emotional State (tgEmotions)
This is the thing's internal emotional state. It should in part depend on previous states the thing has been in, and in part be subject to random patterns and abrupt change.
User Info (tgCheckUser)
Global User Info
This event's specific User Info check
Make Decision
The app has to decide on a next action that fits the narrative as well as the emotions of the thing. It therefore has to have a sense of the kind of actions it is currently engaged in.
I propose a series of states that will correspond to the various activities:
eg EMERGEFROMBOX, TEACHDANCE, SULKANDHIDE etc etc.
The makeDecision function has to decide which localBodyMove to pick.
Each localBodyMove will have an appropriate sound attached to it.
Each localBodyMove will have several variations with different emotional weights.
So in a chooseLocalAction function we will check the thing's internal state, pick an appropriate action and assign an appropriate emotional level to the action.
***
if (state_ == TEACHDANCE)
// cycling between action (showing the dance move), observation and reaction
{
    if (substate_ == ACTION)
    {
        open the danceActionStore -> pick next action (eweight)
        substate_ = OBSERVE
    }
    else if (substate_ == OBSERVE)
    {
        open the observeStore -> pick next observation (eweight)
        substate_ = REACTION
    }
    else if (substate_ == REACTION)
    {
        if (comply) // comply flag set in getinfo
        {
            open the complyStore -> pick reaction (eweight)
            tell the appropriate actionStore to:
                throw away action
                throw away observation
                throw away reaction
            substate_ = ACTION // move on to the next dance action
        }
        else
        {
            open the failStore -> pick reaction (eweight)
            tell the appropriate actionStore to:
                throw away reaction
            substate_ = ACTION // redo the current action and observation
        }
    }
}
if (state_ == SULKANDHIDE)
{
    open the emergeActionStore -> pick next action (eweight)
}
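The action/observe/reaction cycle above can be made concrete as a small state machine. This Python sketch keeps the store names from the notes (danceActionStore, observeStore, complyStore, failStore); the step counter and return values are assumptions:

```python
class ChooseLocalAction:
    """Sketch of the TEACHDANCE substate cycle; returns (store name, sequence step)."""

    def __init__(self):
        self.substate = "ACTION"
        self.step = 0  # place in the dance sequence

    def choose(self, comply=False):
        if self.substate == "ACTION":
            self.substate = "OBSERVE"
            return ("danceActionStore", self.step)
        if self.substate == "OBSERVE":
            self.substate = "REACTION"
            return ("observeStore", self.step)
        # REACTION: react to the user, then loop back to the next action
        self.substate = "ACTION"
        if comply:
            used = self.step
            self.step += 1  # action, observation and reaction are used up
            return ("complyStore", used)
        return ("failStore", self.step)  # redo current action and observation

c = ChooseLocalAction()
c.choose()             # ('danceActionStore', 0)
c.choose()             # ('observeStore', 0)
c.choose(comply=True)  # ('complyStore', 0)
c.choose()             # ('danceActionStore', 1)  -- moved on to the next step
```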
The makeDecision function also has to pick an appropriate Global Move for the thing.
The chooseGlobalMove function will also use the states:
****
if (state_ == TEACHDANCE)
{
    if (substate_ == ACTION)
        choose stay in place, not affected by where the user goes
    if (substate_ == OBSERVE || substate_ == REACTION)
        choose moving relative to the user - chasing the user if necessary,
        catching up, staying within range etc
}
if (state_ == SULKANDHIDE)
    choose look for the nearest rock, go to it, hide under it!
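As a quick Python sketch of the same dispatch (the returned move names are made up; the states come from the notes):

```python
def choose_global_move(state, substate=None):
    """Sketch of chooseGlobalMove: map story state + substate to a global move."""
    if state == "TEACHDANCE":
        if substate == "ACTION":
            return "stay in place"        # not affected by where the user goes
        return "move relative to user"    # chase, catch up, stay within range
    if state == "SULKANDHIDE":
        return "hide under nearest rock"
    return "idle"  # hypothetical fallback for states not covered above

choose_global_move("TEACHDANCE", "OBSERVE")  # 'move relative to user'
```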
The actionStore
This will be a child of the brain. It is a list of actions that the brain can choose from. The eweight is sent to the actionStore, which stores actions both in a sequence and as variations of the action with different emotional weights. So the actionStore has to access the appropriate action by its place in the sequence and its eweight, or, in the case of the reactions, simply by the eweight.
actionStore
sequence | variations of different emotional weight
emerge | emerge, danceforjoy, suckup | emerge, danceforjoy, suckup | emerge, danceforjoy, suckup | emerge, danceforjoy, suckup
dance | dance1, dance2, dance3, dance4 | dance1, dance2, dance3, dance4 | dance1, dance2, dance3, dance4 | dance1, dance2, dance3, dance4
observe | observe1, observe2 etc | observe1, observe2 etc | observe1, observe2 etc | observe1, observe2 etc
react | react | react | react | react
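The lookup by sequence place and eweight could work like this (a sketch; the mapping of a 0-360 eweight onto four variation columns is my assumption, not specified in the notes):

```python
class ActionStore:
    """Sketch: rows[step][band] = the variation of sequence step `step` at emotional band `band`."""

    def __init__(self, rows):
        self.rows = rows

    def by_sequence(self, step, eweight):
        # map an eweight of 0..359 onto one of the variation columns for that step
        n = len(self.rows[step])
        band = min(int(eweight / 360.0 * n), n - 1)
        return self.rows[step][band]

store = ActionStore([
    ["dance1", "dance2", "dance3", "dance4"],
    ["observe1", "observe2", "observe3", "observe4"],
])
store.by_sequence(0, 100)  # 'dance2'
```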
action(name = danceofjoy, file = danceofjoy1, ...)
action(name = danceofjoy, file = danceofjoy2, ...)
Stores the entire action, comprising:
if (tag == duration)
    fill up duration_
etc
All these class members have to be accessible by the brain, and info from them passed on where appropriate:
i.e. the movements are fed to the localBodyDCSs - headMovesList to the headDCS etc. etc.
i.e. sound to sound ????
tgGlobalBodyDCS
will consist of a set of Global Moves for the body to do including:
emerging from box
running after user
staying close to user
jumping
backing away from user
swooping in on user
running to hide under nearest rock
etc
It must be aware of the user's position and head orientation.
It needs to be able to avoid the rocks.
The brain will tell it what action to take.
pretty much like the old keyframeDCS or avatarDCS
needs to take in a list of positions, orientations and scales and move through them in the given duration
when finished, needs to interpolate from the last position of the old list to the first position of the new list, using the transition time
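The transition between lists is just a timed blend. A minimal sketch, assuming simple linear interpolation of positions (the real DCS would also blend orientations and scales):

```python
def lerp(a, b, t):
    # component-wise linear interpolation between two positions
    return tuple(x + (y - x) * t for x, y in zip(a, b))

def transition_pos(old_last, new_first, elapsed, transition_time):
    # blend from the last position of the old list to the first of the new list
    t = min(elapsed / transition_time, 1.0)  # clamp so we stop at the new list
    return lerp(old_last, new_first, t)

transition_pos((0.0, 0.0, 0.0), (2.0, 4.0, 0.0), 0.5, 1.0)  # (1.0, 2.0, 0.0)
```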
needs to know when the thing is talking
flashes a head object that lights up the head, and which therefore has to be loaded under the localBodyDCS along with the head
a good extension would be for this head flasher to change colors with the thing's emotions
keeps information about the user and interprets it!
Global Info
I need the thing to flip from emotion to emotion and I'm thinking an emotion wheel may work.
Actions are given an eweight between 0 and 360.
At 0 the thing goes from manic/high to angry.
At 90 it softens from manic to happy.
I'm thinking that it would be good to visualize this for testing purposes - so that at any time we can see with a pointer where on the wheel it is.
What should move it?
It's affected by, say, its last x moods (event by event).
It's affected by the user's attitude:
too much user compliance makes it angry
too little user attention makes it angry
an impatient user makes it manic
a tentative user makes it blue (depressed)
Not really sure how complicated it needs to be to convince us of the possibility of its mood swings.
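A first pass at the wheel could be this simple. The 0 = angry / 90 = happy landmarks come from the notes; the nudge amounts per user attitude, the mood-history length, and the thresholds are all invented for illustration:

```python
from collections import deque

class EmotionWheel:
    """Sketch of the 0-360 emotion wheel; 0 = angry, 90 = happy."""

    def __init__(self):
        self.angle = 90.0
        self.moods = deque(maxlen=5)  # its last x moods, event by event

    def nudge(self, delta):
        # wrap around the wheel, so the mood can flip abruptly past 0/360
        self.angle = (self.angle + delta) % 360
        self.moods.append(self.angle)
        return self.angle

    def react_to_user(self, compliance, attention):
        delta = 0.0
        if compliance > 0.9:  # too much user compliance makes it angry
            delta -= 45.0
        if attention < 0.2:   # too little user attention makes it angry
            delta -= 45.0
        return self.nudge(delta)

w = EmotionWheel()
w.react_to_user(compliance=1.0, attention=0.1)  # 0.0 -> angry
```

The running `angle` is exactly what the testing pointer would display.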
I'm not exactly sure if this needs to be a class to itself. All it's going to do is keep global track of the state the story is in, and a timer that tells how long we have been in that state. It will also have durations for how long each state can last, and will move the action on to the next state if things are lagging.
states are:
OnThePlain
InTheShed
UserMeetsThing
TeachDance
PleasureStick
TeachDance2
ThingTantrum
RocChase
ThingReleaseUser
TeachDance3
WorldCrack
etc.
I think the user will revisit the teachdance state several times, so I make a note here that we also need to keep track of where the user has gotten to with the dancing. The Thing will pick up the teaching from where it left off.
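Putting the timer, the per-state duration, and the dance-progress note together, tgStoryState might look something like this sketch (the state names are from the list above; the duration numbers and field names are placeholders):

```python
import time

class StoryState:
    """Sketch of tgStoryState: current state, time in state, and lag check."""

    DURATIONS = {"TeachDance": 120.0, "ThingTantrum": 30.0}  # placeholder seconds

    def __init__(self, state):
        self.state = state
        self.entered = time.monotonic()  # timer: how long we've been in this state
        self.dance_progress = 0          # where the user has gotten to with the dance

    def elapsed(self):
        return time.monotonic() - self.entered

    def lagging(self):
        # true when the state has outstayed its duration and should be moved on
        return self.elapsed() > self.DURATIONS.get(self.state, float("inf"))
```

Because `dance_progress` lives in the storystate rather than in TeachDance itself, it survives across visits, so the teaching can resume where it left off.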
This is to record the motion-tracked movement of the thing. I need to be able to record four elements (head, body, two arms), then play those back and record four more (four tail pieces).
I need to work out a naming convention and a filing convention so I end up with
move_head.path
move_body.path
move_rarm.path
etc
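One possible helper for the convention (a sketch; the `<move>_<part>.path` pattern follows the examples above, the function name is made up):

```python
def path_file(move, part):
    """Build a path filename like move_head.path from a move name and a body part."""
    return "%s_%s.path" % (move, part)

[path_file("move", p) for p in ("head", "body", "rarm", "larm")]
# ['move_head.path', 'move_body.path', 'move_rarm.path', 'move_larm.path']
```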