Project Lifelike is a three-year project that investigates, develops, and evaluates lifelike natural computer interfaces as portals to intelligent programs. The goal is to provide a natural interface that supports realistic spoken dialogue and non-verbal cues and is capable of learning to keep its knowledge current and accurate.
Research objectives center on the development of an avatar-based interface with which the user can interact through spoken natural language combined with natural expression. Speaker-independent continuous speech input and gestural information are provided to the system in real time, and a context-based dialogue system produces a suitable response in the form of a spoken reply by the avatar, complete with realistic inflection and facial and gestural expressions.
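The interaction loop described above can be sketched in miniature. This is a toy illustration only, not the project's actual architecture: the class and field names (`UserTurn`, `DialogueSystem`, `AvatarResponse`) are hypothetical, and the dialogue policy here is a trivial stand-in for a real context-based dialogue manager.

```python
from dataclasses import dataclass, field

@dataclass
class UserTurn:
    utterance: str   # transcribed speaker-independent continuous speech
    gesture: str     # coarse non-verbal cue recognized in real time

@dataclass
class AvatarResponse:
    reply_text: str  # spoken reply rendered by the avatar
    expression: str  # accompanying facial/gestural expression

@dataclass
class DialogueSystem:
    """Toy context-based dialogue manager: accumulates dialogue history
    and selects a canned reply plus an expression for the avatar."""
    context: list = field(default_factory=list)

    def respond(self, turn: UserTurn) -> AvatarResponse:
        self.context.append(turn)  # retain history for context-based replies
        # Trivial stand-in policy: greet on a greeting, otherwise acknowledge.
        if "hello" in turn.utterance.lower():
            return AvatarResponse("Hello! How can I help you?", "smile")
        return AvatarResponse("I see. Tell me more.", "nod")

ds = DialogueSystem()
out = ds.respond(UserTurn("Hello there", "wave"))
print(out.reply_text, "|", out.expression)
```

In the real system each stage (speech recognition, gesture capture, dialogue management, avatar rendering) would run as a concurrent real-time component rather than a single synchronous call.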
Our first application of this research is to create interactive computer-generated representations of individuals in order to preserve their knowledge.

