Intelligent Conversational Avatar

Description

The purpose of this project is to develop an Expert System and Natural Language Parsing module that parse emotive expressions from textual input. We then use this information to set the graphical appearance of avatars, reducing the need to switch between messages.

Our proposed approach is as follows: the Expert System parses the text to determine the emotions the user wishes to portray, taking into account cues present in the text, such as the types of words used, contextual information, the length of the phrases typed, and the use of emoticons such as :-) or :-(.
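The cue-scanning step described above can be sketched in C++ (one of the project's stated languages). This is a minimal, hypothetical illustration, not the project's actual Expert System: the function name, keyword list, and emotion labels are assumptions, and a real rule base would weigh many more cues.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Hypothetical cue scanner: maps emoticons and keyword cues in the
// input text to a coarse emotion label. Emoticons are treated as the
// strongest cue, with keywords as a fallback.
std::string classifyEmotion(const std::string& text) {
    if (text.find(":-)") != std::string::npos) return "happy";
    if (text.find(":-(") != std::string::npos) return "sad";
    // Fall back to simple keyword cues (illustrative word list).
    static const std::vector<std::string> angryWords = {"angry", "furious", "hate"};
    for (const auto& w : angryWords)
        if (text.find(w) != std::string::npos) return "angry";
    return "neutral";
}
```

In the full system this classification would also consider context and phrase length, as listed above, rather than isolated word matches.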

Block Diagram of System
The emotional model has a number of requirements. There is an Agent entity, the person currently typing, whose text input drives the emotional display. There is a Recipient to whom the agent communicates the emotional display. There is an Expert System that characterizes the agent and his emotional states. Facts about emotional effects must be specified as simple data structures or unambiguous natural-language statements, e.g. "If one is irritated, further irritation can make one angry."
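A fact like "if one is irritated, further irritation can make one angry" can be encoded as a simple transition table. The sketch below is a hypothetical C++ rendering of that idea (the actual system expresses such facts as CLIPS rules); the state and stimulus names are illustrative.

```cpp
#include <cassert>
#include <map>
#include <string>
#include <utility>

// Hypothetical transition facts: (current state, stimulus) -> next state,
// encoding rules such as "if irritated, further irritation makes one angry".
static const std::map<std::pair<std::string, std::string>, std::string> transitions = {
    {{"neutral",   "irritation"}, "irritated"},
    {{"irritated", "irritation"}, "angry"},
    {{"angry",     "calming"},    "irritated"},
};

// Apply one stimulus; the state is unchanged if no rule fires.
std::string nextState(const std::string& state, const std::string& stimulus) {
    auto it = transitions.find({state, stimulus});
    return it != transitions.end() ? it->second : state;
}
```

Keeping the facts as data rather than hard-coded branches is what makes them easy to state unambiguously and to hand to an expert-system shell.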

To implement the emotional model we use a concise, unambiguous set of emotional categories, each with an associated degree, and a transition function for perturbing the emotional states. For this function we use the behavior of an exponential decay model (e^-t). To obtain continuous changes we apply Fuzzy Logic. The Expert System itself is implemented in CLIPS, a language developed by NASA. The face is modeled in 3D, composed of polygons, and rendered with a skin-like surface material. C, C++, and OpenGL are used, and the facial display is realized by local deformations of the polygons.
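The exponential transition function can be sketched as follows. This is a minimal C++ illustration under assumed conventions (intensity normalized to [0, 1], function names invented here), not the project's actual code.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical transition function: an emotion's intensity relaxes toward
// rest following an exponential curve (e^-t), and a stimulus perturbs it,
// with the result clamped to the normalized range [0, 1].
double decay(double intensity, double dt) {
    return intensity * std::exp(-dt);  // exponential relaxation over time dt
}

double perturb(double intensity, double stimulus) {
    double v = intensity + stimulus;
    return v < 0.0 ? 0.0 : (v > 1.0 ? 1.0 : v);  // clamp to [0, 1]
}
```

The smooth decay, combined with fuzzy membership in the emotional categories, is what yields continuous rather than abrupt changes in the avatar's facial display.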


Contact:

Jimena Olveres, jimena@hitl.washington.edu

$HITLab: avatar.html,v 1.2 2001/01/23 17:42:45 perseant Exp $