Multimodal Interfaces

Project Activities

This project develops software libraries for incorporating multimodal input into human-computer interfaces. These libraries combine natural language processing and artificial intelligence techniques to support human-computer interaction through an intuitive mix of speech, gesture, gaze, and body motion. Interface designers can use this software for both high- and low-level understanding of multimodal input and for generating appropriate responses.

Intelligent Conversational Avatar

The purpose of this project is to develop an expert system and natural language parsing module that extract emotive expressions from textual input.
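The actual module combines an expert system with natural language parsing; as a rough illustration of the input/output contract only, the following hypothetical sketch maps emotive keywords in free text to emotion labels via a small lexicon (all names and the lexicon itself are invented for this example).

```cpp
#include <map>
#include <sstream>
#include <string>

// Hypothetical sketch, not the project's expert system: scan the text
// word by word and return the emotion label of the first keyword found
// in a small emotive lexicon, or "neutral" if none matches.
std::string classifyEmotion(const std::string& text) {
    static const std::map<std::string, std::string> lexicon = {
        {"happy", "joy"},   {"glad", "joy"},
        {"sad", "sadness"}, {"angry", "anger"}};
    std::istringstream words(text);
    std::string w;
    while (words >> w) {
        auto it = lexicon.find(w);
        if (it != lexicon.end()) return it->second;
    }
    return "neutral";
}
```

A real parser would handle negation, intensity, and multi-word expressions, which a flat keyword lookup cannot.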

GloveGRASP

GloveGRASP is a set of C++ class libraries that allow developers to add gesture recognition to their SGI applications.
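To give a flavor of what such a C++ class library might expose, here is a hypothetical sketch (not the real GloveGRASP API): a recognizer that matches a five-sensor finger-flexion reading against stored gesture templates by nearest squared Euclidean distance.

```cpp
#include <array>
#include <string>
#include <vector>

// Illustrative only: the class names, template format, and matching
// strategy below are assumptions, not GloveGRASP's actual interface.
struct GestureTemplate {
    std::string name;
    std::array<double, 5> flexion;  // one value per finger: 0 = open, 1 = closed
};

class GestureRecognizer {
    std::vector<GestureTemplate> templates_;
public:
    void addTemplate(const GestureTemplate& t) { templates_.push_back(t); }

    // Return the name of the stored template closest to the reading.
    std::string recognize(const std::array<double, 5>& reading) const {
        std::string best = "unknown";
        double bestDist = 1e9;
        for (const auto& t : templates_) {
            double d = 0;
            for (int i = 0; i < 5; ++i) {
                double diff = reading[i] - t.flexion[i];
                d += diff * diff;
            }
            if (d < bestDist) { bestDist = d; best = t.name; }
        }
        return best;
    }
};
```

Template matching of this kind handles static postures; recognizing dynamic gestures over time is where model-based approaches such as HMMs (below) come in.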

HMRS

HMRS is a project to develop a generic software package for hand motion recognition using hidden Markov models, with which user interface designers can build multimodal input systems.


Contacts

Mark Billinghurst <mark.billinghurst at hitlabnz.org>