> does anyone know if with artoolkit i can create an
> avatar, an animate representation of a remote user, and make a video
> conference (the avatar should be controlled by the remote user: so it
> should talk when the remote user talks, and stuff like that...)?
If I understand correctly, you want to create an avatar on a local
machine that is controlled by a remote user. I don't know how complex
the behaviour of that avatar should be (maybe you could define precisely
what you mean by "stuff like that"), but I developed an avatar in AR
that can talk to a local user (implementing synthetic speech with lip
synchronisation, and some communication behaviour like eye, head, body,
arm and hand movements). Speech is generated from remote text input,
movements are aligned to the speech according to semantic rules (which
you can define and tune yourself) and support the meaning conveyed verbally.
This believable avatar is based on work pioneered by Cassell et al.
(formerly at the MIT Media Lab) and especially the BEAT System developed
by Hannes Vilhjálmsson. You can find my paper and additional materials at
You are welcome to have a look at the paper (section 7.2.2 onwards
might be especially interesting for you) and get back to me with
whatever questions you might have. Source code for the project is
available, but there are certain prerequisites regarding software
licenses (see section 7.3).