Hello,
For our mobile outdoor AR project we need a general tracking solution that does not require us to place a tracking system and fiducials in the environment. Looking at the current trend, vision-based tracking that relies on the segmentation of natural landmarks seems to be the way to go.
We collaborate with several groups to try to achieve this goal. One general problem is interfacing our AR software with theirs to test the quality of the tracking. Once this is established, the conditions under which the algorithms have been tested may differ from the conditions in which the system is to be used; in that case the algorithms might have to be tuned, which requires more time and trials. Finally, the algorithms will most probably need access to a rough estimate of the position and orientation, which our AR system already provides.
To solve these problems we are investigating an interface to our AR system that would allow us to work with third-party researchers easily. There are many people out there working on vision tracking who may have very good systems but lack access to real data, or who do not want to spend time developing visualisation systems.
We would like to hear whether some of you would be interested in such an interface for testing your algorithms. Our initial ideas are the following:
-We will use our system to record video, GPS and inertial data, time-stamped, in a data file
-The interface will provide functions to query video frames, inertial data and GPS data by time stamp, so that they can be processed in non-real time
-The interface will have a "set head attitude" function that sets the attitude of the head as determined by the vision algorithm. A "get first head attitude" function will be provided to initialize the system to a well-known, initially calibrated attitude corresponding to the first video frame.
-The interface will have a display function that shows the current video frame as well as the graphical overlay of the real object being viewed. The overlay's attitude will be controlled by the "set head attitude" function
-Researchers who need access to the model of the scene the camera is looking at will be given the list of points and edges of the object, which we will have previously surveyed.
-This framework will give us a way to verify that the tracking accuracy is "good enough" for our purposes simply by checking that the "real" (video frame) matches the "virtual" (graphical overlay) reasonably closely. We could also compare different tracking algorithms to find the best match.
-This framework will facilitate communication with other researchers who are working on the vision problem but do not have access to video, GPS, inertial and model data.
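For concreteness, here is one way the interface sketched above could look. This is purely an illustration in Python; every name in it (TrackingTestbed, Attitude, SensorSample, and the method names) is our own invention for the sketch, not an existing API, and a real implementation would of course deal with actual video frames and sensor formats:

```python
# Hypothetical sketch of the proposed offline tracking interface.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Attitude:
    # Orientation as yaw/pitch/roll and position as x/y/z; units are
    # an assumption of this sketch (degrees and metres).
    yaw: float
    pitch: float
    roll: float
    x: float
    y: float
    z: float

@dataclass
class SensorSample:
    timestamp: float                      # seconds since start of recording
    frame: bytes                          # raw video frame (placeholder)
    gps: Tuple[float, float, float]       # latitude, longitude, altitude
    inertial: Tuple[float, float, float]  # e.g. angular rates (placeholder)

class TrackingTestbed:
    """Replays a recorded session: a vision algorithm queries time-stamped
    samples, computes a head attitude, and sets it to drive the overlay."""

    def __init__(self, samples: List[SensorSample], initial: Attitude):
        self._samples = samples
        self._initial = initial
        self._current = initial

    def get_first_head_attitude(self) -> Attitude:
        # Calibrated attitude corresponding to the first video frame.
        return self._initial

    def query(self, timestamp: float) -> SensorSample:
        # Return the recorded sample closest to the requested time stamp.
        return min(self._samples, key=lambda s: abs(s.timestamp - timestamp))

    def set_head_attitude(self, attitude: Attitude) -> None:
        # Attitude computed by the vision algorithm; drives the overlay.
        self._current = attitude

    def display(self) -> str:
        # Stand-in for rendering: report which attitude the overlay uses.
        a = self._current
        return f"overlay at yaw={a.yaw:.1f} pitch={a.pitch:.1f} roll={a.roll:.1f}"
```

A tracking algorithm would then loop: query a sample, estimate the attitude from the frame (seeded by get_first_head_attitude), call set_head_attitude, and display the result for visual comparison.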
We realize that a better "formula" is needed to measure the accuracy of a tracking system, but this will facilitate our task in the meantime.
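As one possible stand-in for such a formula (an illustration on our part, not part of the proposal), the visual "real matches virtual" check could be quantified as a mean reprojection error: the average pixel distance between the surveyed model points projected with the estimated attitude and the corresponding points observed in the video frame:

```python
import math

def reprojection_error(projected, observed):
    """Mean pixel distance between model points projected with the
    estimated head attitude and the matching points in the frame.

    projected, observed: equal-length lists of (x, y) pixel coordinates.
    """
    assert len(projected) == len(observed) and projected
    total = sum(math.dist(p, o) for p, o in zip(projected, observed))
    return total / len(projected)
```

A lower value means the graphical overlay sits closer to the real object in the image; comparing this number across algorithms on the same recording would make "best match" concrete.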
Thanks for your feedback/suggestions.
Yohan