I am finishing up my master's thesis in Augmented Reality. Part of my
thesis will have a feature similar to ARToolkit: looking at a marker
and figuring out the transform from the marker to the camera.
I'm not using ARToolkit, though; instead I am using OpenCV to do marker recognition.
I have a question regarding how ARToolkit can get the transform matrix
just by looking at the marker.
Currently, given two cameras and a square marker, I can get the
distance from the cameras to the marker, and also the orientation of
the camera (using my own technique).
But this position and orientation information is relative to the marker.
For example, I can find out that the camera is 10 feet away from the
marker. But that means the camera could be anywhere on a sphere 10
feet from the marker, so I still don't know the "absolute"
position of the camera in world coordinates.
It seems like ARToolkit can figure out the absolute position and orientation
of the camera, using the arGetTransMat() function.
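From what I've read, a common way to get the full pose from a planar square marker is to estimate the homography between the marker's corners and their image projections, then decompose it using the camera intrinsics. This is the general planar-pose idea, not necessarily ARToolkit's exact implementation, and the function names below are my own. A minimal numpy sketch:

```python
import numpy as np

def homography_from_points(src, dst):
    """DLT estimate of H such that dst ~ H @ src (src, dst: (N,2) point arrays)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1,  0,  0,  0, u * x, u * y, u])
        A.append([ 0,  0,  0, -x, -y, -1, v * x, v * y, v])
    # Null vector of A (smallest singular value) is the flattened homography.
    _, _, Vt = np.linalg.svd(np.asarray(A))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def pose_from_homography(H, K):
    """Recover rotation R and translation t of a Z=0 planar marker.

    For a plane at Z=0, the projection reduces to H ~ K [r1 r2 t],
    so K^-1 H gives the first two rotation columns and t up to scale.
    """
    M = np.linalg.inv(K) @ H
    scale = 1.0 / np.linalg.norm(M[:, 0])
    if M[2, 2] * scale < 0:          # keep the marker in front of the camera
        scale = -scale
    r1 = M[:, 0] * scale
    r2 = M[:, 1] * scale
    t = M[:, 2] * scale
    r3 = np.cross(r1, r2)            # third column completes the rotation
    R = np.column_stack([r1, r2, r3])
    # Re-orthonormalize: r1, r2 from a noisy H are only approximately orthonormal.
    U, _, Vt = np.linalg.svd(R)
    return U @ Vt, t
```

The four corner correspondences give a full rotation matrix and translation vector, not just a distance, which is what resolves the "anywhere on a sphere" ambiguity I described above.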
I'm new to computer vision and AR. I wonder if you guys can point me
toward learning more about how ARToolkit obtains the absolute position
and rotation of the camera.
*I have read the "Marker Tracking and HMD Calibration" paper by Dr.
Billinghurst and Dr. Kato, but I didn't quite understand the part
explaining figure 6, where they derive the rotation component of the
transformation matrix Tcm.*
Are there other papers that explain how to estimate the transformation
matrix by looking at a marker?
Or is there a book that can help me in this particular area?