I want to use one Logitech camera to implement a stereo display for a virtual overlay object, using OpenGL quad-buffered stereo. My idea is to take the matrix loaded into GL_MODELVIEW, recover the camera position, orientation, and focal length from it, and then derive two new cameras from that one with a tunable eye-separation distance. But the result is not right: only the background color shows where the pattern should be visible.
I suspect there is something wrong with my current algorithm. Here is what I do:
In object coordinates:
camera position = inverse of the modelview matrix times (0, 0, 0, 1)
view direction = inverse of the modelview matrix times (0, 0, -1, 0)
up vector = inverse of the modelview matrix times (0, 1, 0, 0)
(Points carry w = 1; directions carry w = 0.) From these I set gluLookAt.
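The extraction above can be checked numerically without a GL context. Below is a minimal sketch in plain Python (matrices written row-major; OpenGL returns them column-major from glGetFloatv, so transpose first). It assumes the modelview is a rigid transform (rotation + translation only); the helper names are mine, not from any API.

```python
def mat_vec(M, v):
    """Multiply a 4x4 row-major matrix by a 4-vector."""
    return [sum(M[r][c] * v[c] for c in range(4)) for r in range(4)]

def invert_rigid(M):
    """Invert a rigid modelview matrix: inverse = [R^T | -R^T t]."""
    R_T = [[M[c][r] for c in range(3)] for r in range(3)]
    t = [M[r][3] for r in range(3)]
    inv_t = [-sum(R_T[r][c] * t[c] for c in range(3)) for r in range(3)]
    inv = [R_T[r] + [inv_t[r]] for r in range(3)]
    inv.append([0, 0, 0, 1])
    return inv

# Example: modelview produced by glTranslatef(0, 0, -5),
# i.e. a camera at (0, 0, 5) looking down -z.
MV = [[1, 0, 0,  0],
      [0, 1, 0,  0],
      [0, 0, 1, -5],
      [0, 0, 0,  1]]
inv = invert_rigid(MV)
eye  = mat_vec(inv, [0, 0,  0, 1])  # point, w = 1   -> (0, 0, 5, 1)
view = mat_vec(inv, [0, 0, -1, 0])  # direction, w = 0 -> (0, 0, -1, 0)
up   = mat_vec(inv, [0, 1,  0, 0])  # direction, w = 0 -> (0, 1, 0, 0)
```

Note that using w = 1 for the position and w = 0 for the two direction vectors is what makes the translation part apply to the eye point but not to the view/up directions.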
To set the two frusta for the two cameras: if

    Cx  0  Sx  0
     0 Cy  Sy  0
     0  0   A  B
     0  0  -1  0

is the projection matrix I got (this is the projection matrix, not the modelview matrix), then Cx = cot(hfov/2), so the focal length follows from Cx and hfov = 2 * atan(1/Cx). Sx = (right + left)/(right - left) is the horizontal asymmetry of the frustum (zero for a symmetric frustum); my original assumption that hfov = Sx/Cx may be where the algorithm goes wrong.
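For the two eye frusta, a commonly used alternative to toeing the cameras in is the parallel-axis (off-axis) formulation: both eyes keep parallel view directions, each gets an asymmetric frustum sheared so the two frusta coincide at a chosen zero-parallax (convergence) plane. A standalone sketch of that math follows; the parameter names (fovy_deg, convergence, eye_sep) are mine, not from any API.

```python
import math

def stereo_frusta(fovy_deg, aspect, near, convergence, eye_sep):
    """Return (left_eye, right_eye) glFrustum parameters
    (left, right, bottom, top) for an off-axis stereo pair.
    `convergence` is the distance to the zero-parallax plane."""
    tan_half = math.tan(math.radians(fovy_deg) / 2.0)
    top, bottom = near * tan_half, -near * tan_half
    a = aspect * tan_half * convergence   # half-width at the convergence plane
    b = a - eye_sep / 2.0                 # inner half-width for each eye
    c = a + eye_sep / 2.0                 # outer half-width for each eye
    left_eye  = (-b * near / convergence, c * near / convergence, bottom, top)
    right_eye = (-c * near / convergence, b * near / convergence, bottom, top)
    return left_eye, right_eye
```

Each eye then also needs its modelview translated by -/+ eye_sep/2 along the camera's right vector before drawing into GL_BACK_LEFT and GL_BACK_RIGHT. With eye_sep = 0 both frusta collapse to the ordinary symmetric mono frustum, which is a quick sanity check.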
Any comments will be appreciated.