ARToolKit | Mailing List Archive |
From: | yohan baillot <baillot@a ...............> | Received: | Feb 24, 2003 |
To | ARToolKit Mailing List <artoolkit@h ..................> | ||
Subject: | Segmentation control | ||
Dear list,

When experimenting with ARToolKit, I noticed that the markers are not always well detected, or at least it is pretty hard to get the augmentation to appear. I have noticed, however, that it usually works better at the demos I have seen at ISAR and similar venues.

I can attribute this to two problems:
- a camera calibration problem (bad distortion parameters...). I am going to work on that and see if it becomes better.
- a segmentation problem. Is there a way to see the segmented image live in the ARToolKit window (maybe some key or flag exists)? What is usually the attribute to tune, brightness or contrast? Is there some theory/documentation on that?

thanks

Yohan

Yohan BAILLOT

Virtual Reality Laboratory,
Advanced Information Technology (Code 5580),
Naval Research Laboratory,
4555 Overlook Avenue SW,
Washington, DC 20375-5337

Email : baillot@a ...............
Work : (202) 404 7801    Home : (202) 518 3960
Cell : (703) 732 5679    Fax : (202) 767 1122
Web : http://ait.nrl.navy.mil/vrlab/projects/BARS/BARS.html
From: | Yohan Baillot <baillot@a ...............> | Received: | Feb 25, 2003 |
To | Fivos DOGANIS <Fivos.Doganis@i .................> | ||
Subject: | RE: Segmentation control | ||
Dear Fivos,

Yes, that was an idea I had, but I am not completely sure of the details needed to implement it robustly. We could have a procedure where a "threshold calibration pattern" is held in front of the camera at a distance and attitude such that the projection of its shape on the CCD is constrained (known). A simple optimization search over the threshold could then be done, given that the correct segmentation in this condition is known. The problem seems to be devising a procedure simple enough that the marker can actually be placed exactly where it should be. I was thinking of maybe having a guiding contour drawn in OpenGL and superimposed on the camera image. The user would have to align the pattern contour with the OpenGL contour lines, and since human vision is good at segmentation, the placement can be assumed to be correct. Once the marker is placed in the frame, an automatic search for the threshold that matches the OpenGL contour is done, and voila!

Does anybody have a better idea or comments about doing this?

thanks

Yohan

On Tue, 25 Feb 2003, Fivos DOGANIS wrote:

> Hi Yohan,
>
> In order to see the thresholded image, you must set the "arDebug" flag to
> true (have a look at the sample programs that come with ARToolKit).
>
> Unfortunately, I have the impression that the thresholding is quite crude
> (a simple color-to-binary conversion with a constant threshold value).
> Lighting conditions are therefore very important, as changing them can even
> modify the size of the detected marker and thus that of the model!
>
> I guess adaptive thresholding would greatly improve the segmentation.
>
> Hope this helps.
>
> Best regards,
>
> Fivos
>
> PS: Tell me if you find better results after calibration!
> PPS: Again, I apologize if this question has been answered before...
>
> -----Original Message-----
> From: owner-artoolkit@h .................. [mailto:owner-artoolkit@h ..................] On behalf of yohan baillot
> Sent: Monday, 24 February 2003 21:39
> To: ARToolKit Mailing List
> Subject: Segmentation control
>
> Dear list,
>
> When experimenting with ARToolKit, I noticed that the markers are not
> always well detected, or at least it is pretty hard to get the augmentation
> to appear. I have noticed, however, that it usually works better at the
> demos I have seen at ISAR and similar venues.
>
> I can attribute this to two problems:
> - a camera calibration problem (bad distortion parameters...). I am going
> to work on that and see if it becomes better.
> - a segmentation problem. Is there a way to see the segmented image live in
> the ARToolKit window (maybe some key or flag exists)? What is usually the
> attribute to tune, brightness or contrast? Is there some
> theory/documentation on that?
>
> thanks
>
> Yohan
>
> Yohan BAILLOT
>
> Virtual Reality Laboratory,
> Advanced Information Technology (Code 5580),
> Naval Research Laboratory,
> 4555 Overlook Avenue SW,
> Washington, DC 20375-5337
>
> Email : baillot@a ...............
> Work : (202) 404 7801    Home : (202) 518 3960
> Cell : (703) 732 5679    Fax : (202) 767 1122
> Web : http://ait.nrl.navy.mil/vrlab/projects/BARS/BARS.html

_______________________________________________________________________
Yohan BAILLOT

Virtual Reality Laboratory,
Advanced Information Technology (Code 5580),
Naval Research Laboratory,
4555 Overlook Avenue SW,
Washington, DC 20375-5337

Email : baillot@a ...............
Work : (202) 404 7801    Home : (202) 518 3960
Cell : (703) 732 5679    Fax : (202) 767 1122
Web : http://ait.nrl.navy.mil/vrlab/projects/BARS/BARS.html
_______________________________________________________________________
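[Editor's note: a rough sketch of how the guiding-contour idea above could look. This is only an illustration, not anything that ships with ARToolKit; GUIDE_X/GUIDE_Y/GUIDE_W and the corner-matching score are made-up placeholders, and a robust version would have to account for the fact that the order of the detected corners depends on the marker's orientation and on how argDrawMode2D() maps image coordinates on a given build.]

```c
#include <AR/ar.h>
#include <AR/gsub.h>
#include <GL/gl.h>

#define GUIDE_X 220.0   /* top-left corner of the guide square, in image pixels (placeholder) */
#define GUIDE_Y 140.0
#define GUIDE_W 200.0   /* side length of the guide square, in image pixels (placeholder) */

/* Draw the fixed guide rectangle as a 2D overlay on top of the live video. */
static void drawGuideContour(void)
{
    argDrawMode2D();
    glColor3f(0.0f, 1.0f, 0.0f);
    glLineWidth(3.0f);
    glBegin(GL_LINE_LOOP);
        glVertex2f(GUIDE_X,           GUIDE_Y);
        glVertex2f(GUIDE_X + GUIDE_W, GUIDE_Y);
        glVertex2f(GUIDE_X + GUIDE_W, GUIDE_Y + GUIDE_W);
        glVertex2f(GUIDE_X,           GUIDE_Y + GUIDE_W);
    glEnd();
}

/* Score one detection against the guide: sum of squared distances between the
 * detected corners and the guide corners.  During the threshold search, the
 * threshold giving the smallest score would be kept.  The order of
 * marker_info->vertex depends on the detected orientation, so a robust
 * version would try all four cyclic orderings. */
static double guideError(ARMarkerInfo *m)
{
    double gx[4] = { GUIDE_X, GUIDE_X + GUIDE_W, GUIDE_X + GUIDE_W, GUIDE_X };
    double gy[4] = { GUIDE_Y, GUIDE_Y,           GUIDE_Y + GUIDE_W, GUIDE_Y + GUIDE_W };
    double err = 0.0;
    int    i;

    for (i = 0; i < 4; i++) {
        double dx = m->vertex[i][0] - gx[i];
        double dy = m->vertex[i][1] - gy[i];
        err += dx * dx + dy * dy;
    }
    return err;
}
```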
From: | "Fivos DOGANIS" <Fivos.Doganis@i .................> | Received: | Feb 25, 2003 |
To | "yohan baillot" <baillot@a ...............>, "ARToolKit Mailing List" <artoolkit@h ..................> | ||
Subject: | RE: Segmentation control | ||
Hi Yohan,

In order to see the thresholded image, you must set the "arDebug" flag to true (have a look at the sample programs that come with ARToolKit).

Unfortunately, I have the impression that the thresholding is quite crude (a simple color-to-binary conversion with a constant threshold value). Lighting conditions are therefore very important, as changing them can even modify the size of the detected marker and thus that of the model!

I guess adaptive thresholding would greatly improve the segmentation.

Hope this helps.

Best regards,

Fivos

PS: Tell me if you find better results after calibration!
PPS: Again, I apologize if this question has been answered before...

-----Original Message-----
From: owner-artoolkit@h .................. [mailto:owner-artoolkit@h ..................] On behalf of yohan baillot
Sent: Monday, 24 February 2003 21:39
To: ARToolKit Mailing List
Subject: Segmentation control

Dear list,

When experimenting with ARToolKit, I noticed that the markers are not always well detected, or at least it is pretty hard to get the augmentation to appear. I have noticed, however, that it usually works better at the demos I have seen at ISAR and similar venues.

I can attribute this to two problems:
- a camera calibration problem (bad distortion parameters...). I am going to work on that and see if it becomes better.
- a segmentation problem. Is there a way to see the segmented image live in the ARToolKit window (maybe some key or flag exists)? What is usually the attribute to tune, brightness or contrast? Is there some theory/documentation on that?

thanks

Yohan

Yohan BAILLOT

Virtual Reality Laboratory,
Advanced Information Technology (Code 5580),
Naval Research Laboratory,
4555 Overlook Avenue SW,
Washington, DC 20375-5337

Email : baillot@a ...............
Work : (202) 404 7801    Home : (202) 518 3960
Cell : (703) 732 5679    Fax : (202) 767 1122
Web : http://ait.nrl.navy.mil/vrlab/projects/BARS/BARS.html
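[Editor's note: for reference, a minimal sketch (untested) of where the arDebug flag and the threshold value sit in a simpleTest-style main loop; `thresh` is the application's own variable, the 'd' key binding is this sketch's choice, and pose estimation and drawing are omitted. With arDebug set to 1 the library keeps the binarized image around, and the bundled samples display it.]

```c
#include <stdlib.h>
#include <AR/ar.h>
#include <AR/gsub.h>
#include <AR/video.h>

static int thresh = 100;            /* default threshold used by the samples */

static void keyEvent(unsigned char key, int x, int y)
{
    if (key == 'd') arDebug = 1 - arDebug;   /* toggle thresholded-image view */
}

static void mainLoop(void)
{
    ARUint8      *dataPtr;
    ARMarkerInfo *marker_info;
    int           marker_num;

    if ((dataPtr = (ARUint8 *)arVideoGetImage()) == NULL) { arUtilSleep(2); return; }

    argDrawMode2D();
    argDispImage(dataPtr, 0, 0);

    /* thresh is the binarization level used by the marker detector */
    if (arDetectMarker(dataPtr, thresh, &marker_info, &marker_num) < 0) exit(0);
    arVideoCapNext();

    /* ... match marker_info[] against trained patterns, arGetTransMat(), draw ... */
    argSwapBuffers();
}
```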
From: | Jeremy Goslin <jeremy@j ...............> | Received: | Feb 25, 2003 |
To | Yohan Baillot <baillot@a ...............> | ||
Subject: | Re: Segmentation control | ||
> Yes, that was an idea I had, but I am not completely sure of the details
> needed to implement it robustly. We could have a procedure where a
> "threshold calibration pattern" is held in front of the camera at a
> distance and attitude such that the projection of its shape on the CCD is
> constrained (known). A simple optimization search over the threshold could
> then be done, given that the correct segmentation in this condition is
> known. The problem seems to be devising a procedure simple enough that the
> marker can actually be placed exactly where it should be. I was thinking
> of maybe having a guiding contour drawn in OpenGL and superimposed on the
> camera image. The user would have to align the pattern contour with the
> OpenGL contour lines, and since human vision is good at segmentation, the
> placement can be assumed to be correct. Once the marker is placed in the
> frame, an automatic search for the threshold that matches the OpenGL
> contour is done, and voila!
>
> Does anybody have a better idea or comments about doing this?

Most cameras have a pretty reasonable automatic brightness control; I have found that changing the threshold +/- 20 from the default value (100) covers nearly all lighting eventualities. With this in mind, a user holds a recognisable fiducial (one that is normally used in the application) in front of the camera in a 'calibration phase'. For each frame you run arDetectMarker on every integer threshold between the bottom and top values. This tells you the range of threshold values where markers can be detected, and the threshold giving the best 'confidence factor'. It is a bit processor-intensive and not very elegant, but it is very simple to implement and use, as you just aim the camera at any fiducial in your work space.

Similarly, for variable lighting conditions, every 'n' frames you can run arDetectMarker three times on a frame: once with your standard threshold, once with a slightly higher threshold, and once with a slightly lower one. The threshold giving the best marker detection is then used as the new threshold value for the next n frames.

Just a thought..

--
______________________________________________________________________
Dr. Jeremy Goslin
Geneva Interaction Lab, Université de Genève.
http://www.jeremygoslin.com
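[Editor's note: a hedged sketch of both suggestions above (untested; the helper names calibrateThreshold and adaptThreshold are this sketch's, not part of ARToolKit). It assumes a pattern id from arLoadPatt() and a captured frame supplied by the surrounding application, and it uses arDetectMarkerLite() so that the tracking history does not mask the effect of the threshold (see Kato's note further down the thread). The full sweep is meant for a one-off calibration phase; the three-way test is cheap enough to run every n frames.]

```c
#include <AR/ar.h>

/* Calibration phase: sweep every threshold in [lo, hi] on one frame and
 * return the value giving the best confidence factor for patt_id. */
static int calibrateThreshold(ARUint8 *dataPtr, int patt_id, int lo, int hi)
{
    ARMarkerInfo *marker_info;
    int           marker_num, t, i, best_t = -1;
    double        best_cf = 0.0;

    for (t = lo; t <= hi; t++) {
        if (arDetectMarkerLite(dataPtr, t, &marker_info, &marker_num) < 0) continue;
        for (i = 0; i < marker_num; i++) {
            if (marker_info[i].id == patt_id && marker_info[i].cf > best_cf) {
                best_cf = marker_info[i].cf;
                best_t  = t;
            }
        }
    }
    return best_t;                     /* -1 means the marker was never seen */
}

/* Run-time adaptation: every n frames, re-test thresh-step, thresh and
 * thresh+step, and keep whichever gives the best confidence. */
static int adaptThreshold(ARUint8 *dataPtr, int patt_id, int thresh, int step)
{
    int    cand[3], c, i, best = thresh;
    double best_cf = -1.0;

    cand[0] = thresh - step;  cand[1] = thresh;  cand[2] = thresh + step;
    for (c = 0; c < 3; c++) {
        ARMarkerInfo *marker_info;
        int           marker_num;

        if (cand[c] < 0 || cand[c] > 255) continue;
        if (arDetectMarkerLite(dataPtr, cand[c], &marker_info, &marker_num) < 0) continue;
        for (i = 0; i < marker_num; i++) {
            if (marker_info[i].id == patt_id && marker_info[i].cf > best_cf) {
                best_cf = marker_info[i].cf;
                best    = cand[c];
            }
        }
    }
    return best;
}
```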
From: | Jeremy Goslin <jeremy@j ...............> | Received: | Feb 25, 2003 |
To | yohan baillot <baillot@a ...............>, artoolkit@h .................. | ||
Subject: | Re: Segmentation control | ||
> The problem I have with using only the best confidence value is that I
> am not sure it is directly connected to the geometric correctness of
> the segmentation.

I ran a test to check the idea, and you are right, CF does not significantly change with threshold.

> I am not convinced that finding a good match with a marker in the
> database (which is what a high confidence value means, right?) means that
> the geometry of the segmentation is correct; the segmented pattern
> (and contour square) may have a bigger shape after thresholding due to
> the threshold being too low or too high. In this case the detected
> depth of the marker will be wrong, but it can still be a good match with
> one of the markers in the database.

The results below show how the marker-detection CF and the distance from the camera change from the lowest threshold where the correct marker was detected to the highest threshold. As you can see, CF does not significantly change, and the distance from the camera changes by about 20%, with the marker appearing closer to the camera as the threshold goes up.

Threshold: 90, CF: 0.765618 Distance: 73.942877
Threshold: 95, CF: 0.782236 Distance: 71.597550
Threshold: 100, CF: 0.844098 Distance: 70.964354
Threshold: 105, CF: 0.844098 Distance: 70.973431
Threshold: 110, CF: 0.844098 Distance: 70.265297
Threshold: 115, CF: 0.844098 Distance: 69.681854
Threshold: 120, CF: 0.844098 Distance: 69.618633
Threshold: 125, CF: 0.844098 Distance: 69.491036
Threshold: 130, CF: 0.844098 Distance: 69.486698
Threshold: 135, CF: 0.844098 Distance: 69.079434
Threshold: 140, CF: 0.844098 Distance: 68.797052
Threshold: 145, CF: 0.844098 Distance: 68.660140
Threshold: 150, CF: 0.844098 Distance: 68.615897
Threshold: 155, CF: 0.844098 Distance: 68.568439
Threshold: 160, CF: 0.844098 Distance: 68.293229
Threshold: 165, CF: 0.844098 Distance: 68.374806
Threshold: 170, CF: 0.844098 Distance: 68.217330
Threshold: 175, CF: 0.844098 Distance: 68.068054
Threshold: 180, CF: 0.844098 Distance: 67.800693
Threshold: 185, CF: 0.844098 Distance: 67.751245
Threshold: 190, CF: 0.844098 Distance: 67.845617
Threshold: 195, CF: 0.844098 Distance: 67.771963
Threshold: 200, CF: 0.844098 Distance: 67.566129
Threshold: 205, CF: 0.844098 Distance: 67.516236
Threshold: 210, CF: 0.844098 Distance: 67.419375
Threshold: 215, CF: 0.844098 Distance: 67.030279
Threshold: 220, CF: 0.844098 Distance: 66.997432
Threshold: 225, CF: 0.844098 Distance: 66.640809
Threshold: 230, CF: 0.844098 Distance: 66.803966
Threshold: 235, CF: 0.844098 Distance: 66.326162
Threshold: 240, CF: 0.844098 Distance: 66.140024
Threshold: 245, CF: 0.844098 Distance: 66.274459
Threshold: 250, CF: 0.844098 Distance: 65.292249
Threshold: 255, CF: 0.844098 Distance: 65.292249
Threshold: 260, CF: 0.844098 Distance: 65.292249

--
______________________________________________________________________
Dr. Jeremy Goslin
Geneva Interaction Lab, Université de Genève.
http://www.jeremygoslin.com
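[Editor's note: a sketch of one way such a CF/distance log could be produced; this is an assumption about the measurement, not the code actually used for the table above. The distance is taken here to be the Euclidean norm of the translation part of the transform returned by arGetTransMat(), and the marker width and centre are placeholders supplied by the caller.]

```c
#include <math.h>
#include <stdio.h>
#include <AR/ar.h>

static void logThresholdSweep(ARUint8 *dataPtr, int patt_id,
                              double patt_width, double patt_center[2])
{
    ARMarkerInfo *marker_info;
    int           marker_num, t, i;
    double        patt_trans[3][4];

    for (t = 90; t <= 255; t += 5) {
        if (arDetectMarkerLite(dataPtr, t, &marker_info, &marker_num) < 0) continue;
        for (i = 0; i < marker_num; i++) {
            if (marker_info[i].id != patt_id) continue;
            arGetTransMat(&marker_info[i], patt_center, patt_width, patt_trans);
            /* distance = norm of the translation column of the 3x4 transform */
            printf("Threshold: %d, CF: %f Distance: %f\n",
                   t, marker_info[i].cf,
                   sqrt(patt_trans[0][3] * patt_trans[0][3] +
                        patt_trans[1][3] * patt_trans[1][3] +
                        patt_trans[2][3] * patt_trans[2][3]));
        }
    }
}
```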
From: | Hirokazu Kato <kato@s ........................> | Received: | Feb 26, 2003 |
To | Jeremy Goslin <jeremy@j ...............> | ||
Subject: | Re: Segmentation control | ||
Hi,

I think you used the arDetectMarker() function call to get this result, but this function uses tracking history so that the marker is not lost.

To see this kind of information, you may use the arDetectMarkerLite() function instead. It is almost the same as arDetectMarker(), but it does not use tracking history.

> The results, below, show changes in marker detection CF and distance
> from camera from the lowest threshold where the correct marker was
> detected, to the highest threshold. As you can see, CF does not
> significantly change, and distance from camera changes by about 20%,
> with the marker appearing closer to the camera as the threshold goes up.
>
> Threshold: 90, CF: 0.765618 Distance: 73.942877
> Threshold: 95, CF: 0.782236 Distance: 71.597550

--
------------------------------------------------------------------
Hirokazu Kato
Faculty of Information Sciences
Hiroshima City University            Phone: +81-82-830-1705
Email: kato@s ........................     Fax: +81-82-830-1435
URL: http://www.sys.im.hiroshima-cu.ac.jp/people/kato/
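[Editor's note: the two calls take the same arguments, so the swap Kato describes is a one-line change; a minimal sketch follows.]

```c
#include <AR/ar.h>

/* Same signature for both detectors; only the use of inter-frame tracking
 * history differs, so switching for a threshold experiment is one line. */
static int detect(ARUint8 *dataPtr, int thresh,
                  ARMarkerInfo **marker_info, int *marker_num)
{
    /* return arDetectMarker(dataPtr, thresh, marker_info, marker_num);     -- with history */
    return arDetectMarkerLite(dataPtr, thresh, marker_info, marker_num); /* -- without history */
}
```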
From: | Brendan <brendan@c ............> | Received: | Feb 26, 2003 |
To | artoolkit@h .................. | ||
Subject: | RE: Segmentation control | ||
> Is there a way around this problem? I was thinking of using multiple
> patterns and using the centre of two aligned patterns to determine the
> size of the object, instead of merely using the edge of the detected
> pattern.

I also considered this yesterday. As long as the markers are in a plane perpendicular to the view direction, you could adjust the depth based on the computed distance between the models (assuming you have a fixed, known distance between them). However, if the depth to each marker were different, the amount of edge erosion or dilation would be different: a single pixel's worth of error due to thresholding is much more significant for distant markers than for very close ones.

Besides, at this point we would essentially have defined a "multi" fiducial, and could be getting poses from arMultiGetTransMat instead. I ran the same thresholding test last night with a board covered in a multi-fiducial. I can post the data later, but I had about a 3-4% variation in depth with the camera about one meter away and with about 30 markers in view.

cheers,
brendan
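[Editor's note: for reference, a rough, untested sketch of the multi-marker path mentioned above, as far as I recall the API; the config file path is a placeholder. All visible markers of the board contribute to a single pose, which is why the depth estimate is much less sensitive to per-marker edge erosion.]

```c
#include <stdio.h>
#include <AR/ar.h>
#include <AR/arMulti.h>

static ARMultiMarkerInfoT *mconfig = NULL;

/* Load the board description once at start-up.  "Data/multi/marker.dat" is a
 * placeholder for the application's own multi-marker config file. */
static int setupMulti(void)
{
    if ((mconfig = arMultiReadConfigFile("Data/multi/marker.dat")) == NULL) {
        fprintf(stderr, "multi-marker config load error\n");
        return -1;
    }
    return 0;
}

/* Call once per frame with the output of arDetectMarker()/arDetectMarkerLite();
 * on success mconfig->trans holds the 3x4 board-to-camera transform. */
static int getBoardPose(ARMarkerInfo *marker_info, int marker_num)
{
    if (arMultiGetTransMat(marker_info, marker_num, mconfig) < 0.0) return -1;
    return 0;
}
```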
From: | "Fivos DOGANIS" <Fivos.Doganis@i .................> | Received: | Feb 26, 2003 |
To | "Hirokazu Kato" <kato@s ........................>, "Jeremy Goslin" <jeremy@j ...............> | ||
Subject: | RE: Segmentation control | ||
Hello Hirokazu,

I have tried replacing arDetectMarker() with arDetectMarkerLite(), and the results are indeed better, although there is more jitter because the tracking history is not used.

Nevertheless, when I change the threshold value, the size of my 3D model still changes accordingly. Is there a way around this problem? I was thinking of using multiple patterns and using the centre of two aligned patterns to determine the size of the object, instead of merely using the edge of the detected pattern.

Also, is there a way to use arDetectMarker() without it thresholding the image? I want to try different thresholding methods and have ARToolKit detect markers on the resulting images WITHOUT wasting time re-thresholding at the end.

Best regards,

Fivos

PS: Thanks for your work on the ARToolKit, which is making AR as simple as it can be!

-----Original Message-----
From: owner-artoolkit@h .................. [mailto:owner-artoolkit@h ..................] On behalf of Hirokazu Kato
Sent: Tuesday, 25 February 2003 23:04
To: Jeremy Goslin
Cc: artoolkit@h ..................
Subject: Re: Segmentation control

Hi,

I think you used the arDetectMarker() function call to get this result, but this function uses tracking history so that the marker is not lost.

To see this kind of information, you may use the arDetectMarkerLite() function instead. It is almost the same as arDetectMarker(), but it does not use tracking history.

> The results, below, show changes in marker detection CF and distance
> from camera from the lowest threshold where the correct marker was
> detected, to the highest threshold. As you can see, CF does not
> significantly change, and distance from camera changes by about 20%,
> with the marker appearing closer to the camera as the threshold goes up.
>
> Threshold: 90, CF: 0.765618 Distance: 73.942877
> Threshold: 95, CF: 0.782236 Distance: 71.597550

--
------------------------------------------------------------------
Hirokazu Kato
Faculty of Information Sciences
Hiroshima City University            Phone: +81-82-830-1705
Email: kato@s ........................     Fax: +81-82-830-1435
URL: http://www.sys.im.hiroshima-cu.ac.jp/people/kato/
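[Editor's note: one possible workaround for the last question, offered only as a suggestion rather than a documented ARToolKit feature: run your own thresholding (adaptive or otherwise), write the binary result back into every colour channel of the frame, and then call arDetectMarker()/arDetectMarkerLite() with a mid-range threshold. On an image that is already pure black and white, the built-in thresholding simply reproduces your segmentation. PIX_SIZE and computeAdaptiveMask() are placeholders.]

```c
#include <AR/ar.h>

#define PIX_SIZE 4   /* bytes per pixel; adjust to your AR_DEFAULT_PIXEL_FORMAT */

/* Write a caller-computed binary mask (one byte per image pixel, 0 = dark)
 * back into the colour frame so that ARToolKit's built-in thresholding
 * becomes a no-op: every channel of every pixel ends up as 0 or 255. */
static void applyMask(ARUint8 *dataPtr, const unsigned char *mask,
                      int xsize, int ysize)
{
    int      i, c;
    ARUint8 *p = dataPtr;

    for (i = 0; i < xsize * ysize; i++, p += PIX_SIZE) {
        ARUint8 v = mask[i] ? 255 : 0;
        for (c = 0; c < PIX_SIZE; c++) p[c] = v;
    }
}

/* Usage sketch:
 *     computeAdaptiveMask(dataPtr, mask, xsize, ysize);   // your own method
 *     applyMask(dataPtr, mask, xsize, ysize);
 *     arDetectMarkerLite(dataPtr, 128, &marker_info, &marker_num);
 */
```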
From: | Jeremy Goslin <jeremy@j ...............> | Received: | Feb 26, 2003 |
To | Hirokazu Kato <kato@s ........................> | ||
Subject: | Re: Segmentation control | ||
> I think you used the arDetectMarker() function call to get this result,
> but this function uses tracking history so that the marker is not lost.
>
> To see this kind of information, you may use the arDetectMarkerLite()
> function instead. It is almost the same as arDetectMarker(), but it does
> not use tracking history.

Ah, thanks for that, I thought it was a little strange that CF remained totally fixed!

Here are the test results using arDetectMarkerLite(); there is some variation in CF now, along with distance. Strangely, CF is higher at the margins of the usable threshold range.

Threshold: 85, CF: 0.749978 Distance: 62.625205
Threshold: 90, CF: 0.732250 Distance: 62.158865
Threshold: 95, CF: 0.718681 Distance: 61.816686
Threshold: 100, CF: 0.733612 Distance: 61.749524
Threshold: 105, CF: 0.728767 Distance: 61.530133
Threshold: 110, CF: 0.728767 Distance: 61.369575
Threshold: 115, CF: 0.741673 Distance: 61.273120
Threshold: 120, CF: 0.741673 Distance: 61.157198
Threshold: 125, CF: 0.741673 Distance: 61.010204
Threshold: 130, CF: 0.718015 Distance: 60.922112
Threshold: 135, CF: 0.732258 Distance: 60.912415
Threshold: 140, CF: 0.732258 Distance: 60.991861
Threshold: 145, CF: 0.687898 Distance: 60.991861
Threshold: 150, CF: 0.687898 Distance: 60.991861
Threshold: 155, CF: 0.687898 Distance: 60.991861
Threshold: 160, CF: 0.687898 Distance: 60.991861
Threshold: 165, CF: 0.687898 Distance: 60.991861
Threshold: 170, CF: 0.691990 Distance: 60.991861
Threshold: 175, CF: 0.691990 Distance: 60.991861
Threshold: 180, CF: 0.691990 Distance: 60.991861
Threshold: 185, CF: 0.691990 Distance: 60.991861
Threshold: 190, CF: 0.586475 Distance: 60.991861
Threshold: 195, CF: 0.586475 Distance: 60.991861
Threshold: 200, CF: 0.586475 Distance: 60.991861
Threshold: 205, CF: 0.592318 Distance: 60.991861
Threshold: 210, CF: 0.577740 Distance: 60.991861
Threshold: 215, CF: 0.681428 Distance: 60.991861
Threshold: 220, CF: 0.681428 Distance: 60.991861
Threshold: 225, CF: 0.770372 Distance: 59.426782
Threshold: 230, CF: 0.770372 Distance: 59.181882
Threshold: 235, CF: 0.711673 Distance: 59.097409
Threshold: 240, CF: 0.726029 Distance: 58.484770

--
______________________________________________________________________
Dr. Jeremy Goslin
Geneva Interaction Lab, Université de Genève.
http://www.jeremygoslin.com
From: | Yohan Baillot <baillot@a ...............> | Received: | Feb 26, 2003 |
To | Hirokazu Kato <kato@s ........................> | ||
Subject: | Re: Segmentation control | ||
Dear Dr. Kato,

Do you yourself think that a high confidence factor guarantees that the segmentation is correct?

thanks

Yohan

On Wed, 26 Feb 2003, Hirokazu Kato wrote:

> Hi,
>
> I think you used the arDetectMarker() function call to get this result,
> but this function uses tracking history so that the marker is not lost.
>
> To see this kind of information, you may use the arDetectMarkerLite()
> function instead. It is almost the same as arDetectMarker(), but it does
> not use tracking history.
>
> > The results, below, show changes in marker detection CF and distance
> > from camera from the lowest threshold where the correct marker was
> > detected, to the highest threshold. As you can see, CF does not
> > significantly change, and distance from camera changes by about 20%,
> > with the marker appearing closer to the camera as the threshold goes up.
> >
> > Threshold: 90, CF: 0.765618 Distance: 73.942877
> > Threshold: 95, CF: 0.782236 Distance: 71.597550
>
> --
> ------------------------------------------------------------------
> Hirokazu Kato
> Faculty of Information Sciences
> Hiroshima City University            Phone: +81-82-830-1705
> Email: kato@s ........................     Fax: +81-82-830-1435
> URL: http://www.sys.im.hiroshima-cu.ac.jp/people/kato/

_______________________________________________________________________
Yohan BAILLOT

Virtual Reality Laboratory,
Advanced Information Technology (Code 5580),
Naval Research Laboratory,
4555 Overlook Avenue SW,
Washington, DC 20375-5337

Email : baillot@a ...............
Work : (202) 404 7801    Home : (202) 518 3960
Cell : (703) 732 5679    Fax : (202) 767 1122
Web : http://ait.nrl.navy.mil/vrlab/projects/BARS/BARS.html
_______________________________________________________________________
From: | Yohan Baillot <baillot@a ...............> | Received: | Feb 26, 2003 |
To | Jeremy Goslin <jeremy@j ...............> | ||
Subject: | Re: Segmentation control | ||
Thanks for this test, Jeremy. However, this is very strange; you would expect the CF to have a maximum somewhere in the middle. Can someone explain this?

thanks

Yohan

On Wed, 26 Feb 2003, Jeremy Goslin wrote:

> > I think you used the arDetectMarker() function call to get this result,
> > but this function uses tracking history so that the marker is not lost.
> >
> > To see this kind of information, you may use the arDetectMarkerLite()
> > function instead. It is almost the same as arDetectMarker(), but it does
> > not use tracking history.
>
> Ah, thanks for that, I thought it was a little strange that CF remained
> totally fixed!
>
> Here are the test results using arDetectMarkerLite(); there is some
> variation in CF now, along with distance. Strangely, CF is higher at the
> margins of the usable threshold range.
>
> Threshold: 85, CF: 0.749978 Distance: 62.625205
> Threshold: 90, CF: 0.732250 Distance: 62.158865
> Threshold: 95, CF: 0.718681 Distance: 61.816686
> Threshold: 100, CF: 0.733612 Distance: 61.749524
> Threshold: 105, CF: 0.728767 Distance: 61.530133
> Threshold: 110, CF: 0.728767 Distance: 61.369575
> Threshold: 115, CF: 0.741673 Distance: 61.273120
> Threshold: 120, CF: 0.741673 Distance: 61.157198
> Threshold: 125, CF: 0.741673 Distance: 61.010204
> Threshold: 130, CF: 0.718015 Distance: 60.922112
> Threshold: 135, CF: 0.732258 Distance: 60.912415
> Threshold: 140, CF: 0.732258 Distance: 60.991861
> Threshold: 145, CF: 0.687898 Distance: 60.991861
> Threshold: 150, CF: 0.687898 Distance: 60.991861
> Threshold: 155, CF: 0.687898 Distance: 60.991861
> Threshold: 160, CF: 0.687898 Distance: 60.991861
> Threshold: 165, CF: 0.687898 Distance: 60.991861
> Threshold: 170, CF: 0.691990 Distance: 60.991861
> Threshold: 175, CF: 0.691990 Distance: 60.991861
> Threshold: 180, CF: 0.691990 Distance: 60.991861
> Threshold: 185, CF: 0.691990 Distance: 60.991861
> Threshold: 190, CF: 0.586475 Distance: 60.991861
> Threshold: 195, CF: 0.586475 Distance: 60.991861
> Threshold: 200, CF: 0.586475 Distance: 60.991861
> Threshold: 205, CF: 0.592318 Distance: 60.991861
> Threshold: 210, CF: 0.577740 Distance: 60.991861
> Threshold: 215, CF: 0.681428 Distance: 60.991861
> Threshold: 220, CF: 0.681428 Distance: 60.991861
> Threshold: 225, CF: 0.770372 Distance: 59.426782
> Threshold: 230, CF: 0.770372 Distance: 59.181882
> Threshold: 235, CF: 0.711673 Distance: 59.097409
> Threshold: 240, CF: 0.726029 Distance: 58.484770
>
> --
> ______________________________________________________________________
> Dr. Jeremy Goslin
> Geneva Interaction Lab, Université de Genève.
> http://www.jeremygoslin.com

_______________________________________________________________________
Yohan BAILLOT

Virtual Reality Laboratory,
Advanced Information Technology (Code 5580),
Naval Research Laboratory,
4555 Overlook Avenue SW,
Washington, DC 20375-5337

Email : baillot@a ...............
Work : (202) 404 7801    Home : (202) 518 3960
Cell : (703) 732 5679    Fax : (202) 767 1122
Web : http://ait.nrl.navy.mil/vrlab/projects/BARS/BARS.html
_______________________________________________________________________
From: | Hirokazu Kato <kato@s ........................> | Received: | Feb 27, 2003 |
To | "Fivos DOGANIS" <Fivos.Doganis@i .................> | ||
Subject: | RE: Segmentation control | ||
Hi Fivos,

> Is there a way around this problem? I was thinking of using multiple
> patterns and using the centre of two aligned patterns to determine the
> size of the object, instead of merely using the edge of the detected
> pattern.

We can think about using other features instead of the simple square contour. Your idea would be a nice one. Another idea is to keep a single square marker but use both the outside and inside contours.

> Also, is there a way to use arDetectMarker() without it thresholding the
> image?

ARToolKit has only a very simple thresholding method.

> I want to try different thresholding methods and have ARToolKit detect
> markers on the resulting images WITHOUT wasting time re-thresholding at
> the end.

That's great.

--
------------------------------------------------------------------
Hirokazu Kato
Faculty of Information Sciences
Hiroshima City University            Phone: +81-82-830-1705
Email: kato@s ........................     Fax: +81-82-830-1435
URL: http://www.sys.im.hiroshima-cu.ac.jp/people/kato/
From: | Hirokazu Kato <kato@s ........................> | Received: | Feb 27, 2003 |
To | Yohan Baillot <baillot@a ...............> | ||
Subject: | Re: Segmentation control | ||
> Since you have designed the system, I may as well ask you directly: do
> you think Justin's assumption that the confidence factor is related to
> the correctness of the segmentation holds?

Basically, the CF value represents the similarity between the pattern found in a marker and the trained pattern. To carry out this measurement, the pattern in the marker has to be normalized, and this normalization is based on the outside contour of the marker. Therefore the correctness of the segmentation affects the CF value indirectly.

--
------------------------------------------------------------------
Hirokazu Kato
Faculty of Information Sciences
Hiroshima City University            Phone: +81-82-830-1705
Email: kato@s ........................     Fax: +81-82-830-1435
URL: http://www.sys.im.hiroshima-cu.ac.jp/people/kato/
From: | Hirokazu Kato <kato@s ........................> | Received: | Feb 27, 2003 |
To | Yohan Baillot <baillot@a ...............> | ||
Subject: | Re: Segmentation control | ||
> Do you yourself think that a high confidence factor guarantees that the
> segmentation is correct?

No, I don't. As mentioned in my previous email, the accuracy of the segmentation affects the CF value, but a high CF value does not guarantee the accuracy of the segmentation; it only reflects how reliably the marker was identified among the several trained markers.

--
------------------------------------------------------------------
Hirokazu Kato
Faculty of Information Sciences
Hiroshima City University            Phone: +81-82-830-1705
Email: kato@s ........................     Fax: +81-82-830-1435
URL: http://www.sys.im.hiroshima-cu.ac.jp/people/kato/