An Exploration of Techniques to Improve Relative Distance
Judgments within an Exocentric Display
The overall goal of this experiment was to address the following question: How can computer graphics features such as image rotation, stereoscopic viewing and head-motion tracking contribute to one's ability to make rapid and accurate relative distance judgments within an exocentric view of a computer-generated 3D perspective display? As suggested in an earlier chapter, the understanding of 3D space can benefit from each of these computer graphics features.
One would expect image rotation to be an important component for the understanding of 3D space when it is projected onto a 2D display. This feature allows the user to view the scene from various axes. The benefit of stereoscopic viewing and head-motion tracking is less clear. If the monoscopic cues provided by the display are ambiguous, stereoscopic viewing may prove to be beneficial for making accurate relative distance judgments. Head-motion tracking may also be of benefit if subjects use the changes in viewing perspective and the motion parallax depth cues provided by this feature. However, one may expect a significant interaction for these features when used in conjunction with rotation, especially in the case of particularly difficult distance judgments. Again, the strength of stereoscopic viewing is that it allows us to make relative depth judgments. Its incorporation within a spatial display may allow for more accurate and rapid relative distance judgments, especially when those judgments are more challenging.
Head-motion tracking can provide the depth cue of motion parallax, along with access to different viewing perspectives. While the changes in viewing perspective will be smaller than those provided by image rotation, head-motion tracking, when used in conjunction with rotation, may allow one to make more rapid judgments. Also, motion parallax, due to head translation, may assist a user in making a more accurate relative distance judgment, for there will be times when rotation alone does not provide an obvious answer. For this situation, rotating the image so that the motion parallax cue can be advantageously used may help the user to discriminate smaller distance differences.
The base viewing display incorporated known spatial display recommendations, such as those for geometric field of view and eyepoint elevation. Many monoscopic depth cues (i.e., occlusion, relative size, perspective projection) are readily re-created on most computer graphics systems; thus, they were included throughout the experimental conditions. The purpose of this research was to determine what added benefit rotation, head-motion tracking and stereoscopic viewing offer, beyond this base display, in the performance of a relative distance spatial task.
The task chosen for this experiment was similar to one used by Bemis, Leeds and Winer (1988), in which subjects first detected threats and then selected the closest interceptor within a command-and-control display. For the current experiment, subjects were shown four differently colored cubes (yellow, blue, green and lavender) hovering over a patch of terrain. They were then asked to indicate, via a keystroke, "which colored cube is closest to the white cube" within the scene (Figure 3.1). A light (less saturated) version of each color was used to decrease the chance of chromostereopsis influencing the distance judgments.
Figure 3.1 Visual Scene for Task
Subjects were asked to make their judgments as quickly and accurately as possible. Subjects were also told to assume that all cubes were the same size. This was done so that subjects could also assume that the cubes' relative sizes were commensurate with their distances from each other. As a means of evaluating the relative contribution of the specific computer graphics features, subjects' performance was measured by both their accuracy and speed when making the relative distance judgment needed to complete the task. In addition, after each viewing condition, subjects were asked for a rating of their confidence in their answers. This was done in order to assess if there was a correlation between how well subjects thought they did and how well they actually performed with each of the features being tested.
This task involved a static model, insofar as the cubes' positions did not change relative to each other. The results may thus generalize to performance using dynamic models which change at slow rates. The static views can be considered to be similar to snapshots of a dynamic environment at specific points in time (Yeh and Silverstein, 1992).
3.2.2 Apparatus and Computing Methods
This experiment was run on a Silicon Graphics Onyx (RE2) computer, with a 19 inch diagonal, stereo-ready display with 1280 x 1024 resolution and 24 bit color depth. Stereoscopic viewing was provided by Crystal Eyes LCD shutter glasses, and head-motion tracking data were provided by a Polhemus Fastrak system running at 120 Hz. The transmitter for the Fastrak was suspended, via a plastic pipe, from the laboratory ceiling and was positioned approximately six inches away from the receiver. The receiver was attached to a baseball cap which the subject wore, with the bill of the cap facing towards the back of the head. Division Inc.'s dVISE software was adapted for the experimental procedure. A number of compiled software routines, written in C, were used to perform stimulus placement, scene manipulation and data recording.
Basic 3D computer graphics processing, as mentioned in the literature review, requires mapping a 3D visual scene onto a 2D picture plane, with all projectors emanating from a single center of projection (Foley et al., 1993). The process involves two major transformation steps. Objects are defined in a world coordinate system which consists of arbitrary units. In the first step, the world coordinate system is projected onto a view plane in the eye coordinate system. For this experiment, the view or picture plane was defined as an approximately square window, with the world coordinates ranging from -50 to 50 inches in the x and z directions and 70 inches in the y (vertical) direction. In the second step, the view plane is mapped onto the display screen based on the size of the viewport. Thus, the arbitrary world coordinate system is mapped onto the physical coordinate system of the display screen.
The horizontal field of view for the display was calculated as approximately 33 degrees. Subjects were positioned at a viewing distance of 57 cm away from the screen. The width of the display screen was 33 cm. The following formula yielded the horizontal field of view:
FOVh = 2 Atan (Width of Monitor Display Area / (2* Viewing Distance to Screen))
Likewise the vertical field of view, with the display having a 26 cm display height, was calculated as FOVv = 2 Atan ( 26 / (2* 57)) = 26 degrees.
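The two field-of-view calculations above can be reproduced in a few lines (Python here purely for illustration; the display dimensions are those given in the text):

```python
import math

def field_of_view_deg(extent_cm, viewing_distance_cm):
    """Visual angle subtended by a display extent at a given viewing distance:
    FOV = 2 * atan(extent / (2 * distance))."""
    return math.degrees(2 * math.atan(extent_cm / (2 * viewing_distance_cm)))

fov_h = field_of_view_deg(33, 57)  # horizontal: ~32-33 degrees
fov_v = field_of_view_deg(26, 57)  # vertical: ~26 degrees
```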
The initial horizontal field of view for the (exocentric) world image was 26 degrees. This implies that the world image initially occupied 78% of the horizontal display area, regardless of the experimental condition. The independent variables of image rotation and head-motion tracking could alter the actual image field of view, depending on how the subject used these techniques.
3.2.2.1 Stereoscopic Viewing
Crystal Eyes LCD shutter glasses provided stereoscopic viewing. Left and right eye images are rapidly and alternately displayed on the screen. The Crystal Eyes shuttering lenses are synchronized to the alternating display so that each eye sees only its appropriate perspective view. Each left and right image, or field, is refreshed at a rate of 60 Hz, requiring a monitor refresh rate of 120 Hz. This rapid refresh rate allows for a flickerless display. Crystal Eyes documentation recommends using the lowest parallax value which will still provide an effective depth effect; this helps to reduce viewer discomfort. To meet this recommendation, a disparity value of 1 cm was selected. This value provided an adequate perception of depth, with an average disparity within the display of approximately 3600 arc sec (1 degree) being presented to the subject at the viewing distance of 57 cm.
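The quoted figure of roughly 3600 arc sec can be checked directly: a 1 cm on-screen separation viewed from 57 cm subtends just over one degree (a sketch of the geometry, not the software's internal computation):

```python
import math

def disparity_arcsec(separation_cm, viewing_distance_cm):
    """Visual angle (in arc seconds) subtended by an on-screen separation."""
    angle_deg = math.degrees(2 * math.atan(separation_cm / (2 * viewing_distance_cm)))
    return angle_deg * 3600

# 1 cm of screen parallax at 57 cm: slightly more than 3600 arc sec (~1 degree)
```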
The stereo projection presented by the dVISE software was based on an average interocular distance of 6.3 cm. Neither the stereo projection parameter, nor the amount of disparity presented to subjects, was adjusted based on their individual interocular distances. However, based on results from a study which manipulated interocular distance to determine its effect on a depth perception task (Rosenberg, 1993), it was felt that not adjusting the stereo display parameters for each subject's interocular distance would not affect task performance.
Since the use of the LCD shutter glasses significantly reduces the perceived luminance of the viewed image (McKenna and Zeltzer, 1992), subjects wore the glasses across all experimental conditions. For non-stereo conditions, the disparity value was reduced to the minimum allowed by the software in order to effectively produce zero disparity between the left and right images. This configuration presented a bi-ocular view of the world to the subject.
3.2.2.2 Image Rotation
The perspective projection of an object can be changed by two different methods. One method is to leave the object stationary and then select a desired viewpoint and projection plane; this is called the viewpoint transformation method. The other method is to fix the viewpoint and then transform the object; this is called the object transformation method. When image rotation was available in this experiment, the object transformation method was used. Subjects rotated the image within the display by using keys numbered 1 through 9 on the number pad on the right side of the keyboard. Figure 3.2 shows the rotation assignments given to each of the keys.
Figure 3.2: Rotation-Key Assignments and World Axis
Key number 5 (in the center) was a special key which returned the image to the original view shown at the beginning of the trial. Keys 4 and 6 rotated the image around the vertical Y axis: key number 4 performed a negative or clockwise rotation about the axis, and key number 6 a positive or counterclockwise rotation. Keys numbered 8 and 2 performed rotations around the horizontal axis (known as X within the software), causing the image to rotate back to front. Keys numbered 7 and 9 performed positive and negative rotations about the depth axis (known as Z within the software scheme). Keys numbered 1 and 3 performed equivalently to 7 and 9. Each time the user pressed a number key, with the exception of key 5, the world rotated one degree around the appropriate axis. The user could also hold down a single key in order to see a smooth rotation of the world image in the given direction.
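The key mapping just described can be sketched as follows. This is a hypothetical reconstruction, not the experiment's C code; the sign conventions follow the text where stated, and the signs chosen for keys 8 and 2 are an assumption.

```python
# Hypothetical sketch of the keypad rotation controls described above.
# State is held as cumulative rotation (in degrees) about the world axes.
KEY_MAP = {
    '4': ('y', -1), '6': ('y', +1),   # vertical (Y) axis: negative / positive
    '8': ('x', -1), '2': ('x', +1),   # horizontal (X) axis; signs assumed
    '7': ('z', +1), '9': ('z', -1),   # depth (Z) axis: positive / negative
    '1': ('z', +1), '3': ('z', -1),   # equivalent to keys 7 and 9
}

def press(state, key):
    """Apply one keypress; key '5' restores the original view."""
    if key == '5':
        return {'x': 0.0, 'y': 0.0, 'z': 0.0}
    axis, sign = KEY_MAP[key]
    new_state = dict(state)
    new_state[axis] += sign * 1.0     # one degree per press
    return new_state
```

Holding a key down simply repeats this one-degree step, producing the smooth rotation described in the text.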
3.2.2.3 Head-Motion Tracking
In contrast to rotation, the viewpoint transformation method was chosen as the means of changing the perspective projection of the world when head-motion tracking was used. As suggested by Akka (1993), x, y and z head position data were used to create the perspective transformations. These transformations were accomplished within the dVISE software by using the position data to alter the body position within the virtual environment, which in turn updated the display viewpoint.
3.2.2.4 Geometric Field of View
The geometric field of view for the dVISE software is defined through a file of projection parameters (Figure 3.3). The parameters of interest include settings for Screen Width and Screen Distance.
Screen Width = size (width) of the picture plane in inches.
Screen Distance = the orthogonal distance from the camera eyepoint to the picture plane in inches.
Figure 3.3: Experiment's Horizontal Geometric Field of View
The horizontal geometric field of view can be calculated via the formula:
GFOVh = 2* (Atan ((Screen Width * .5)/ Screen Distance))
Given the provided normalized settings for the dVISE software, the horizontal geometric field of view is calculated to be approximately 45 degrees.
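The formula can be expressed directly; since the actual normalized dVISE settings are not reproduced here, the example values below are illustrative only (any width-to-distance ratio of about 0.83 yields the ~45 degree figure reported in the text):

```python
import math

def gfov_h_deg(screen_width, screen_distance):
    """Horizontal geometric field of view from the projection parameters:
    GFOVh = 2 * atan((Screen Width * 0.5) / Screen Distance)."""
    return math.degrees(2 * math.atan((screen_width * 0.5) / screen_distance))

# Illustrative, not the actual dVISE settings:
approx_45 = gfov_h_deg(0.828, 1.0)
```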
3.2.2.5 Eyepoint Elevation Angle
Consistent with the literature discussed earlier, a 30 degree eyepoint elevation angle was chosen as the base viewing orientation. This viewing orientation was chosen to provide the best compromise to effectively convey both depth and altitude information. To create a 30 degree eyepoint elevation angle within the dVISE software, a bodyPosition parameter is specified in the .maz file. The bodyPosition parameter consists of x, y and z axes specifications. In addition to the camera eyepoint position, the location of a target reference object must be specified. For this calculation, a target was placed in the center of the 3D space used for stimulus placement. Once the absolute world position of the target object was obtained from the relative world object position, the eyepoint elevation angle could be calculated by the previously described formula (Figure 2.3):
EPEA = Atan (Dy / Dx)
where Dx is now the difference between the eyepoint and target object along the viewing dimension, and Dy is the vertical difference between the eyepoint and target object.
The values of 130 and 97 were chosen for the respective X and Y values of the body position. Dy = (97 - 23), or 74, where 23 is the vertical y value for the centered reference object. Dx is the value of 130, since the participant is looking at the world along the X axis. These values were used in order to obtain the 30 degree eyepoint elevation angle:
EPEA = Atan (74/130) or 29.6 degrees.
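The elevation-angle arithmetic above can be reproduced directly (a check of the stated values, not the dVISE configuration itself):

```python
import math

def eyepoint_elevation_deg(dy, dx):
    """EPEA = Atan(Dy / Dx), in degrees."""
    return math.degrees(math.atan(dy / dx))

# Dy = 97 - 23 = 74 (vertical offset); Dx = 130 (offset along the viewing axis)
epea = eyepoint_elevation_deg(74, 130)  # approximately 29.6 degrees
```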
3.2.3 Target Positioning
For each treatment combination, subjects made the same 18 relative distance judgments by specifying which colored cube is closest to the white cube. The order of the 18 position configurations was randomized for each treatment combination.
The 18 object position configurations were obtained in the following manner. The 3D space above the world plane was divided into 100 equal volume segments. Eighty position configurations were then created when four colored cubes and one white cube were each randomly placed across these segments. The non-white cubes were colored as light blue, light green, light yellow and light lavender. As noted earlier, a light (less saturated) version of color was used to decrease the chance of chromostereopsis influencing the distance judgments.
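The placement procedure can be sketched as follows. The text specifies only that the space was divided into 100 equal-volume segments; the 5 x 5 x 4 grid, the bounding extents, and the restriction to distinct segments are assumptions of this sketch.

```python
import random

def place_cubes(n_cubes=5, bounds=(100.0, 100.0, 70.0), grid=(5, 5, 4), rng=None):
    """Place cubes by first choosing among 100 equal-volume segments
    (grid shape assumed), then a uniform position within each chosen segment."""
    rng = rng or random.Random()
    nx, ny, nz = grid
    sx, sy, sz = bounds[0] / nx, bounds[1] / ny, bounds[2] / nz
    # sample distinct cells (an assumption; the text does not forbid sharing)
    cells = rng.sample([(i, j, k) for i in range(nx)
                        for j in range(ny) for k in range(nz)], n_cubes)
    return [((i + rng.random()) * sx, (j + rng.random()) * sy,
             (k + rng.random()) * sz) for i, j, k in cells]
```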
Three pilot subjects then viewed these 80 position configurations (without rotation, stereo or head tracking), and made judgments as to which colored cube was closest to the white cube. From the results of these judgments, the final 18 position configurations were chosen, with 20% of the final 18 coming from judgments made where 0 or 1 pilot subject answered incorrectly, 40% where 2 pilot subjects answered incorrectly and 40% from judgments made where all 3 pilot subjects answered incorrectly. The purpose of dividing the final position configurations in this way was to provide stimuli that would provide a range of difficulty in making distance judgments (conceptually similar to a stratified random sample). Again, the goal of this research is to understand which computer graphics components offer the greatest benefit in making these judgments. In order to determine this, we wanted to sample an appropriate range of task difficulty.
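The stratified selection can be sketched as below. The integer split of 4/7/7 trials is an assumption derived from the stated 20%/40%/40% proportions of 18.

```python
import random

def select_configurations(easy, medium, hard, rng=None):
    """Stratified selection of the 18 trial configurations from the pilot
    pools: easy (0-1 pilot errors), medium (2 errors) and hard (3 errors).
    The 4/7/7 split is an assumed rounding of 20%/40%/40% of 18."""
    rng = rng or random.Random()
    return rng.sample(easy, 4) + rng.sample(medium, 7) + rng.sample(hard, 7)
```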
In addition, it was ensured that the chosen position configurations sampled a wide range of distance judgments. For example, some position configurations involved stimuli placed relatively close together, while others involved cubes placed relatively far apart within the world scene (Figure 3.1). Also, in order to avoid systematic errors due to perceptual distortions within particular areas of the display, the initial viewing positions of the stimuli were varied across quadrants. Finally, in order to prevent subjects from making distance judgments based on cues in the world plane (rather than relative target object location), a non-symmetric terrain was used. The world ground consisted of rolling hills, bodies of water and a house.
3.3 Experimental Design and Procedure
The first experiment was a fully crossed 2 x 2 x 2 within-subjects (repeated measures) design, where the main effects and interactions were tested by their interaction with subjects. The treatments were stereo (on and off), rotation (on and off) and head-motion tracking (on and off).
Sixteen subjects were run, 9 males and 7 females, ranging in age from 21 to 39. The subjects included graduate and undergraduate students at the University of Washington, recruited from several different courses offered at the University. All subjects were asked to wear corrective lenses if needed for near-distance viewing.
The subjects were shown four differently colored cubes (yellow, blue, green and lavender) hovering over a patch of terrain. Their task was to indicate, via a keystroke, "which colored cube is closest to the white cube" within the scene (see Figure 3.1). The order of the 8 treatment combinations was counterbalanced across subjects using a digram-balanced Latin Square design (Wagenaar, 1969) to control for any asymmetrical transfer of practice from one condition to another. Each treatment combination contained the same 18 object position configurations determined by the pilot study. The order of the 18 trials was randomized for each treatment combination so that subjects would not memorize response orders. Also, the colors assigned to the cubes were rotated in order to further discourage recall of previously seen configurations and to guard against responses biased by chromostereopsis.
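A digram-balanced (Williams-design) square for eight conditions can be constructed as follows. This is one standard construction, not necessarily the exact square used in the experiment; with 16 subjects, each of the 8 row orders would be assigned to two subjects.

```python
def digram_balanced_square(n=8):
    """Williams-design Latin square for an even number of conditions: across
    the n row orders, every condition immediately precedes every other
    condition exactly once, balancing first-order carryover effects."""
    # Zigzag first row (0, n-1, 1, n-2, ...): successive differences mod n
    # are all distinct, which is what yields the digram balance.
    first, lo, hi = [], 0, n - 1
    while lo <= hi:
        first.append(lo)
        if lo != hi:
            first.append(hi)
        lo, hi = lo + 1, hi - 1
    # Each subsequent row shifts every condition label by one, mod n.
    return [[(c + r) % n for c in first] for r in range(n)]
```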
With regard to the experimental procedure, each subject was given a written overview (Appendix A) of the experiment and then asked to read and sign a Subject Consent Form. All subjects were given the option of terminating their participation at any time, in compliance with Human Subjects Review Committee guidelines.
Near-point visual acuity and far-point stereoacuity were measured using a Keystone Ophthalmic Telebinocular device. All subjects had 20/30, or higher, visual acuity. All subjects could detect at least 100 arc-sec of disparity, except for one subject whose detection threshold was 500 arc-sec of disparity.
Following the vision tests, subjects were seated 57 cm away from a computer display screen, and then fitted with the tracking device and shutter glasses. The use of tracking and rotation was explained to the subjects, and they were allowed to practice while making eight relative distance judgments using these aids. This number of practice trials was felt to provide adequate training in the use of the viewing features. During the practice trials, subjects were given feedback on the accuracy of their judgments.
At the beginning of each treatment combination, subjects were told whether they would have rotation or head tracking available to them. The trials for each treatment combination were run as a block. Subject (keystroke) judgment responses were automatically recorded for each trial. Each subject also gave a verbal rating of judgment confidence at the end of each experimental condition. Subjects rated their confidence in the accuracy of their responses in the latest condition on a scale from 1 to 10, where 1 indicated that they had no confidence in their judgments and 10 indicated that they were extremely confident in their judgments. After completing all treatment combinations, subjects were thanked for their time and asked to fill out a written post-experiment questionnaire (Appendix B).
For each treatment combination, the following measures were determined for each subject:
1) The subject's "Accuracy" as the percentage of correct judgments made within the treatment combination.
2) The subject's "Time" as the average time (in seconds) that the subject took to make a judgment within the condition.
3) "Confidence" as rated on a scale of 1 to 10 at the end of each treatment combination.
3.4.1 Dependent Variables of Accuracy, Time and Confidence
Figure 3.4 presents the means and standard deviations for Accuracy for the eight treatment conditions.
Figure 3.4: Accuracy within each Treatment Condition
Table 3.1 presents the means and standard deviations for Accuracy associated with the independent variables of this experiment.
Table 3.1: Means and Standard Deviations for Accuracy in Experiment 1
                      Mean    Std. Dev.
Stereo        on      61.1    17.39
              off     60.2    15.75
Rotation      on      73.4     9.88
              off     47.92   11.07
Head track    on      61.97   15.66
              off     59.7    17.4
An ANOVA on Accuracy showed only a main effect for rotation (F(1,15) = 160.72, p < .001). The data had homogeneous variances. On average, subjects scored 73 percent correct on relative distance judgments when image rotation was available, and 48 percent correct when image rotation was not available (Figure 3.5). There were no other significant main effects or interactions.
Figure 3.5: Accuracy as a Function of Stereo, Image Rotation and Head Tracking
In addition to Accuracy, the average time that a subject took to make a judgment within a condition was also recorded. Table 3.2 presents the means and standard deviations for Time (in seconds) associated with the independent variables of this experiment.
Table 3.2: Means and Standard Deviations for Time in Experiment 1
                      Mean    Std. Dev.
Stereo        on      22.4    18.0
              off     21.4    15.2
Rotation      on      31.1    17.1
              off     12.9     9.6
Head track    on      23.2    16.8
              off     20.7    16.3
Again, an ANOVA performed on Time showed only a significant main effect for rotation (F(1,15) = 49.17, p < .001). There were no other significant main effects or interactions. On average subjects took approximately 31 seconds to make a relative distance judgment when image rotation was available, and approximately 13 seconds when image rotation was not available (Figure 3.6).
Figure 3.6: Time (in sec.) as a Function of Stereo, Image Rotation and Head Tracking
From the results above, it is quite clear that image rotation capability results in more accurate judgments of relative distance. In addition, it has been shown that subjects took significantly longer to make a judgment when rotation was available than when it was not. While subjects were told to make their judgments as quickly and accurately as possible, there was no overt control over the length of time that a particular subject took to make a judgment. Thus the following question needs to be addressed: does the main effect of rotation on accuracy merely reflect the time taken to make the judgment, rather than rotation as a method?
To answer this question, an Analysis of Covariance (ANCOVA) was performed on Accuracy with Time specified as the covariate. To remove the effect of Time on Accuracy, the following was done:
1) The ANCOVA computed a Predicted Accuracy from the given Time value.
2) The ANCOVA then subtracted the Predicted Accuracy from the original Accuracy and obtained a residual value.
3) The residuals, or differences between the Predicted and original Accuracy, were then used as the Accuracy values for an ANOVA.
Thus the ANCOVA was simply an ANOVA run on the residuals after predictions of the dependent variable of Accuracy had been made from the covariate of Time. Results from the ANCOVA again showed only a significant main effect for Rotation (F(1,15) = 37.35, p < .001). There were no other significant main effects or interactions for the experiment. Thus, no conclusions about significance change. These results indicate that the significant effect of rotation is due to the method itself and not solely due to the length of time that one took to make a judgment.
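The residualization step can be sketched directly. This is a simplified single-covariate illustration with made-up numbers, not the statistical package's full ANCOVA:

```python
def residualize(accuracy, time):
    """Remove the linear effect of Time from Accuracy: fit
    Accuracy ~ a + b * Time by least squares and return the residuals,
    which then serve as the dependent values for the ANOVA."""
    n = len(time)
    mt = sum(time) / n
    ma = sum(accuracy) / n
    sxx = sum((t - mt) ** 2 for t in time)
    sxy = sum((t - mt) * (a - ma) for t, a in zip(time, accuracy))
    slope = sxy / sxx
    intercept = ma - slope * mt
    return [a - (intercept + slope * t) for t, a in zip(time, accuracy)]
```

By construction, the residuals sum to zero and are uncorrelated with the covariate, which is what allows the subsequent ANOVA to test rotation free of the Time effect.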
Correlation coefficients were calculated across the independent and dependent variables of the experiment. A significant correlation was noted between subjects' Confidence Rating and the Rotation condition (rs = .43, p < .001). T-tests further revealed a significant increase in subjects' confidence about their accuracy when image rotation was allowed (t = 5.27, df = 127, p < .001). In addition, as confirmed by the ANCOVA, Accuracy and Time were found to be correlated (rs = .59, p < .001). A t-test for Accuracy by Gender showed a significant difference between male and female subjects' scores (t = -2.47, df = 125, p < .015). The mean Accuracy for males was 63.7 and the mean Accuracy for females was 56.5. A t-test on Gender and Time also showed a significant difference between the length of time that males and females took to make their judgments (t = -2.48, df = 125, p < .015). The mean Time taken by males was 25.0 seconds per judgment, and the mean Time for females was 17.8 seconds. These results suggest that males, on average, took more time to make their judgments and were also more accurate.
3.4.2 Post-Study Questionnaire and Subject Head Movements
On the post-study questionnaire, subjects were asked to describe the ease of use of the rotation method on a scale from 1 to 7, with 1 signifying "very easy to use" and 7 indicating rotation was "very difficult to use". Subjects responded with a mean score of 3.6 indicating that rotation was just moderately easy to use. When asked about the use of head tracking, subjects indicated that this feature was a little easier to use by giving head tracking a mean score of 3.1 on the same scale. All of the subjects indicated that they had used rotation when it was made available to them, with 88% of these subjects reporting that they had developed a particular strategy in using rotation to make their distance judgments. When asked about head tracking, only 50% of the subjects felt they used the head tracking feature over the course of the experiment. Of those responding that they used head tracking, 75% believed that they had developed a particular strategy.
Subjects' head movements were monitored throughout the experiment at a rate of 5 Hz. Head movement activities were analyzed by noting the number of times a subject's head moved from a baseline position established at the start of each condition. Head movement changes in the X, Y and Z directions were then calculated for each treatment combination. Over the course of the experiment, all subjects were informed that they had head tracking available to them in four different (counterbalanced) conditions. In two of these conditions, rotation was also available, and in the other two it was not. Across the four conditions, approximately 50% of the subjects reduced their use of head tracking over time. Approximately 33% of the subjects did not use head tracking when rotation was available.
The results of this experiment suggest that the additional spatial information provided by a feature such as image rotation is useful for the accurate judgment of relative distances within an exocentric 3D spatial display, in conjunction with the monocular depth cues of relative size, occlusion and linear perspective. Stereoscopic viewing and head-motion tracking did not increase performance, perhaps because they did not provide significant new information within a range of distance judgments required for this exocentric view.
Overall, subjects used image rotation extensively in making their judgments, and they felt significantly more confident in those judgments when rotation was available. As suggested in the initial hypothesis, it was not surprising that rotation provided a significant main effect for the completion of this task. What is interesting, however, is that 1) stereoscopic viewing and head-motion tracking did not increase performance when used in conjunction with rotation, and 2) individuals felt they achieved their accuracy through a wide variety of rotation strategies.
The following are examples of some of the rotation strategies used. Three of the subjects mentioned that they rotated the image so that they could "line up (their) view behind the white cube", and then make judgments from there. This strategy may indicate that these subjects were trying to obtain a more egocentric view of the world. This may allow them to determine how far the cubes were from their own viewing perspective. Other subjects seemed to use the image rotation to gain access to missing axis information. Two subjects reported that they rotated in a single direction, such as around the Y axis, where others (three subjects) commented on rotating about a single axis, but then checking their judgments from a top-down or side perspective in order to confirm their estimation. Two subjects mentioned that they made an initial guess, and then rotated in order to confirm or disconfirm their initial choice.
Two of the subjects explicitly stated that, when rotating, they used the relative movements of the objects to help them in their estimation. This suggests that some subjects were using the motion parallax depth cue when rotating the image. In addition, subjects may have also used the Kinetic Depth Effect (KDE) to help them understand the 3D space. KDE has been shown to allow people to recover 3D form when viewing 2D projections of rotating objects (Wallach and O'Connell, 1953). From the rotating exocentric view of the world, in addition to observing the relative movements of objects (as provided by motion parallax), subjects may also be recovering 3D form information about the world as a whole.
As stated in the experimental hypothesis, it was not surprising that stereo-viewing or head-motion tracking would not provide significant main effects. One reason that a main effect for head tracking was not expected was that the image stimulus was an exocentric view of a world. In order to view the whole 3D space of the world, the center of the world had to be placed at an eyepoint distance of 130 inches from the screen. If the additional 23 inches of length (57 cm) between the subject's eyes and the screen is added to this amount, there is now a distance of 153 inches between the viewer and the world. If the participant moves, for example, 10 inches to the right, this motion will only provide approximately a 4 degree rotation of the world. Even if this value is exaggerated by 100%, an 8 degree change is not enough to allow for an accurate relative distance judgment to be made. Using rotation, subjects made the world rotate more than 8 degrees even for the easier judgments. Had the image been more egocentric (i.e., looking at a single object directly in front of you), the use of head tracking may have proven to be more beneficial.
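The geometry behind this estimate is straightforward (figures taken from the text above):

```python
import math

def viewpoint_rotation_deg(head_shift_in, eye_to_world_in):
    """Angular change of viewing direction toward the world center produced
    by a lateral head translation: atan(shift / distance)."""
    return math.degrees(math.atan(head_shift_in / eye_to_world_in))

# A 10 inch lateral move at 130 + 23 = 153 inches: roughly 4 degrees
shift = viewpoint_rotation_deg(10, 153)
```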
The fact that there were no significant interactions with rotation was somewhat surprising. One reason why head tracking did not significantly improve judgments made with rotation may be that approximately one-third of the subjects did not even use head tracking when rotation was also available. In fact, several subjects noted that having rotation and head tracking available at the same time proved to be very distracting; these subjects noted that they tried to keep their heads very still while they used rotation. In addition, as mentioned earlier, approximately one-half of the subjects reduced their use of head tracking over time across the four head tracked conditions. Comments from these subjects indicated that they initially tried to use head tracking, especially when available without rotation. However, over time, they felt that it did not give them sufficient additional information. Thus, they decreased their usage of the feature.
Additionally, only two subjects explicitly mentioned moving their head side to side to find object movement differences. This action would allow them to use head tracking to produce the motion parallax depth cue. The fact that so few subjects mentioned using motion parallax in this manner suggests that people may seldom use this technique to judge relative distances, at least in the distance range sampled within this experiment. It is interesting to note, however, that the subject who scored lowest on the binocular disparity evaluation obtained the highest score within any single condition of any subject. In addition, this subject stated that he explicitly moved side to side to gain distance information and also watched relative movement of objects during the rotation conditions. This information suggests that motion can be a salient depth cue when used. Perhaps, in the absence of stereoscopic capabilities, individuals adapt by making better use of other cues.
With regard to stereoscopic viewing, the absence of interaction effects with rotation may again have to do with the fact that the exocentric view diminished the effectiveness of stereo viewing, due to the apparent distance between the viewer and the world. It may also be that stereopsis is better at allowing us to determine which objects are closer to ourselves rather than which objects are closer to each other; thus, stereo viewing may be more beneficial given an egocentric viewing perspective. Furthermore, the environment used in the experiment was rich in monocular cues such as perspective, occlusion and relative size, and such strong monocular cues diminish the relative impact of stereo viewing. Also, perhaps the world used as a stimulus did not provide enough depth planes for stereo viewing to be effective. The task basically involved making depth and distance distinctions across the two or three planes created by the cubes. Had the image been more visually complex, such as when closely looking at a complex molecular model, stereo viewing may have proven to be a more important viewing feature.
It is also worth noting that, in order to assess the contribution of the independent variables of this experiment, subjects were asked to make very difficult relative distance judgments. As described in section 3.2.3, the 18 position configurations presented for each treatment combination were extracted from a pilot study, and for 40% of the selected 18 judgments, no pilot subject had selected the correct answer. The level of accuracy reported in this experiment should therefore be interpreted against the 25% chance level of this four-alternative forced-choice task. In addition, the seemingly extended length of time taken by subjects to make a distance decision may again reflect the fact that these were difficult judgments, hence requiring more time.
With regard to the differences in performance reported across male and female subjects, males' significantly higher accuracy is in accordance with the higher spatial ability reported for males across a range of spatial tests (Masters and Sanders, 1993). The reason for the increased time taken by males is less clearly understood. It may be that, across all conditions, males took more time to explore the different features available to them. Or, since subjects were told to make their judgments as "quickly and accurately as possible", males may have had more of a bias toward being accurate as opposed to performing the task quickly.