by Keith Hullfish
To test these hypotheses, three environments were constructed. The Real environment consisted of a room in which a life-sized chessboard was placed in the middle of the floor. Four objects were arranged in a pattern on the chessboard. The Imagined environment employed the same room, except that subjects had to generate images of the objects. The Virtual environment was a simulation of the Real environment. In each of these environments, subjects memorized a set of unique configurations of these objects while navigating a path. On an unexpected identification-of-origin test that immediately followed study of these configurations, it was expected that the qualitative differences between memories from real and imagined environments would lead subjects to distinguish those configurations that were actually experienced in those environments; i.e., subjects would correctly identify origin above chance. Moreover, it was expected that subjects would rate their confidence in correct attributions as high (Johnson et al., 1981).
Cognitive effort in the Imagined environment was manipulated by varying the number of external cues specifying the global shape. Subjects either generated images without any cues or were provided with two physical objects. Similar to Johnson et al. (1981), it was assumed that the automaticity with which an image is generated depends on the extent to which the response relies on external cues. Thus, it would be easier to imagine and remember a configuration when two physical objects partially specified it. This would limit the evidence of cognitive effort that could be encoded and later be salient when identifying its source. Thus, if cognitive effort is a cue to an Imagined environment, subjects in this condition would be less likely to correctly identify a configuration's origin. Moreover, if a lack of cognitive effort is more characteristic of some other origin (i.e., Real), then they would also be more likely to be confused overall.
To corroborate that cognitive effort was salient in the decision process, and that it primarily characterized memories from Imagined environments, a questionnaire (Appendix A) was designed. The Memory Characteristic Questionnaire (MCQ) assessed the acknowledged qualitative differences between experiences in memory (Johnson et al., 1988; Suengas & Johnson, 1988). These differences could potentially serve as criteria with which people distinguish among memories from these environments. It contained twenty-one items derived from various sources (Johnson et al., 1988; Hoffman, Hullfish, & Houston, 1995; Slater, Usoh, & Steed, unpublished). Other items were added to explore other potential differences, including metamemory judgments concerning presence, attention, and the coherence of memories, and the effects of field-of-view. Subjects were also asked to rate the similarity among the environments.
With regard to the final hypothesis, it was expected that memories from Virtual environments would not imitate those from Real environments. The limitations inherent in the virtual interface would leave artifacts that would be captured in memory. Subjects would then be able to correctly identify the memory's true origin (Hoffman, Hullfish, & Houston, 1995). Moreover, if a subject's confidence in misattributing a memory as real varied inversely with the number of artifacts found, then subjects would be expected to have relatively little confidence in their confusions with Real origins. The limitations could also generate undue cognitive effort, as an artifact of omission or commission, which could be found similar to the effort characteristic of Imagined environments.
Figure 5.1: Examples of the types of objects. (a) cube, (b) half-cylinder, (c) 3D "T", (d) 3D triangle.
Worlds were designed to be uniquely memorable using only two features. Each configuration was one of eight distinct, but common, global shapes. As shown in Figure 5.2, the shapes were: big square, trapezoid, "V", curve, line, triangle, diamond, and little square.
Figure 5.2: Examples of global shapes used in the construction of the experimental configurations. (a) big square, (b) trapezoid, (c) "V", (d) curve, (e) line, (f) triangle, (g) diamond, (h) little square.
Each of these types of global shapes had four versions, which were distinguished by their general location on the chessboard. For instance, in Figure 5.2(e), the general location of the line could be described as being on the left-most side of the chessboard. The relative positions of objects with respect to color, and the orientation of objects, were kept constant across all worlds. The type of object was also confounded with the type of global shape.
The worlds from each Worldset were experienced in either the Real, Virtual, or Imagined environments. The worlds from the remaining Worldset served as distracters on the later Virtual Reality Monitoring (VRM) test (i.e., the choice of origin was "New"). Worldsets were partially counterbalanced such that the actual origin for each Worldset was equally likely to be Real, Imagined, Virtual, or New across subjects. The counterbalancing scheme (Figure 5.3) was generated using cyclic permutations of this sequence of origins. Each row represents how Worldsets were assigned to environments for a given subject. For instance, with Combination 1, a subject: saw worlds from Worldset A in the Real environment, imagined worlds from Worldset B in the Imagined environment, saw worlds from Worldset C in the Virtual environment, and saw worlds from Worldset D only on the VRM test.
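The cyclic-permutation scheme described above can be sketched in a few lines. The base ordering of origins and the function names are illustrative assumptions, but the structure follows the text: each combination is a rotation of one origin sequence, so across the four combinations every Worldset is assigned to every origin exactly once (a 4x4 Latin square).

```python
# Sketch of the counterbalancing scheme (names illustrative, not from the
# original design documents). Each combination is a cyclic rotation of the
# same sequence of origins.

WORLDSETS = ["A", "B", "C", "D"]
ORIGINS = ["Real", "Imagined", "Virtual", "New"]

def combination(k):
    """Return the Worldset -> origin assignment for combination k (0-3)."""
    rotated = ORIGINS[k:] + ORIGINS[:k]  # rotate the origin sequence by k
    return dict(zip(WORLDSETS, rotated))

# combination(0) matches the example in the text:
# A -> Real, B -> Imagined, C -> Virtual, D -> New
```

Because the rotations exhaust all four assignments, each actual origin is equally likely for each Worldset across subjects, as the counterbalancing requires.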
Note that for Worldsets experienced in the Hybrid version of the Imagined environment, the two objects that were seen in each world were chosen randomly. This procedure was done once for each Worldset and did not change throughout the experiment. Since object position was confounded with color, these choices were adjusted so that all colors were equally likely to be seen in this condition.
Figure 5.3: Scheme used to counterbalance Worldsets across actual origins. This defined four different combinations.
Figure 5.4: Experimental area showing regions I, II, and III. -- = curtains.
The curtains also ensured visual privacy, which was needed to prevent extraneous cues outside the experimental area from being encoded and used as cues to source.
Figure 5.5: (a) The Real environment, (b) The Imagined environment, (c) The Imagined environment, Hybrid version, (d) The Virtual environment.
The goal of the simulation was to achieve as realistic a simulation of the actual room as possible. To this end, the exact dimensions of the room, objects, and chessboard were simulated. This geometry was created using 3DStudio™ and Macromodel™. The colors of the objects were also matched using CMYK equivalents of the paper that was used in the construction of the real objects. All other colors were matched visually as closely as possible. The ceiling and stone walls were texture-mapped with corresponding photographs from the actual room. Lighting was simulated using directional and ambient light. Other details were also added to prevent the simulation from looking artificial. These included a door, electrical outlets, electrical conduits, and switches. The metallic surfaces of the objects (e.g., the conduits) were also simulated. Finally, the subject's height was simulated.
The dVise™ software was modified to enable more experimental control and to facilitate preparing Virtual trials. A soft-toggle changed the Virtual world from one configuration to another using a pre-defined order. In between Virtual trials, an electronic mask appeared so as to cover the Virtual scene. The viewpoint was also transported back to the original entry position and reoriented so that subjects always had the same viewpoint upon entering the Virtual environment. This viewpoint corresponded to the one they would have upon entering the Real and Imagined environments.
The experiment was divided into two phases: a study phase followed by a test phase. Subjects completed the entire study phase before going on to the test phase.
Figure 5.6: Example of how a subject's sequence of worlds in the study phase was composed. The chart illustrates the number of worlds from each environment for each third of the sequence.
Prior to the experiment, subjects played a virtual reality game called "Shark World" to familiarize themselves with the equipment and navigational controls. Subsequently, subjects read an introduction to the experiment (Appendix C) and received two practice trials in each type of environment.
The experiment was described as a study of cognitive abilities in Real, Virtual, and Imagined environments. Subjects were shown examples of each type of object. On each trial, they were instructed to memorize the spatial configuration of four objects while navigating a pre-defined path on a life-sized chessboard. They were explicitly told to memorize the global shape of the configuration and where it was on the chessboard. They did not have to remember anything about the colors of the objects. The instructions also informed the subjects that a test followed the study phase, but its nature was concealed. However, they were shown a chessboard with objects on it to illustrate what the test would look like. Finally, subjects were informed that the sequence of environments in the experimental trials was random and that they would not know the environment for the next trial until it began.
At the beginning of each experimental trial, subjects entered one environment opposite the room's door. They walked through the world by following a path marked by a green string on the life-sized chessboard. As they passed each object, they pointed to it with one hand (Figures 5.7a, 5.7b, 5.7c). Subjects were instructed to use the same hand throughout all experimental trials. After completing the path, they took one last look, then exited from where they entered.
The trials in the Imagined environment were similar, except that upon entering a trial, subjects were given written instructions with which to navigate and imagine the objects. Pointing also helped the experimenter confirm that the subject had imagined the objects in the correct positions. After completing the path, subjects mentally went through the path again. Upon exiting, they handed in the instructions.
The trials in the Virtual environment were also similar, except that they had a few additional procedures. At the start of each trial, the experimenter placed the HMD on the subject's head and oriented them to face the Virtual room's door. The experimenter then removed an electronic mask to uncover the Virtual world. During the trial, subjects could use only one button on the 3D joystick, which allowed them to navigate forward. Subjects pointed to objects using a simulated, disembodied hand tracked by the position of the 3D joystick (Figure 5.7c). After the subject completed the trial, the electronic mask reappeared. The experimenter then helped the subject take off the HMD.
Figure 5.7: Task being performed in the (a) Real, (b) Imagined, and (c) Virtual environments, respectively.
After each experimental trial, subjects returned to their seat in the anteroom. Experimenters then broke down and set up the world for the next trial. Time between trials was managed so that all inter-trial intervals were short and approximately the same length. At the beginning of the next trial, subjects were invited by the experimenter to enter either the Real, Imagined, or Virtual environment.
After completing all experimental trials, subjects played with another Virtual environment, Chemistry World, for approximately five minutes.
The VRM test consisted of thirty-two test stimuli (Figure 5.8) that represented the thirty-two worlds designed for the study. Subjects were instructed that all twenty-four worlds experienced in the study phase appeared on the test, while eight worlds were new. The order of test stimuli was randomly assigned for each subject. However, this order was adjusted so that no more than two test stimuli from the same origin appeared in a row. For each test stimulus, subjects were asked to identify the environment in which they experienced the configuration that was presented (Real, Virtual, Imagined, or New). They were also asked to rate their confidence in the accuracy of their response (0%, 20%, 40%, 60%, 80%, 100%). Subjects were encouraged to proceed to the next question as soon as they were satisfied with their response. The software did not allow subjects to skip a test stimulus. Nor did it allow subjects to go back to review or change their previous responses.
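The ordering constraint on the VRM test (no more than two test stimuli from the same origin in a row) can be satisfied by simple rejection sampling, sketched below. The original adjustment procedure is not specified in the text, so this is only one plausible implementation, and the function names are hypothetical.

```python
import random

def valid(order):
    """True if no three consecutive stimuli share the same origin."""
    return all(not (a == b == c) for a, b, c in zip(order, order[1:], order[2:]))

def constrained_order(stimuli, rng=random):
    """Shuffle until the no-three-in-a-row constraint is met."""
    order = list(stimuli)
    while True:
        rng.shuffle(order)
        if valid(order):
            return order

# Eight worlds from each of the four origins, as on the VRM test.
stimuli = ["Real"] * 8 + ["Virtual"] * 8 + ["Imagined"] * 8 + ["New"] * 8
order = constrained_order(stimuli)
```

With eight stimuli per origin out of thirty-two, a uniform shuffle satisfies the constraint often enough that the rejection loop terminates quickly in practice.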
The Memory Characteristic Questionnaire (MCQ) contained twenty-one three-part items (Figure 5.9). Subjects rated their memories of Real, Virtual, and Imagined environments on seven-point scales. The order of items was randomly assigned for each subject, except that all subjects rated the similarity of the environments last. Unlike the VRM test, the software allowed subjects to skip and return to questions. However, all questions had to be answered before subjects finished the test.
Figure 5.8: Example of a test stimulus and questions from the Virtual Reality Monitoring test.
Figure 5.9: Example of one of the items on the Memory Characteristic Questionnaire.
With regard to rated confidence for a response, it was found that average confidences varied greatly between subjects and that confidences for different types of attributions were correlated. Hence, each subject's rated confidence for a response was indexed to their average confidence over the whole test. For those types of attributions that were not made, the indexed confidence was set equal to zero. This assumes that if a subject had any confidence in attributing a world to a particular origin, they would have actually responded in that manner. This transformation eliminated the effects due to individual differences while highlighting those confidences that were not typical. Table 5.2 presents the mean indexed confidences for responses to each origin.
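The indexing transformation might be sketched as follows. The exact formula is not given in the text; dividing each rating by the subject's overall mean is an assumption, consistent with indexed values being compared against an average of 1.0 (e.g., the below-average mean of 0.66 reported later for misses). Names are illustrative.

```python
def index_confidences(responses):
    """responses: list of (attribution, confidence) pairs for one subject.

    Returns the mean indexed confidence per attribution type. ASSUMPTION:
    each rating is divided by the subject's average confidence over the
    whole test, so a typical rating indexes to 1.0. Attribution types the
    subject never used get an indexed confidence of zero, per the text.
    """
    overall = sum(conf for _, conf in responses) / len(responses)
    by_type = {origin: [] for origin in ("Real", "Virtual", "Imagined", "New")}
    for origin, conf in responses:
        by_type[origin].append(conf / overall)
    return {o: (sum(v) / len(v) if v else 0.0) for o, v in by_type.items()}
```

For example, a subject who responded Real with 80% confidence and Virtual with 40% has an overall mean of 60%, so the Real response indexes above 1.0, the Virtual response below 1.0, and the unused Imagined and New attributions index to zero.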
A miss was defined as misattributing an "old" (i.e., Real, Virtual, or Imagined) world as New. In addition, a miss could have been misidentifying a world as old when in fact it was New on the test. The mean proportion missed for each origin, and the corresponding indexed confidence, are shown in Table 5.3.
There were differences in how subjects recognized old versus new worlds. They were not likely to miss Real and New worlds; the frequency was below chance (t(15) = -3.95, p<0.01; t(15) = -4.99, p<0.01, respectively). In contrast, subjects were as likely to miss Virtual and Imagined worlds as to correctly identify them as old (p>0.05). However, upon further analysis of the responses for these old worlds, it seemed that subjects did at least discriminate among the old choices of origin. There was a difference among the proportions of responses to these worlds (Virtual: χ²(4) = 14.08, p<0.05; Imagined: χ²(4) = 57.58, p<0.05).
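For reference, a goodness-of-fit chi-square statistic like those above can be computed directly from category counts. The sketch below assumes uniform expected frequencies when none are given; the expected frequencies in the original analysis may have differed.

```python
def chi_square_gof(observed, expected=None):
    """Goodness-of-fit chi-square statistic for observed category counts.

    If expected is None, a uniform distribution over the categories is
    assumed (an illustrative default, not necessarily what the original
    analysis used). Returns sum((O - E)^2 / E) over categories.
    """
    n = sum(observed)
    if expected is None:
        expected = [n / len(observed)] * len(observed)
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))
```

For example, with forty responses spread over four categories, counts of [16, 8, 8, 8] against a uniform expectation of 10 per category yield a statistic of 4.8.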
Subjects were discriminating with respect to their confidence in misses. Generally, subjects were relatively less confident than average when they missed (mean = 0.66, t(63) = -6.90, p<0.01). As shown in Table 5.3, subjects were particularly less confident when they missed an old world (t(15) = -4.49, p<0.01; t(15) = -3.61, p<0.01; t(15) = -3.63, p<0.01 for Real, Virtual, and Imagined worlds respectively).
To examine the effects related to the design of the experiment, miss data was also analyzed using a repeated-measures Latin square design (Appendix D). The within-subjects factors in this analysis were Worldset and Actual Origin (Real, Virtual, Imagined, New). The between-subjects factors were Number Imagined and Group (1, 2, 3, 4, 5, 6, 7, 8). Group reflected how subjects were assigned counterbalancing combinations as defined in Figure 5.3.
The frequency with which subjects missed worlds did differ significantly across Actual Origin (F(3,27) = 21.29, p<0.01). Subjects missed New worlds more frequently than worlds from any other origin (Student-Newman-Keuls, MSE(27) = 0.02, p<0.05). There was no difference in the frequency with which Real, Virtual, and Imagined worlds were misrecognized as New. The fact that old/new discrimination was comparable among the actual old origins has two implications. First, source monitoring among memories of actual old origins may be studied separately. Second, any potential differences in this process need not be qualified by considering differences in the ability to discriminate memories of old worlds from new.
Subjects could not discriminate memories from Virtual environments (t(15) = 0.72, p>0.05). An ANOVA using the design described above (Appendix D) confirmed that subjects correctly identified Virtual worlds less frequently than Real, Imagined, or New worlds (F(3,27) = 14.90, p<0.01; Student-Newman-Keuls, MSE(27) = 0.02, p<0.05). In general, given that the Virtual environment was as likely as any other environment to be experienced in the study phase, subjects did not respond Virtual as frequently as would have been expected (t(15) = -6.57, p<0.01).
This analysis also found three other effects on the measures taken. First, it was confirmed that subjects responded more confidently when correctly identifying Real worlds than any other type of world (F(3,27) = 4.02, p<0.02; Student-Newman-Keuls, MSE(27) = 0.10, p<0.05). Second, as predicted, the manipulation did have a differential effect on a subject's ability to correctly identify the origin (F(3,27) = 4.03, p<0.02). As seen in Figure 5.10, when subjects had to imagine fewer objects, they were less likely to correctly identify Imagined worlds (Student-Newman-Keuls, MSE(27) = 0.02, p<0.05). This accuracy was still above chance (t(15) = 2.81, p<0.05), but subjects were no more likely to discriminate memories of Imagined worlds than memories of Virtual worlds. Furthermore, in this condition, discrimination of memories of Imagined worlds was less likely than that for memories of Real worlds.
Finally, it was found that there was an interaction between Worldset and Actual Origin (F(9,27) = 2.81, p<0.02). There was a differential effect due to the stimuli on the memorability of Virtual worlds (Student-Newman-Keuls, MSE(27) = 0.02, p<0.05). As shown in Figure 5.11, worlds from Worldset A were correctly identified less frequently when they were experienced in Virtual environments as compared to Real environments. Likewise, worlds from Worldset D were correctly identified as Virtual worlds less frequently than Real, Imagined, or New worlds. This result highlights the importance of considering the counterbalancing scheme as a factor in the statistical design.
Figure 5.10: Mean Proportion Correct Identification of Origin by Actual Origin and Number Imagined.
Figure 5.11: Mean Proportion Correct Identification of Origin by Worldset and Actual Origin.
There tended to be support for the prediction that imagining fewer objects would lead to more confusion among memories of all old origins (F(1,8) = 5.00, p = 0.06). Subjects tended to be more confused when they had to imagine only two objects in the Imagined environment (mean = 0.21) than when they imagined all four objects (mean = 0.16). There was also a trend suggesting that the manipulation had a differential effect on the different types of confusions (F(5,40) = 2.04, p<0.10).
However, there was no support for the prediction that there might be confusion between memories from Virtual and Imagined environments. Although there was a significant effect of Type of Confusion for both the proportion confused (F(5,40) = 7.07, p<0.01) and the indexed confidence (F(5,40) = 12.04, p<0.01), just the opposite was found. Subjects misattributed worlds actually experienced in the Virtual environment to the Real environment more frequently than any other type of confusion (Student-Newman-Keuls, MSE(40) = 0.02, p<0.05). Furthermore, subjects had above-average confidence in these misattributions (t(15) = 4.37, p<0.01), which was higher than for any other type of confusion (Student-Newman-Keuls, MSE(40) = 0.23, p<0.05). In neither case were these measures symmetrical with the corresponding confusion. However, with regard to the interaction trend noted above, this asymmetry tended to disappear when subjects imagined all four objects, although misattributions of Virtual worlds to the Real environment remained the most frequent confusion (Student-Newman-Keuls, MSE(40) = 0.02, p<0.10).
Another distinction that contradicted the original hypothesis was that memories of Virtual worlds were not likely to be confused with an Imagined origin. Subjects misattributed memories of Virtual worlds more often to the Real environment than to the Imagined environment. With regard to the interaction trend noted above, this difference was more pronounced when subjects imagined all four objects; when they imagined only two, there tended to be no difference. Confidence in misattributions of Virtual worlds to the Imagined environment was also less than average (t(15) = -6.55, p<0.01).
Finally, there was little confidence when subjects misattributed memories of Real worlds to the Imagined environment (t(15) = -4.53, p<0.01). The indexed confidence for this confusion was less than, and asymmetrical with, its corresponding confusion.
5.2.2 Memory Characteristic Questionnaire
A 2x3 ANOVA was performed on each item. The between-subjects factor was Number Imagined and the within-subjects factor was the origin which subjects were rating. Only those items directly related to the hypotheses developed earlier will be presented. A full listing of results can be found in Appendix E.
As predicted, subjects did discriminate in how they rated memories of experiencing Real, Virtual, and Imagined worlds with regard to cognitive effort (F(2,28) = 46.14, p<0.01). As shown in Figure 5.12, subjects rated their experience in the Imagined environment as the most difficult when they tried to identify the global shape of a configuration (Student-Newman-Keuls, MSE(28) = 0.97, p<0.05). Experiences in the Virtual environment were rated relatively easier, while those in the Real environment were rated the easiest. Moreover, there was an interaction with the manipulation of cognitive effort (F(2,28) = 3.76, p<0.05). As shown in Figure 5.13, the rating for Virtual environments relative to Real environments increased when subjects imagined all four objects. These subjects rated identifying the global shape in the Virtual environment as easy as in the Real environment (Student-Newman-Keuls, MSE(28) = 0.97, p<0.05).
Figure 5.12: Mean Rating By Origin - Identifying the global shape of a configuration was: 1 = Very Difficult; 7 = Very Easy.
With regard to rated similarity, there was no support for similarity between memories of Virtual and Imagined environments. Figure 5.14 presents a multidimensional scaling of the pairwise ratings of similarity between memories. Subjects rated Real and Virtual memories the most similar. This rating was significantly greater than the rating of each of these memories' similarity with Imagined memories, both of which were quite low (F(2,28) = 7.07, p<0.01; Student-Newman-Keuls, MSE(28) = 2.58, p<0.05).
Figure 5.13: Mean Rating By Number Imagined and Origin - Identifying the global shape of a configuration was: 1 = Very Difficult; 7 = Very Easy.
Figure 5.14: Multidimensional Scaling of Rated Similarity Between Origins: 1 = Very Different; 7 = Very Similar.
The results indicate that the worlds experienced in the study phase were memorable. Real worlds were rarely missed. Although this was not the case for Virtual and Imagined worlds, the discrimination among the other choices of origin suggests that cues were encoded for these worlds upon which to judge origin. The memorability of old worlds may also be judged in the context of an apparent old/new recognition process. Distracters were rarely missed and were very likely to be correctly identified. Given that New worlds are identifiable because no cues were encoded with which to recognize them, worlds were probably attributed as old because there were sufficient cues to recognize them as either Real, Virtual, or Imagined. Conversely, old worlds that were missed probably did not contain evidence consistent with these criteria. The level of confidence for old misses supports this hypothesis. Given that confidence reflects the amount of information available in memory (Johnson et al., 1993), the relatively low confidence indicates that there was little information that matched the schema responsible for responding either old or new. This also suggests that confidence is a cue for recognition (Johnson et al., 1981).
There was also a deliberate decision process. Subjects were able to discriminate memories of worlds experienced in the Real and Imagined environments. They also were more confident when they correctly identified Real worlds. These decisions are consistent with the reality monitoring paradigm (Johnson & Raye, 1981) in that there are differences in quality between these sources, as originally experienced. Moreover, there are distinct sets of criteria based on these qualitative differences that uniquely characterize each source in memory and serve to identify where the memory actually originated.
It is evident from these results that the cognitive effort required to experience these worlds is one of these qualities and is characteristic of memories from the Imagined environment. It was thought that increasing the number of external cues which specified the global shape would increase the automaticity with which an image is generated and decrease the salience of cognitive effort in the decision process. If cognitive effort is a cue to source, then this manipulation would lead to more confusion about the origin of all memories. The trend for greater confusion when two objects partially specified the global shape is consistent with this hypothesis. Furthermore, if cognitive effort is characteristic of memories of Imagined worlds, then it should be more identifiable when this cue is more salient. Indeed, memories of Imagined worlds were more identifiable when the task was designed to be more difficult; i.e., when subjects imagined all four objects.
However, the manipulation of external cues in the Imagined environment could have simply maximized the use of perceptual qualities in discriminating memories from the Imagined environment. Memories of worlds which were imagined could have been more identifiable solely because they had fewer perceptual cues than those acquired in the Real and Virtual environments. However, this premise seems unlikely. Subjects indicated on the MCQ that it was more difficult to identify the global shape in the Imagined environment. Thus, it is likely that this effort was salient in the decision process. In fact, the rating differences among the three origins were sensitive to the manipulation of the number of objects imagined. This indicates that the relative level of cognitive effort was manipulated by the number of external cues and that this quality is considered in the decision process. Hence, evidence of cognitive effort appears to be a salient characteristic of memories with an Imagined origin. This conclusion is consistent with the source monitoring framework (Johnson et al., 1993; Johnson et al., 1981; Finke et al., 1988) and extends the use of this criterion to distinguishing between memories of spatially-distributed objects in real, virtual, and imagined three-dimensional environments.
With regard to virtual reality, there was no evidence of artifacts in memory which would distinguish this medium from reality. Subjects could not discriminate memories from Virtual environments, so it is not likely that there are any unique qualities which serve to characterize this origin. Nor is there evidence that cognitive effort exists in memories from Virtual environments, even though this quality was salient in the decision process. Rather than misattributing memories of Virtual worlds to the Imagined environment, it is more likely that these memories shared qualities with memories from the Real environment. The relatively high frequency and confidence for this type of confusion indicates that the qualities inherent in memories of the Virtual experience match the criteria set by the qualities associated with actual Real origins. The rated similarity between memories from Real and Virtual environments on the MCQ also bears this out. According to the theoretical framework (Johnson et al., 1993), these qualities have not typically been associated with cognitive effort. Finally, given the assumption that any significant artifacts would have engendered relatively little confidence in misattributing these memories to the Real environment, the relatively high confidence for these confusions indicates that any artifacts that did exist were not significant.
Furthermore, subjects seemed to disregard the fact that memories from Virtual environments were as likely as any other origin; it was as if their sense of reality were anchored to the Real versus Imagined dichotomy. The asymmetry of the confusions between memories from the Real and Virtual environments is consistent with this hypothesis. The fact that memories of Virtual worlds were more confused with Real origins than vice versa indicates that those qualities inherent in the schema for identifying Real worlds are more conspicuous in judging the qualities that these origins share. Also, given that the qualities for memories of Virtual worlds were not distinctive and similar to those characteristic of Real worlds, it seems that the experience created by virtual reality technology forms a subset of the Real experience in memory.