CHAPTER 7

THE RESULTS OF MY COLLABORATIVE WORLD

This chapter reviews my overall thoughts from this Master's project, my pilot test data, and considerations for further research. Through studying VRML 2, the Java External Authoring Interface, and client/server design, I concluded that two extremes of flexibility are possible in 3D virtual worlds. At one extreme, I could use the available technology to create a virtual chess board where the server enforces the official rules of chess. At the other, I could design a virtual world where nothing happens until participants bring their own objects into the world and make something happen. The latter might suit learning a foreign language, since each participant adds to the world the objects he or she feels comfortable discussing in the language under study.

Between the two extremes of world flexibility, there exists a broad spectrum of possible multi-user world rule sets. I tried to make my world fall near the center of a flexibility scale. My world has a fixed context, but much flexibility around that context. I found participating in such a world quite enjoyable and more rewarding than some other experiences I had visiting existing 3D multi-user worlds. Building and participating in such a world brought me to consider some interesting questions and conclusions.

Overall Thoughts

After thinking about VRML, Java, 3D virtual worlds, and networking for twenty months, I finished the Marbles world project with some thoughts that I did not entertain when I began the adventure. The following section documents some of my more prevalent thoughts.

1. The External Authoring Interface works acceptably on Pentium-class machines.

When I started building a client/server architecture for sharing VRML 2 worlds on the Web, I was not sure PC technology had advanced enough to be an appropriate platform for my world. I was extremely pleased with the potential I saw when I placed my first virtual marble on a flat piece of wood and watched it move about. That first marble moved quite naturally on my 90 MHz, 16 MB RAM PC.

By the time I finished building my world, I realized that I could share a realistic-looking world of 4 moving marbles, 40 fixed objects, and 5 moving objects, all with their own embedded physics, over the internet. For me, that level of performance is quite satisfactory as a platform for building interesting shared worlds. I do envision some optimizations I could apply to my code to speed things up further. For example, my collision detection strategy compares the location of each movable object to the location of every other object without regard to regions on the board. For 4 marbles and 40 objects, my code makes 160 comparisons per animation loop. With some added logic in the collision detection code, I could probably cut those comparisons down to 20 or so per animation loop with minimal added overhead.
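To illustrate the kind of region logic I have in mind, the sketch below (written for illustration here, not taken from my code) buckets objects into grid cells so that each marble is compared only against objects in its own and neighboring cells; the Obj class, cell size, and method names are assumptions of the sketch.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // A sketch of a grid-based broad phase, not the Marbles world code.
    // Obj and CELL_SIZE are illustrative assumptions.
    class CollisionGrid {
        static final double CELL_SIZE = 1.0; // board units per grid cell

        private final Map<String, List<Obj>> cells = new HashMap<>();

        private static String keyFor(double x, double z) {
            return (int) Math.floor(x / CELL_SIZE) + ","
                 + (int) Math.floor(z / CELL_SIZE);
        }

        void insert(Obj o) {
            cells.computeIfAbsent(keyFor(o.x, o.z), k -> new ArrayList<>()).add(o);
        }

        // Return only the objects in the same or adjacent cells, so a marble
        // is checked against a handful of neighbors instead of all 40 objects.
        List<Obj> candidates(Obj marble) {
            List<Obj> result = new ArrayList<>();
            int ix = (int) Math.floor(marble.x / CELL_SIZE);
            int iz = (int) Math.floor(marble.z / CELL_SIZE);
            for (int dx = -1; dx <= 1; dx++)
                for (int dz = -1; dz <= 1; dz++) {
                    List<Obj> bucket = cells.get((ix + dx) + "," + (iz + dz));
                    if (bucket != null) result.addAll(bucket);
                }
            return result;
        }
    }

    class Obj { double x, z; }

Rebuilding such a grid each animation loop is cheap for a few dozen objects, and the candidate lists stay small as long as the objects are spread across the board.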


2. My world could be placed within any other world.

As I created my world, I kept reminding myself that my world could be contained inside any other 3D architecture imaginable. I continue to be fascinated with the possibility of creating smaller worlds of interest that cyber-participants can carry around with them and use with others. VRML 2 is a language with no inherent unit of measure. If I had the computing power, I could simulate interactions with objects starting at the atomic scale and building right up to the scale of galaxies, just by building each object out of the appropriate virtual atoms. Any VRML object made available on a Web server can be incorporated into any other VRML world. A single VRML object created by a single person and placed in a single VRML 2 file on a Web server could almost immediately appear in millions of different VRML worlds if the network passed it around and others referenced it in their worlds. The next step is to create objects that can interact with other objects. It appears to me that Java could add behavioral logic to virtual objects until VRML finds a way to standardize the process. Then, smaller objects of interest could come alive in virtual worlds all around the planet. My world, just like any other VRML and Java based world, could literally become an overnight sensation.
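As a sketch of how a remotely hosted object could be pulled into a running world today, the following illustrative Java uses the EAI as I understand it; the SHARED_OBJECTS group name and the URL are examples only, and the world file would need to DEF a Group node under that name.

    import java.applet.Applet;
    import vrml.external.Browser;
    import vrml.external.Node;

    // A sketch only: pull a remotely hosted VRML object into a running world.
    // SHARED_OBJECTS and the URL are illustrative assumptions.
    public class RemoteObjectLoader extends Applet {
        public void loadSharedObject() {
            Browser browser = Browser.getBrowser(this);
            Node group = browser.getNode("SHARED_OBJECTS");
            // Ask the viewer to fetch the file and route the resulting
            // nodes into the group's addChildren eventIn.
            browser.createVrmlFromURL(
                new String[] { "http://example.com/marble.wrl" },
                group,
                "addChildren");
        }
    }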

3. There are lots of interesting worlds to build with this technology.

As I created my world, I kept thinking of all the other interesting worlds that I could be creating instead. Almost any existing board game could be implemented using this technology. But, I am most interested in worlds that allow participants to design new worlds or board games and try out different rules to stumble across successful designs. I believe this technology will allow groups of geographically dispersed people to design new worlds which no one individual in the group would have ever imagined on his or her own. I believe this technology could allow consumers to visualize new products and gain the support of other consumers who could then contact manufacturers about making the product. Spontaneous connectivity of groups that share virtual worlds without consideration of geographical distance is an affordance of the Web which has never existed before. It is very possible the most interesting worlds have not yet been imagined.

4. There is a lot that can be done even with latency.

Although I found this technology infeasible for creating a Quake or Doom world that could keep up with the latest “shoot-em-up” games, I did find it to work well for any board game that requires contemplation and reflection before each action is taken. As long as decisions need not be made in a sub-second time frame, any simple world that involves sharing information could be developed with this technology. 3D chat worlds work fine with the latency of the internet, but it is time to reach beyond simple chat worlds.

5. A server is more unbiased than many moderators or facilitators.

As a person who always disliked participating in games where people cheated, and who sometimes disliked how long it took others to make a move, I found the use of a server as a facilitator to offer some substantial benefits. A server enforces rules reliably without a chance of personal bias against any player (unless the bias is programmed in). A server obeys agreed-upon time limits. And, in the extreme, a server can take action on behalf of a participant if need be (an interesting possibility is to finish a game for a player who leaves the world before the game is over). A server can also create teams and connect players in a fair and unbiased manner without hurting anyone’s feelings.

Outside of game playing, a server can also be programmed to enforce rules that keep a group discussion fair, making sure each participant gets equal time in group communications.
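A minimal sketch of such server-side enforcement, assuming an invented Player abstraction (the time limit, hasMoved(), and makeDefaultMove() are illustrative, not part of my server):

    // A sketch of server-side turn enforcement; the time limit, Player
    // interface, and default move are illustrative assumptions.
    class TurnTimer {
        static final long TURN_LIMIT_MS = 30_000; // agreed-upon time limit

        void runTurn(Player player) {
            long deadline = System.currentTimeMillis() + TURN_LIMIT_MS;
            while (System.currentTimeMillis() < deadline) {
                if (player.hasMoved()) return; // the player acted in time
                try { Thread.sleep(100); } catch (InterruptedException e) { return; }
            }
            // Time expired or the player left: the server acts on the
            // player's behalf, the same way for every participant.
            player.makeDefaultMove();
        }
    }

    interface Player {
        boolean hasMoved();
        void makeDefaultMove();
    }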

6. There is a lot of potential for a Java 3D API.

By the time I finished using the EAI, I realized that most of the value of my world was provided by the Java code rather than the VRML 2. Yet, much of the freedom I gained from using VRML 2 came from the VRML 2 viewer, which gives me the ability to explore my world with six degrees of freedom. I certainly would not have wanted to program that functionality myself.

Yet, if there were Java classes I could use to easily recreate the VRML 2 viewer functionality I liked, I might not want to limit myself to VRML 2 as a file format for 3D models. I believe the approach Sun Microsystems, Inc. is taking in designing its 3D API is a valid one. With Sun's Java 3D API, I will be able to populate a Java-based scene graph from any 3D file format I wish by using the appropriate loader class. I see much potential in a working implementation of a Java 3D API. Although others have been working on Java-based APIs for sharing VRML worlds, Sun should be able to leverage its API through its association with other Java innovations.
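A sketch of the loader-class pattern as I understand the proposal; the ObjectFile loader and file name are assumptions here, and a VRML loader would slot into the same Scene/BranchGroup pattern:

    import com.sun.j3d.loaders.Scene;
    import com.sun.j3d.loaders.objectfile.ObjectFile;
    import com.sun.j3d.utils.universe.SimpleUniverse;

    // A sketch of the loader-class idea: the scene graph is pure Java 3D,
    // and the file format matters only to the Loader implementation chosen.
    public class LoaderDemo {
        public static void main(String[] args) throws Exception {
            SimpleUniverse universe = new SimpleUniverse();
            universe.getViewingPlatform().setNominalViewingTransform();

            // Swap in a different Loader implementation to accept a
            // different 3D file format (e.g. a VRML loader).
            Scene scene = new ObjectFile().load("marble.obj");
            universe.addBranchGraph(scene.getSceneGroup());
        }
    }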

Experiment Results

I ran twelve subjects through two simulations each. Each subject was placed randomly in a group with two other participants. In the first simulation, a group tried to reach goals in a world designed by another group. In the second simulation, participants designed the rules and the world before attempting the same objective in the world of their own design. I compare results between the two groups that interacted with the same world: the first group participated in a world the second group had designed, and the second group participated in that same world with the advantage of having already participated in a world of another group's design. Table 7.1 shows the simulation scores attained by the participant groups. The first row presents the number of simulation frames a group required to finish the simulation when the world had been designed by a previous group. The second row presents the number of frames the group that designed the world required to finish. The lower the score, the better.


Table 7.1 Simulation Completion Times

World Iteration Number:      1      2      3      4
Play Others’ Design:      1441  2200*    710   1646
Play Own Design:          1116   1805    685   1178

* Estimated, as the group reached only 9 of 10 goals in the maximum time of 2000 frames.

Not surprisingly, in all four cases, the group that had designed the board completed the simulation faster than the group that had no influence on its design. On average, the designers completed the simulation 20% faster than the non-designers. Yet, in all four cases, the participant groups first ran the simulation in the world they had not designed, and ran their own designed world second. By observing the individuals at work, I noticed that they experienced a significant learning curve, and I suspect their better performance in their own world could be attributed to their simply being better at performing in general the second time around. Two types of learning were taking place during the simulations: first, technical learning, which involved understanding the interface and how the system responded to human interaction; and second, collaborative learning, which involved understanding how to work with others when success relied on that cooperation. I believe the participants had moved up the learning curve significantly between the first and second simulations, yet I also believe they still had a lot more to learn after participating in two simulations.

Figure 7.1 shows the results of the ten objective questions of the post-participation questionnaires. For each question, the left-most bar represents the first questionnaire, filled out by participants immediately after playing a collaborative world of another group’s design. The right-most bar represents the second questionnaire, filled out immediately after playing a collaborative world of their own design. The y-axis represents the average response on a scale of 1 to 7, where 1 means strongly disagree and 7 means strongly agree.

Figure 7.1 Post-Simulation Questionnaire Results

Perhaps the differences between the two questionnaires are not as significant as the average response value for each question. I now discuss each of the ten questions and the responses in detail:

Question 1: I enjoyed participating in the simulation.

 

                  Mean   Hi   Lo
Questionnaire 1   5.58    7    4
Questionnaire 2   6.00    7    4

 

I am pleased to find that the participants generally enjoyed participating in the simulation. Participants enjoyed the simulation more when they had been involved in its design than when they used another group’s design. Of all 24 questionnaires completed, not a single participant answered on the disagree end of the scale. I believe their enjoyment would have been even greater had they not experienced some of the technical difficulties I discuss later.

Question 2: I became immersed in the simulation with little awareness of other things going on around me.

 

                  Mean   Hi   Lo
Questionnaire 1   5.00    7    2
Questionnaire 2   4.92    7    1

 

I found the responses to question 2 to vary greatly by individual. I am pleased with the level of immersion reported by participants, but am not sure how they define immersion. I understand why the participants reported a sense of less immersion in the design phase. Many participants did not watch the computer screen while others were taking their turn designing; instead, they looked away during that time. Certainly, I could have shown the movements of other participants’ design pieces while they were moving pieces about on their monitors, but I had thought that would be distracting. I now believe participants would have felt more immersed if they could have followed other participants’ actions. I found it very curious that most participants were not planning their upcoming move when it was not their turn. I observed participants who acted as if they should not do anything unless it was their turn, including thinking about the task at hand.

Question 3: I would like to participate again with the same group of participants.

 

                  Mean   Hi   Lo
Questionnaire 1   5.42    7    2
Questionnaire 2   5.58    7    3

 

Overall, participants were quite enthusiastic about participating again. I am pleased with the fact that participants wanted to go again even when they encountered technical difficulties. Participants were quite eager to participate with the same group even though they did not know their fellow participants in many cases. Participants were more eager to participate again after they had been involved with designing the world.

Question 4: I would like to participate again with a new group of participants.

 

                  Mean   Hi   Lo
Questionnaire 1   5.50    7    4
Questionnaire 2   5.83    7    3

 

Participants were even more enthusiastic about participating with a new group of people. I observed that each participant had his or her own unique strategy which did not necessarily work well with other strategies within their group. I believe participants wanted to go again with a new group because they had high hopes the next group would think more like themselves. Again, participants wanted to participate even more after being involved with the design of the world. I also believe that they wanted to go again because they were learning some things that they needed more time to figure out completely.

Question 5: I thought the technology performed well.

 

                  Mean   Hi   Lo
Questionnaire 1   4.83    6    4
Questionnaire 2   4.33    5    2

 

Participants did not rate the technology particularly high, but I am pleased with their responses given the technical problems encountered. Of all 12 simulations, two had minor problems that affected results and two had major problems. The other eight simulations ran well enough that the participants were not aware of any problems. I believe the major cause of technical problems was that users moved their mice much more aggressively than I anticipated. The CosmoPlayer browser evidently runs code to track positioning whenever the user moves the mouse. When I ran my timing studies, I did not move the mice so often and was overconfident that the worlds would stay in synch.

Twice the technology crashed and I had to start the simulations over. The first time, the group had just finished the design of their board when the Java console reported an “Invalid Field” exception. I had not experienced that error in any of my testing. I believe that crash did not affect their results, since I restarted the simulation from that exact point after a five-minute hiatus. The second crash occurred 680 frames into the active simulation. That crash was much more frustrating for me. I had the group fill out their questionnaires at that point, but then restarted the active simulation to get a timing result from the group. The world ran acceptably the second time. I did not have time to investigate the reason for the crash, though I did notice that the group that experienced the crash had the most active mouse habits.

The two simulations with minor problems could also be attributed to mouse use. In those simulations, frame rates sped up and slowed down dramatically on the fastest PC, leaving the participant at that computer feeling a bit frustrated and annoyed. Eventually, the speed swings settled down in both cases, but not before significantly affecting the experience for that participant.

Question 6: The interface was natural to use.

 

                  Mean   Hi   Lo
Questionnaire 1   4.67    6    2
Questionnaire 2   4.50    6    3

 

Participants did not rate the interface particularly high, but did a good job of indicating what would have made them rate it higher. Although I reviewed the significance of each control with them before the simulation, many told me they had forgotten which control was which. I believe they would have rated the interface better had I included labels for the controls. I had decided not to use labels because I felt they would take away from the immersive experience. Others complained about the cues from the slant queue, wishing the slants were updated more often. I now believe I should update the slant queue as soon as a new slant is chosen by a user. Instead, I was waiting until the next slant change. By waiting, users were unable to tell what other participants had selected during their turn until after they took their own turn, so they could not make an informed decision.
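The fix amounts to pushing each selection to every client the moment it is made. A minimal sketch, with the SlantQueue and Connection types invented here for illustration:

    // A sketch of immediate slant-queue updates; Connection and the
    // message format are illustrative, not my server's actual protocol.
    class SlantQueue {
        private final java.util.List<Connection> clients;

        SlantQueue(java.util.List<Connection> clients) {
            this.clients = clients;
        }

        // Old behavior: hold the choice until the next slant change.
        // New behavior: broadcast it now, so other participants can make
        // an informed decision during their own turns.
        void onSlantChosen(int playerId, int slant) {
            for (Connection c : clients) {
                c.send("SLANT " + playerId + " " + slant);
            }
        }
    }

    interface Connection {
        void send(String message);
    }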

Lastly, participants found the movement of the board to jump at times. I believe that I could smooth the movement out by changing some of the VRML code, but I think some of the jumpiness is from code embedded in the VRML viewer itself. I need to spend more time investigating my options for smoothing out the board movement.

Question 7: I felt like I was treated as an equal in the simulation.

 

                  Mean   Hi   Lo
Questionnaire 1   6.50    7    4
Questionnaire 2   6.17    7    3

 

The participants overall strongly agreed that they were treated equally by the server. I am pleased with this result, since I suggest that a server, as a just and fair facilitator of human communications, can make collaborating much more enjoyable in certain situations. I believe the only reason participants were not unanimous in strongly agreeing with the statement was that, during the rounds with technical difficulties, certain participants thought their computer was misbehaving worse than the others. I thought it interesting that participants perceived the difference without actually seeing anyone else’s machine. The fact that participants rated fairness lower during design rounds supports my belief: both rounds with serious technical problems were design rounds.

Question 8: I learned something about collaboration during the simulation.

 

                  Mean   Hi   Lo
Questionnaire 1   4.92    7    2
Questionnaire 2   5.42    7    3

 

Overall, participants agreed that they learned something about collaboration by participating in the world. I learned a lot about collaboration just by watching them try to collaborate, and so was hoping for even higher results. I think they would have learned more had they kept working on additional boards and running additional simulations. Participants learned more about collaboration when they collaboratively designed the world. I noticed that many individuals with great strategies did not get the response from others they needed to see those strategies fully enacted. Other participants realized only later what could have been done had they cooperated better. As I mentioned earlier, I was surprised they did not spend more time thinking when it was not their turn, because they could have figured out more during the design phase.


Question 9: I had more control of what took place than I anticipated.

 

                  Mean   Hi   Lo
Questionnaire 1   4.00    6    1
Questionnaire 2   4.67    7    3

 

The results for question 9 are difficult to analyze, as participant answers varied greatly across the scale. I was expecting the results to be much lower than they were, since I did not think participants felt they had much control; their subjective answers report their frustrations. Perhaps they did not anticipate having much control in the first place. Participants found that they had more control when they controlled the design of the board.

Question 10: I would like to have more control of what happens in the simulation.

 

                  Mean   Hi   Lo
Questionnaire 1   5.08    7    3
Questionnaire 2   5.08    7    1

 

Participants generally wanted more control, but many reported that they learned more about collaboration by not having a lot of control. I find it interesting that participants did not rate this question higher on questionnaire 1 than on questionnaire 2, since on questionnaire 1 many mentioned wanting the control over the design which they then received in the second simulation. Perhaps the control over the design did not make the difference they had hoped for.

The following is a review of the comments from the four subjective questions asked on the questionnaire:

Question 11: Please describe your frame of mind when you started the simulation.

First Questionnaire:

 

1. A little tired, a little excited
2. Enthusiastic
3. Tired
4. Confused but anticipating
5. Curious
6. Tired from writing papers but otherwise in good spirits
7. I was confused at first because of the numerous operations available to me
8. Curious
9. The demonstration helped clear some of the cloudiness I had after reading the introduction so I was much better prepared, but still a little less on the what to do or the purpose.
10. A little tense and wanting to do well.
11. Open to whatever was going to happen.
12. Interested in whatever would occur.

 

Second Questionnaire:

 

1. Looking forward to trying
2. Thought ahead. Was excited about the game.
3. Tired
4. A little tired, a little excited
5. Alert
6. Planning
7. Hopeful about being able to create a good board
8. Anticipation of playing my game
9. Anticipating
10. Curious again
11. I was uncertain about where to place the obstacles at first.
12. Having already done it once, I was more aware of the consequences/results. So I was more comfortable.

 

I found the comments before the design simulation to be more specific to the task at hand than those before the first simulation. Most individuals appeared to be open-minded about the upcoming simulation rather than holding negative expectations.

Question 12: Please describe your frame of mind during the simulation.

First Questionnaire:

 

1. Fairly engaged
2. Discovery
3. Tense, trying to remember rules
4. Frustration, anticipation
5. A bit frustrated by the lag, but still having fun
6. More actively engaged in activity because I saw evidence of my direct manipulation of objects.
7. I became more focused as the game moved along.
8. Adapting to the delayed response
9. The slowness of the computer confused me a bit because the simulation crawled.
10. Frustrated with the lag and my catching on rather late about how to plan my movements well.
11. Concentrated on the task at hand trying to set up the next shot
12. Analyzing - trying to find out how best to play and what was going on

 

Second Questionnaire:

1. Intense, so much to think about
2. Was trying to find the best ways to reach the goals
3. Busy
4. Patient, a little confused
5. Absorbed
6. Extrapolating
7. Heightened awareness and focus on what was happening
8. Eager to sink the balls to prove our design was good
9. Go get ‘em!
10. Idling to some extent
11. I had a better idea about where I wanted to place the obstacles as time went along.
12. The smoother frame rate made it easier to comprehend what was happening, but the screen movements were far from consistent so I was still a little confused.

 

I found the participants to be quite engaged with the simulation based on comments about their thoughts during the simulation. I did not notice any strong differences between the first and second simulations as both sets of comments seem to reflect similar thoughts.

Question 13: Please describe your frame of mind now (after the simulation).

First Questionnaire:

 

1. A little tired, a little happy
2. Enthusiastic
3. Relaxed, more alert
4. Ready to go again
5. Interested in constructing my own board
6. Interested in moving on to a game that I design
7. I now realize what was taking place. I now feel more comfortable.
8. About the same as before
9. Though I was a bit confused, during the simulation I comprehended
10. Happy - I am on a date
11. Want to do it again
12. Curious about whether we could learn more and do better next time

 

Second Questionnaire:

 

1. Still intense
2. Want to play again, want to be more effective, more goal oriented
3. Tired but alert
4. A little tired, a little bemused
5. Want to do over
6. Contemplative
7. Cool experiment. Fun. Glad to see what a difference my choices made.
8. Intrigued about the possibilities of this technology
9. Job well done, at least on my computer
10. Same as during
11. I wish I would have done a better job of placing my obstacles
12. I can see how this simulation can be useful for future applications. Call me when 4.0 comes out (hey, its a popular version number).

 

I found the comments about post-simulation thoughts to be quite promising, as most reflect a state of positive reflection about the simulation. I notice no significant difference between participants’ thoughts after design simulations and non-design simulations.

Question 14: Please list any frustrations you experienced while participating in the simulation.

First Questionnaire:

 

1. Another participant was yelling.
2. Others choosing too rapidly
3. Had trouble remembering what tools on palette did
4. Too slow, not enough obvious control over things (but I imagine that is part of the game)
5. Effects seem to be of limited use when the lag time before their start and the amount of time they last are unknown.
6. Delay in seeing future slopes (couldn’t tell if my team mate chose the slope or if the computer did). The cues for what was next came up too slowly so I had to pick a slope based on choices before last.
7. The effects didn’t work as well as I would have liked. They did not respond as much as I hoped.
8. slow
9. Just the slow server
10. See above (frustrated with the lag and my catching on rather late about how to plan my movements well).
11. None I can think of
12. I wasn’t really frustrated. I enjoyed learning.

 

Second Questionnaire:

 

1. Other people not doing what I expected
2. I felt my turn appeared for only short periods of time.
3. None
4. Group answers were unclear
5. Remembering what tools did what
6. Refresh rate of future slants
7. Didn’t know we completed goal - one computer had the feedback.
8. During the rules voting phase, the answers returned did not seem to be in the same format as the questions. For example, I chose “3” viscosity and then saw the selected coefficient was .45 or something. what is the relationship of scale?
9. My computer finished before others - not quite as fully integrated as it needed to be but great fun.
10. Again, it was slow
11. I was frustrated that my group apparently used most of the same types of obstacles instead of using some variation.
12. As before, the screen wasn’t always consistent. This made it harder to make/anticipate moves - both mine and those of other players.

 

I will address all frustrations from question 14 here, as I think the comments made by participants were especially insightful. As for comment 1 of questionnaire 1, I would expect participants not to hear another participant's yelling if they were dispersed all over the globe. In response to comment 2 of questionnaire 1, I would expect participants not to choose too quickly once they learned how to collaborate successfully in the world; choosing too quickly is no more effective a strategy here than it is in chess. As for comment 3 of questionnaire 1 and comment 5 of questionnaire 2, I agree that I should strongly consider adding labels to palette objects during the initial learning phase. In response to comments 4, 8, and 9 of questionnaire 1 and comment 10 of questionnaire 2, I believe there is a place on the Web for slower, reflective activities; Marbles world is not meant to be a typical video game experience. In response to comments 5 and 7 of questionnaire 1, I should consider helping first-time participants by giving them more information about effects until they understand how to ascertain the timings themselves. In response to comment 6 on both questionnaires, I agree wholeheartedly that I should update the slant queue immediately instead of waiting for the next slant change.

In response to comment 1 on questionnaire 2, I believe participants' actions would come together more after participating together for a period of time; my whole objective centers on the belief that my world would teach better collaborative behaviors. In response to comments 7, 9, and 12 of questionnaire 2, I must make the technology work acceptably every time, as technical problems greatly affected the opinions of participants, although I believe most expected problems at some level and that helped keep them enthusiastic. I believe they saw their feedback as important in the process of making a better simulation. Technical problems cause me to give up on a virtual community more than any other factor, and I have no reason to believe I am different in that behavior from a typical Web citizen. In response to comment 8 of questionnaire 2, I agree that I should provide more integrated feedback on group decisions. In response to comment 11 of questionnaire 2, I believe I must provide a mix of obstacles that makes for interesting combinations of strategies. I have no doubt that better obstacles could have enabled better designs by participants; even so, two of the groups created very successful-looking designs with the obstacles they were given.

What To Do Next

I have found the results from this project to be quite encouraging, yet I believe there are some steps I could take to make the next go-around even more successful. Most importantly, I need to fix the technical problems that affected 4 of the 12 simulations. The server I wrote and used for the project contains very few lines of code dedicated to the inter-world communications that keep multiple copies of the world in synch. In fact, each client reports timing data only once every 50 frames. I can experiment with more frequent time reports, which show promise of correcting divergent behaviors more rapidly. I can also consider stopping a client process when it gets far ahead of the other clients, in order to let them catch up gracefully.
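A sketch of the tighter synchronization I have in mind, with the report interval, lead threshold, and pause signal as illustrative assumptions rather than my current protocol:

    // A sketch of throttling a runaway client; the constants are assumptions.
    class FrameSync {
        static final int REPORT_INTERVAL = 10; // frames between reports (was 50)
        static final int MAX_LEAD = 25;        // frames a client may run ahead

        private final java.util.Map<Integer, Integer> lastFrame =
            new java.util.HashMap<>();

        // Called when a client reports its frame number; returns true when
        // that client should pause so slower world copies can catch up.
        synchronized boolean onReport(int clientId, int frame) {
            lastFrame.put(clientId, frame);
            int slowest = Integer.MAX_VALUE;
            for (int f : lastFrame.values()) {
                slowest = Math.min(slowest, f);
            }
            return frame - slowest > MAX_LEAD;
        }
    }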

I also can experiment with other interface designs, including using labels for controls until participants feel they no longer need them. I definitely must consider updating the slant queue feedback as soon as a slant is chosen by any participant; I had not thought through how important the upcoming slant information would be for taking the best action based on the prior participant's turn. I would like to allow a user to drag and drop a new obstacle from the obstacle palette instead of having to click first, move the mouse, and then drag. Currently, using the External Authoring Interface, there is no easy way to create a new VRML object without clicking on something in the world first. Perhaps I can come up with a creative solution if I give the problem more thought. I also must consider providing better rules information for first-time participants. Although I believe a seasoned user could figure out the rules by interacting with the simulation, I would not want to lose potential long-term community members to initial frustrations.

Once I fix the existing technical problems, I can open up the world so that others can help design new rules, obstacles, and effects for inclusion in the simulation. I personally have 50 or more ideas I would like to implement myself, but I believe I must allow the community to come up with ideas; I feel that most of mine would be independently suggested by other participants in due time anyway. As computing power increases, I believe the obstacles and effects in the world could become very sophisticated. Marbles world could become significantly more active than its current implementation, and the marbles might even take on different shapes and behaviors. Marbles world would not have to exist in a single x-y plane, but could instead include multiple boards stacked on top of each other along the z-axis. Yet, to really test my ideas and beliefs, I need to create a community around this project and let the community drive the project in whatever direction it wants. Perhaps I would become a facilitator more than a designer. A community working on a collaborative Web project needs technical assistance in determining what is possible given computing trade-offs related to resources. I have experienced the frustration of working on technical ideas without any guidance as to what constitutes a reasonable design for a given computing platform; many of my Lotus Notes applications were designed without any sense of whether they could perform well on the computers my users had. I need to come up with reasonable guidelines for the obstacle and effect designs of community members.

In the very long term, applications like Marbles world should be able to be experienced by hundreds or thousands of participants at the same time. Server-less, distributed architectures show promise of supporting large numbers of users. I can begin to look at alternate architectures that rely on multicasting in order to scale Marbles world up to larger communities.
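As a first experiment in that direction, Java's standard MulticastSocket could carry state updates from peer to peer; in this minimal sketch the group address, port, and message format are illustrative:

    import java.net.DatagramPacket;
    import java.net.InetAddress;
    import java.net.MulticastSocket;

    // A sketch of server-less state sharing over IP multicast.
    // The group address, port, and message format are assumptions.
    public class MulticastPeer {
        public static void main(String[] args) throws Exception {
            InetAddress group = InetAddress.getByName("239.1.2.3");
            MulticastSocket socket = new MulticastSocket(4446);
            socket.joinGroup(group);

            // Announce a state update to every peer in the group at once.
            byte[] update = "MARBLE 1 2.5 0.0 3.1".getBytes();
            socket.send(new DatagramPacket(update, update.length, group, 4446));

            // Receive updates multicast by other peers.
            byte[] buf = new byte[256];
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            socket.receive(packet);
            System.out.println(new String(packet.getData(), 0, packet.getLength()));

            socket.leaveGroup(group);
            socket.close();
        }
    }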

Yet, in the short term, I can consider building other applications that use the architecture I used for Marbles world. I believe this architecture will work well for group design and play of golf courses, croquet lawns, ski slopes, and all kinds of applications where the rules are already somewhat standard and require little new learning. To me, the interest will continue to be in building interesting communities on the Web where people meet for specific purposes that are engaging and educational. I continue to hope that technologists can develop the infrastructure with which Web participants can create the communities they want most, and that Web surfers will be able to work together to create interesting worlds. I believe my work shows that many users would be interested in trying such communities at least once. I found all my subjects to be very enthusiastic before, during, and after their initial two simulations. In fact, all but one requested to be invited back once I had spent more time building a better application.

I have learned that building networked applications is very difficult to do alone. The server code turned out not to be the monster I had anticipated, thanks to existing Java classes that hid the complexity. Yet, testing certainly was difficult when working with computers that physically sat at least 20 feet from each other. I can invest more time in creating better test methods. Lastly, I must continue to consider new technologies for implementing 3D multi-user collaborative worlds on the internet. The Java 3D API from Sun Microsystems, Inc. shows great promise as a delivery platform. As my creative thought process settles into more concrete paths of action, I can apply better engineering techniques to quantify and test my development efforts. Through this project, I have learned that the creative thought process can continue for weeks at a time. I need to run more concrete tests before I can begin to weed out ideas based on merit.