Appendix A - GreenSpace Prototype User's Guide
A.1 Introduction
This prototype of the GreenSpace virtual teleconferencing system was
constructed in order to explore various aspects of my thesis, entitled Spatial
Culling of Interpersonal Communication within Large-scale Multi-user Virtual
Environments.
According to this thesis, users and networks of traditional multi-user virtual
environment applications quickly become overloaded by positional, audio, and
other types of interpersonal communication data from other users. The thesis
explores intuitive and efficient techniques for reducing these cognitive and
network loads through spatial culling.
This prototype allows many users to connect, via the MBone, to a
serverless (peer-to-peer communication only) virtual environment representing an
architectural or teleconferencing application. Within this environment the users
are able to see each other move around and can "listen" to what the other users
have to say (audio is simulated by textual messages). For comparison, this
prototype can be run in broadcast or multicast modes. While in multicast mode,
the spatial culling technique is employed.
A.2 Obtaining a Userid
Before using the prototype you should request a unique id by mailing
firstname.lastname@example.org and asking for a gsuser userid. As soon as I can,
I will mail you a unique userid that is necessary for operating the prototype and
will be used to identify any feedback you submit.
A.3 Starting the System
This prototype will run on any SGI in the HITLab or in CSE (both the Grail
and gws machines, but not the iws machines). The program will not function, even
remotely, on any other machines, as it requires your display to be GL capable.
Once you are sitting at an appropriate machine and have your userid, you
should start both modes, broadcast and multicast, simultaneously from two
different terminal windows.
Figure A.1: Example Desktop Setup for Prototype
Starting Broadcast mode:
This mode is started by executing the following at the command line:

gsu -B yourUserid

In this environment you are tuned to a single multicast channel for sends and
receives, as is everyone else in the environment. You receive position updates
and "audio" broadcast from all users in the environment, regardless of their virtual
position relative to you. This is considered the base case, as it was the typical
approach to providing communication between users in some older virtual
environment systems, including SIMNET. Unfortunately, this model does not scale
well: as the number of users increases, the networks and users become overloaded.
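To see why the broadcast model breaks down, consider a back-of-the-envelope sketch (not part of the prototype; the update rate is an assumed figure): every user receives updates from every other user, so per-user receive load grows linearly with population and total network traffic grows quadratically.

```python
# Illustrative sketch of broadcast-mode load (not prototype code).
# updates_per_sec is an assumed per-user send rate.

def broadcast_load(num_users, updates_per_sec=10):
    """Return (per-user receive rate, total messages on the wire) per second."""
    per_user = (num_users - 1) * updates_per_sec   # everyone hears everyone else
    total = num_users * per_user                    # grows as O(n^2)
    return per_user, total
```

With 10 users each sending 10 updates per second, each user must process 90 incoming updates per second; with 100 users, 990 per second, and the network carries 99,000 messages per second in total.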
Starting Multicast Mode (w/Spatial Culling):
This mode is started by executing the following at the command line:
This environment is automatically spatially partitioned into a grid of 10 by 10
multicast groups. There is also a 101st multicast group that is used to broadcast
group change announcements. As you move through the environment you send your
position and communication data to the group that you are currently in, while
listening to data being sent to that group and its 8 nearest neighbors. These 9
multicast groups form your communication area. When users are outside of your
communication area, you can see their approximate position based on their group
change announcements (indicated by showing the user in the center of that
communication group). This is similar to the approach detailed in an NPS paper
entitled Exploiting Reality with Multicast Groups, which was presented at
VRAIS '95.
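The partitioning scheme above can be sketched as follows. This is an illustrative reconstruction, not the prototype's actual code: the function names, the world size, and the mapping from world coordinates to grid cells are assumptions; only the 10x10 grid, the 101st announcement group, and the 9-group listening neighborhood come from the description above.

```python
# Hypothetical sketch of the spatial partitioning described in the text.
GRID = 10                      # 10 x 10 grid of communication groups (ids 0..99)
ANNOUNCE_GROUP = GRID * GRID   # the 101st group, for group change announcements

def cell_of(x, y, world_size=100.0):
    """Map a world position to its (col, row) grid cell (assumed square world)."""
    size = world_size / GRID
    col = min(GRID - 1, max(0, int(x // size)))
    row = min(GRID - 1, max(0, int(y // size)))
    return col, row

def send_group(x, y):
    """The single group a user sends position and 'audio' data to."""
    col, row = cell_of(x, y)
    return row * GRID + col

def listen_groups(x, y):
    """The sender's group plus its (up to) 8 nearest neighbors:
    the 9-group communication area."""
    col, row = cell_of(x, y)
    return sorted(r * GRID + c
                  for r in range(max(0, row - 1), min(GRID, row + 2))
                  for c in range(max(0, col - 1), min(GRID, col + 2)))
```

Note that a user in a corner or edge cell has fewer than 8 neighbors, so the communication area shrinks at the world boundary.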
Furthermore, even when a user is in your communication area (which is
rather large), you may not receive their "audio" data. The "audio" data is only
displayed to you if you are within a specific virtual distance of the user sending
the data (in this case, the width of a communication group is also the radius of
your hearing). Even within this radius, audio dropoff is simulated: if the other
user is not within half the width of a communication area, you are only informed
that "UserX is saying something, but is too far away to hear".
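The dropoff rule above amounts to a simple three-band distance test, sketched below. The group width and function names are illustrative assumptions; only the rule itself (full audio within half a group width, a "too far away to hear" notice within one group width, nothing beyond that) comes from the text.

```python
# Sketch of the simulated audio dropoff rule described above (not prototype code).
import math

GROUP_WIDTH = 10.0  # assumed width of one communication group

def audio_message(listener, speaker, text, name="UserX"):
    """Return the text shown to the listener, or None if the audio is culled."""
    d = math.dist(listener, speaker)
    if d <= GROUP_WIDTH / 2:
        return f'{name} says: "{text}"'           # close enough to "hear"
    elif d <= GROUP_WIDTH:
        return f"{name} is saying something, but is too far away to hear"
    return None                                   # outside the hearing radius
```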
Unlike the broadcast model, this model does scale well, both in terms of
the network load and the user's cognitive load.
A.4 User Interface
After a few moments of establishing the proper network connections, a
GreenSpace User window will appear (the world model is a modified version of a
VRML [Pesc94] demo file, from SGI, called the Palladio [Ferr95]).
The user interface is based on the OpenInventor [Wern94] WalkViewer,
which you can get information on by clicking on the [?] icon in the right button
menu.
Before doing anything else!!!
Please note that an unfortunate consequence of using the WalkViewer is
that it chooses our walking speed for us, and it picks a rather poor speed for this
demonstration. Therefore, before proceeding, please do the following:
- hold down the right mouse button in the GreenSpace window to pop up the viewer menu,
- select "Preferences..." to open the preferences sheet,
- click on the "Speed/2" button five (5) times,
- then close the preferences sheet and proceed with the demo.
At the low framerates Inventor yields with a model of this size, you will be glad
you reduced the speed.
Figure A.2: Prototype User Interface
If you type the letter H in the GreenSpace window you will see what
commands are available.
Now feel free to explore the environment and look for other users. Keep an eye
on the terminals where you started gsu, as that is where the simulated audio
messages will be displayed. In order to generate simulated audio for other users
to hear, you can either switch to select mode (by clicking on the arrow in the
upper right hand corner of the window) and click on objects in the scene, or you
can press the T key while over the GreenSpace window to start generating a
stream of simulated audio. In addition:
- C will toggle on and off simulated audio chatter,
- Q should be used when you are ready to quit the demonstration,
- the spacebar can be pressed to toggle on and off the display of the
communication area bounding boxes.
Any users within those bounding boxes are sending you position and audio data,
while all other users are only reporting approximate positions to you.
Figure A.3: Another User in Prototype Environment
In fact, that box is really the bounding box of nine communication groups:
you are listening to all nine of them, but sending your data only to the group you
are in (announcing transitions between groups via group change announcements).
Notice that the other users' faces are grey, so that you can tell when they
are looking in your direction (the model for the users is a modified version of the
triguy_w_shadow.wrl people found at Stonehenge [Ferr95]).
As you are wandering around the space, try to follow what a particular user
is saying or doing. As the number of users increases, you should see this become
increasingly difficult in the broadcast mode.
Note that it is difficult to see the point of all this when the environment is not
well populated. In order to see anything of use, there should be at least one other
user in the environment with you. So it would be good to ask a friend to join, or
send mail to me to let me know you are entering so I can try to join you.
A.5 Feedback Request
After spending some time experimenting, please take some time to answer
a few questions (see Appendix B) about your experience using this prototype!
- Is the interface intuitive?
- Does it feel natural as you move between communication groups?
- Is it confusing to only get approximate positions from far away users?
- If you had real audio, could you communicate effectively in such a system?