Working applications, rather than theoretical models, drive progress for
virtual environment research and development [Brut95]. In this chapter we will
examine several application domains that the GreenSpace system could
support and that will direct the development of the system. This will provide a
context for understanding the usefulness of the multi-user systems surveyed in
the following chapter, and help us understand which features will be important to
support in the communication control prototype.
The GreenSpace infrastructure will be able to support a wide variety of
applications, so this chapter does not intend to exhaustively list them all. Instead,
the focus will be on those application areas which seem likely to benefit the most
from an integrated interpersonal communication system. We cannot expect any
single communication control system to provide useful and appropriate support
for all conceivable applications, since interpersonal communication itself occurs at
the application level. However, we should expect the communication control
system we develop to provide intuitive, efficient, and scalable interpersonal
communication for a wide variety of applications, some of which are described
in the sections that follow.
Nearly all applications of a multi-user virtual environment system could be
categorized as Computer Supported Cooperative Work (CSCW), with the obvious
exception of entertainment applications. CSCW is an extremely broad area of
research, and many CSCW applications are asynchronous and therefore would not be
appropriately served by a multi-user virtual environment system. Traditionally,
CSCW applications are decomposed into four categories by a time-space matrix,
shown in Table 2.1.
Table 2.1: CSCW Systems Matrix
| | Same Time | Different Times |
| --- | --- | --- |
| Same Place | face-to-face (classrooms, meeting rooms) | asynchronous interaction (project scheduling, coordination tools) |
| Different Places | synchronous distributed (shared editors, video windows) | asynchronous distributed (email, bboards) |
This traditional decomposition assumes that the users of a system cannot
be in different physical places while perceiving themselves to be in the same
virtual place. A multi-user virtual environment system, such as GreenSpace,
allows a CSCW application to be both face-to-face and synchronous distributed.
Three of the following sections describe CSCW application domains that
have been or could be shown to be well served by immersive communication
systems. The final section describes the relevance of entertainment applications.
2.2 Collaborative Design
The first computer-aided design (CAD) system was Ivan Sutherland's
Sketchpad [Suth63]. Users were able to manipulate graphical objects using a
hand-held pen, rather than typing in keyboard commands. This was the beginning
of direct manipulation, which is at the very heart of immersive systems. Direct
manipulation interfaces, and therefore immersive interfaces, possess the
following four properties [Shne83]:
- continuous representation of the objects of interest,
- physical actions instead of complex syntax,
- rapid, incremental, reversible operations that are immediately visible, and
- layered approach to learning that permits usage with minimal knowledge.
This approach allows users to apply a priori knowledge about the objects
and interactions in their application domain directly to the objects in the software
application with limited additional training. The benefit of this approach is easy to
see in virtual design applications.
2.2.1 Architectural Design
Matsushita has deployed a virtual reality system that allows shoppers to
immersively design their kitchen before ordering the appliances they need to
complete the design [Auks92]. After downloading a CAD model of the user's
kitchen into the system, the user can immersively walk through the kitchen, then
place, move, or modify the appliances.
At the University of North Carolina (UNC), researchers have been creating
architectural walk-throughs of various buildings on campus for years, including a
model of the new Computer Science building before it was built [Auks92]. Prior to
its construction, the walk-through was used by the future inhabitants and the
building's architects to evaluate the design. This led to changes in the building
design which would have been expensive or impossible to make after the building
was constructed.
UNC has been a pioneer in the area of virtual architectural walk-throughs,
which has led them to the development of many unique interface devices and
techniques. To make walk-throughs more natural, they adapted a bicycle steering
wheel to a treadmill, so that users could walk and steer their way around the
virtual buildings [Auks92]. To make it possible to render highly-complex
architectural models, such as UNC professor Fred Brooks' home, at sufficiently
high frame rates for immersion, they have developed a technique to quickly
determine what parts of the model may be visible from the user's current
perspective [Lueb95]. This technique is based on the assumption that the vast
majority of objects in an architectural model tend to be occluded by opaque
objects such as walls, during a walk-through. They spatially partition the
environment into a set of "cells" (such as rooms) and "portals" (such as doorways)
between those cells. Then they determine which cells might be visible from the
user's current cell, by testing which portals are visible, before deciding which
cells to render. They have shown this to dramatically increase the frame rates for
some architectural walk-throughs.
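The cell-and-portal idea can be sketched in a few lines. This is an illustrative example only: the cell names, the graph layout, and the `portal_visible` predicate (which stands in for the geometric test that clips each portal against the current view frustum) are assumptions, not details of the cited system.

```python
class Cell:
    def __init__(self, name):
        self.name = name
        self.portals = []  # (portal_name, neighbor_cell) pairs

    def connect(self, portal_name, other):
        # Portals are two-way openings between cells (e.g. doorways).
        self.portals.append((portal_name, other))
        other.portals.append((portal_name, self))

def potentially_visible(cell, portal_visible, seen=None):
    """Collect the names of cells reachable from `cell` through visible portals."""
    if seen is None:
        seen = set()
    seen.add(cell.name)
    for portal_name, neighbor in cell.portals:
        if neighbor.name not in seen and portal_visible(portal_name):
            potentially_visible(neighbor, portal_visible, seen)
    return seen

# Example: three rooms in a row; the doorway to the far room is off-screen,
# so only the first two rooms need to be rendered.
hall, office, lab = Cell("hall"), Cell("office"), Cell("lab")
hall.connect("door-A", office)
office.connect("door-B", lab)

visible = potentially_visible(hall, lambda p: p == "door-A")
# → {"hall", "office"}
```

A full implementation would also narrow the view frustum through each traversed portal, which is what prunes most of the model during a walk-through.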
The HITLab began researching architectural applications by studying the
way people perceive real versus virtual spaces. This was done by creating a
virtual walk-through of the Henry Art Gallery on the University of Washington
(UW) campus and comparing the perceptions of architects that had walked
through the real building to those that had walked through the virtual model.
More recently, the HITLab has joined together with the UW College of
Architecture and Urban Planning to create the Community and Environmental
Design and Simulation Laboratory (CEDeS Lab). Together they have developed a
simulation of the Seattle Commons, a proposed urban design for downtown
Seattle, which has been used by various civic leaders to view some of the impact
the project will have on the city [Jone95]. The CEDeS Lab then produced a
walk-through of an addition to the Henry Art Gallery before construction began,
beginning the compilation of a 3D database for the entire UW campus and
surrounding community. Architects and urban planners will then be able to get an
early glimpse of the effects of new construction on the UW campus. The CEDeS
Lab has also begun working with the GreenSpace project to provide a
collaborative immersive system for architectural design reviews.
2.2.2 Industrial Design
Boeing was perhaps the first company to have developed an industrial
application for virtual reality, when they created a system to immersively view and
interact with new aircraft designs [Auks92]. This allowed designers to explore the
design of new aircraft before building expensive physical prototypes, gaining
many of the advantages found with architectural walk-throughs.
The Lockheed Simulation Based Design (SBD) project combines
collaborative CAD tools with virtual environment simulations to provide engineers
at remote sites a way to collaboratively design wire harnesses for routing of
electrical cables in aircraft and submarines [Palm95].
When the CSCW design environment takes on the appearance of
designers collaborating around a conference table, these applications more
closely resemble teleconferencing, which can have unique characteristics, and
are therefore discussed in the following section.
2.3 Virtual Space Teleconferencing
Virtual Space Teleconferencing (VISTEL) [Ohya93] applications allow
geographically separated people to meet without having to transport their bodies.
Rather than just facilitate collaborative work, VISTEL systems aim to provide a
rich interpersonal communication environment by "reproducing various aspects of
face-to-face conferencing" [Take93]. VISTEL applications tend to place more
emphasis on the quality of the representation of other users than is typical in the
collaborative design applications previously described. This provides a more
personal and comfortable environment for people to collaborate in.
The Advanced Telecommunications Research Institute (ATR)
Communication Systems Research Laboratories, in Japan, have done an
impressive array of research into the real-time detection and synthesis of human
figure and facial motion. For a VISTEL application they
developed, the facial features of users were detected in real-time by visually
tracking tape marks attached to facial muscles, then sent to the remote site where
the tape mark positions were used to morph a wire-frame model of the user's face
in real-time [Ohya93]. They have applied a layered approach to human figure
synthesis whereby the skeletal model of the user is controlled by position sensing
on the user's physical body, then the muscle and skin layers are deformed and
animated in real-time [SinK95].
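The marker-driven morphing can be illustrated with a toy sketch: each tracked tape mark displaces nearby mesh vertices by a weighted amount. The vertex layout, marker positions, and weights below are invented for the example and are not taken from the ATR system.

```python
def morph(vertices, rest_marks, live_marks, weights):
    """Offset each mesh vertex by a weighted sum of marker displacements.

    weights[v][m] says how strongly marker m influences vertex v; each
    frame, the remote site only needs the live marker positions.
    """
    morphed = []
    for v, (x, y) in enumerate(vertices):
        dx = dy = 0.0
        for m, (rx, ry) in enumerate(rest_marks):
            lx, ly = live_marks[m]
            dx += weights[v][m] * (lx - rx)
            dy += weights[v][m] * (ly - ry)
        morphed.append((x + dx, y + dy))
    return morphed

# One mouth-corner marker moves up by 2 units (a smile); the lip vertex
# follows it fully, the cheek vertex only a quarter as much.
vertices   = [(0.0, 0.0), (1.0, 1.0)]   # lip corner, cheek
rest_marks = [(0.0, 0.0)]
live_marks = [(0.0, 2.0)]
weights    = [[1.0], [0.25]]

result = morph(vertices, rest_marks, live_marks, weights)
# → [(0.0, 2.0), (1.0, 1.5)]
```

The appeal of the approach for teleconferencing is bandwidth: only a handful of marker positions cross the network, while the full mesh deformation is recomputed locally.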
Furthermore, ATR researchers have applied the combination of gesture
recognition and natural language interaction techniques to VISTEL environments
[Yosh95]. This allows the system to take action on collaboratively controlled
objects while the user is speaking about them and gesturing towards them. For
example, when a user points at the roof of a house model and says "paint that
blue," the application could change the color of the roof to blue for all to see.
Combining gesture recognition and speech interaction with graphical displays
helps to overcome the limitations of each of the individual interface technologies.
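The "paint that blue" example can be sketched as a simple fusion step: the speech channel supplies the verb and attribute, while the pointing gesture resolves the deictic "that". The object names, command grammar, and scene representation below are illustrative assumptions, not the ATR interface.

```python
def resolve_command(utterance, pointed_object, scene):
    """Fuse a deictic utterance ("paint that blue") with the object
    currently selected by the user's pointing gesture."""
    words = utterance.lower().split()
    if len(words) == 3 and words[0] == "paint" and words[1] == "that":
        color = words[2]
        scene[pointed_object] = color  # change is visible to all participants
        return pointed_object, color
    return None  # utterance not understood; gesture alone is ambiguous

# A house model with a roof and a door; the user is pointing at the roof.
scene = {"roof": "gray", "door": "brown"}
result = resolve_command("paint that blue", "roof", scene)
# → ("roof", "blue"); scene["roof"] is now "blue"
```

Neither channel suffices alone: speech cannot disambiguate "that", and pointing cannot express "paint" or "blue", which is exactly the complementarity the paragraph above describes.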
Fujitsu has been involved in teleconferencing research for years, including
the public deployment (in Japan) of a graphical and text-based telecommunication
system called Fujitsu Habitat [Yosh94]. The GreenSpace collaboration between
Fujitsu and the HITLab is developing a VISTEL system. However, the underlying
software infrastructure will be general enough to support a wider range of
applications than typical VISTEL systems do.
2.4 Situational Training
Immersive situational training applications are well suited for training
tasks that occur in dangerous or inaccessible physical environments. For example, pilots
are trained to handle dangerous flying conditions using immersive flight
simulators [Auks92]. Obviously, this is much easier and safer than attempting to
reproduce these conditions in actual aircraft.
SIMNET (Simulator Networking) is a system developed by the Defense
Advanced Research Projects Agency (DARPA) to create interactive multi-user
simulations of military battles [Allu91]. This application was designed to be
suitable for training tank commanders, helicopter pilots, fighter pilots, artillery
commanders, and higher echelon tactical commanders all in a concurrent real-
time battle simulation. The need for physical accuracy of terrain models and
equipment characteristics is obvious and important. The SIMNET system, and its
descendant IEEE 1278 Distributed Interactive Simulation (DIS) application
protocol, will be described in more detail in the following chapter.
The Institute for Simulation and Training at the University of Central Florida
and the US Army Research Institute teamed together to build a research test-bed
for training applications of virtual environments [Mosh93]. Their research
focused on adding dismounted infantry, which were not included in SIMNET, to
battlefield simulations. Other researchers, in the US Army and elsewhere, have
worked on adding dismounted infantry to military simulations as well.
The Naval Postgraduate School Networked Vehicle Simulator IV
(NPSNET) is a virtual environment training system designed for multi-user military
simulations over the Internet [Mace95a]. The NPSNET system is capable of
simulating a variety of interactive military vehicles as well as dismounted infantry
within large terrain databases. NPSNET is currently capable of supporting over
250 human and simulated players in a single battlefield simulation [Zyda95]. As in
VISTEL applications, it is important for the models of dismounted infantry to be
complex and expressive, so that other soldiers can pick up visual cues of general
body language or specific hand signals. To facilitate this, NPS uses a system for
human figure simulation called Jack [Badl93]. Jack provides a system for
animating fully articulated human models through a constraint based system.
Extensions to this system provide off-line production and real-time playback of
human motion, so that the current state of a soldier (such as "kneeling") can be
coupled with a desired next state (such as "standing") to produce a realistic motion
between the two states (such as the complex motion of standing from a kneeling
position), without having to specify the motion of each individual joint [Gran95].
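The idea of moving between named whole-body states without per-joint scripting can be sketched as follows. The posture table and joint names are invented for the example, and linear interpolation stands in for the pre-produced motion clips a system like Jack would play back between states.

```python
POSTURES = {
    # joint angles in degrees: a deliberately tiny two-joint "body"
    "kneeling": {"hip": 90.0, "knee": 120.0},
    "standing": {"hip": 0.0, "knee": 0.0},
}

def transition(start, goal, steps):
    """Yield intermediate joint configurations from `start` to `goal`.

    The caller names only the two states; the per-joint motion in
    between is generated automatically.
    """
    a, b = POSTURES[start], POSTURES[goal]
    for i in range(steps + 1):
        t = i / steps
        yield {joint: a[joint] + t * (b[joint] - a[joint]) for joint in a}

frames = list(transition("kneeling", "standing", 4))
# frames[0] is the kneeling posture, frames[-1] the standing posture,
# with three intermediate configurations between them.
```

A real system replaces the interpolation with captured or hand-animated clips so the in-between motion looks biomechanically plausible, but the interface is the same: current state plus desired next state.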
Researchers at the Sandia National Laboratories have developed an
immersive system for the situational training of nuclear facility inspection escorts
under non-proliferation treaties [Stan95]. These facilities are geographically
diverse and hazardous environments with limited access, even for future escorts.
Therefore, a model of the facility can be used to train escorts on how to give
proper tours to authorized inspectors, without having to gain access to the facility
prior to the tour. More than just an architectural walk-through, the system lets
escort trainees be evaluated during virtual mock inspections, with the application
signaling if simulated violations of the treaties are allowed to occur. This is another
application which makes use of the Jack system for creating realistic fully
articulated human model animation.
The Lockheed AI Center, in collaboration with the University of Southern
California Behavioral Technologies Lab, is also pursuing research on virtual
environments for training applications [John95]. They are combining the
technologies of virtual environments and intelligent tutoring systems to create
training environments to immersively guide users through dangerous tasks such
as bomb disposal or fire suppression.
2.5 Entertainment
The entertaining applications of virtual environments are difficult to ignore.
The engaging effect that direct manipulation interfaces have in general is
amplified by the immersive nature of virtual realities, making almost any virtual
environment entertaining to some degree.
Autodesk created some of the first entertainment applications of virtual
reality with Virtual Racquetball and the High Cycle [Auks92]. In Virtual Racquetball
users played a solo version of racquetball wearing an HMD and wielding a
position-tracked racket. The High Cycle was a stationary bicycle which moved users
through a virtual environment, seen through an HMD, as they pedalled.
W Industries created the first mass produced virtual environment system
for entertainment applications, called Virtuality [Wald93]. This system, designed
for arcades, had both stand-up and sit-down versions that included position
tracked HMDs and hand controlled input devices. Various multi-user games were
available for either system, which aided in their popularity.
After the success of W Industries, many other companies entered the VR
entertainment market. There are now Battletech simulators and so-called virtual
reality theme parks scattered across the country. Sega, a Japanese video game
company, has developed a series of "Virtua" arcade games (Virtua Racing, Virtua
Fighter, Virtua Cop, etc.), which, although not immersive, do employ the 3D
interactive graphics techniques used by virtual environment systems.
Walt Disney Imagineering (WDI) remains the leader of location based
entertainment with the unsurpassed quality of their theme park attractions.
Recently, the WDI Virtual Reality Studio unveiled their first immersive "attraction-
in-development" at the Walt Disney World's EPCOT Center [Crui94]. Guests can
ride a magic carpet through a city from the animated motion picture, Aladdin,
exploring the city streets and interacting with computer generated characters.
Rather than attempting to simulate reality, as in VISTEL or Situational Training
applications, the Imagineers simulated being inside of an animated motion
picture. WDI will continue to strive to bring virtual reality entertainment, including
multi-user attractions, to the highest level of quality attainable.