Conclusions and Future Work

7.1 Conclusions
A client/server architecture was developed as a flexible means of incorporating alternative input devices to control 3D user interaction within VRML applications. The architecture separates the application from the input device, so that the interface to the application operates at the task level; this allows new input devices to be integrated easily, as the sketch below illustrates.
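A task-level interface reduces every device to a small set of device-independent commands. The following is a minimal sketch in Java; TaskCommandListener and its methods are invented for illustration and may differ from the actual toolkit names.

    // A task-level command interface: device drivers translate raw input into
    // these calls, so the application never sees device-specific data.
    public interface TaskCommandListener {
        void onTranslate(double dx, double dy, double dz);            // move selection
        void onRotate(double ax, double ay, double az, double angle); // axis/angle
        void onScale(double factor);                                  // uniform scale
    }

A new device is integrated by writing a driver that emits these commands; nothing in the application itself changes.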
Applications using this architecture were implemented with a toolkit. The toolkit served as a platform for mapping specific devices to the control commands used to manipulate and navigate within VRML worlds, and it made it possible to overcome the current problems with adding alternative input devices to VRML applications.
An application was developed using the toolkit. It allowed the user to manipulate objects within a VRML world using combinations of inputs from a keyboard, simulated voice commands, a 6 DOF magnetic tracker, and an electronic glove. The application shows that the toolkit works as designed and that the overall architecture is functional. Benchmark tests show that the architecture places an additional strain on the CPU and increases the overall lag time of the system, as a function of the hardware configuration (e.g. CPU clock speed, bus cycle speed, etc.).

7.2 Future Work
There are a number of areas in which the system could be improved. Below I detail some of them; hopefully, this discussion will serve as a basis for future work on the interface.
Device Mapper GUI - Navigation
The "Navigation" list only allows for the selection of one device to control navigation. This was done to simplify the implementation. One could change the Device Mapper window to support mapping multiple devices to navigation. For example, in Virtual Reality, a magnetic tracker (e.g. Polhemus) is often used to control the orientation of the camera (or viewpoint), and another device (such as voice input or a button interface) is used to control the speed. This could be easily added. It may be useful to separate the navigation function into speed and orientation and allow for the mapping of different devices to each.
At the moment the only way to select an object for manipulation is via the GUI. It would be interesting to explore other selection methods. One obvious method is to click on the object with the mouse; the client could then send a message back to the IDS to indicate the event, and the Device Mapper dialog box could be updated to reflect the change (a sketch of such a notification follows). Beyond mouse clicks, techniques such as ray casting could also be used to select objects.
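As an illustration, the notification could be a single line sent to the IDS over a socket. Both the "SELECT <node>" wire format and the assumption that the IDS listens on TCP are invented here:

    import java.io.PrintWriter;
    import java.net.Socket;

    // Hypothetical sketch: notify the IDS that the user clicked an object.
    public final class SelectionNotifier {
        public static void notifySelected(String idsHost, int idsPort, String nodeName)
                throws Exception {
            try (Socket socket = new Socket(idsHost, idsPort);
                 PrintWriter out = new PrintWriter(socket.getOutputStream(), true)) {
                out.println("SELECT " + nodeName); // IDS updates the Device Mapper list
            }
        }
    }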
At the moment, all mapping of device input to manipulation properties is done via the Device Mapper dialog box. Work should be done to identify alternative ways of specifying these mappings. Some ideas are discussed below.
Flexibility of Mapping
The different degrees of freedom of the input devices cannot be independently mapped to arbitrary manipulations. At the moment, the way in which an input device maps to a manipulation is predetermined and hard-coded. In his work on NeatTools, Warner (1996) created an interface in which any degree of freedom of any device could be mapped to any output; in fact, any input actuator of any kind could be used to control an output. The same approach would be ideal for my interface: to make it easy to test different input-to-output mappings, all types of mapping should be supported, as the sketch below illustrates.
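A minimal sketch of such a fully general routing table follows. Channel names of the form "device:degree" (e.g. "tracker:2" for the tracker's z axis) are invented for illustration:

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical sketch: a general routing table in the spirit of NeatTools.
    public final class MappingTable {
        private final Map<String, String> routes = new HashMap<>();

        public void route(String inputChannel, String outputChannel) {
            routes.put(inputChannel, outputChannel);
        }

        // Forward one sample from an input channel to its mapped output, if any.
        public void onInput(String inputChannel, double value, OutputSink sink) {
            String out = routes.get(inputChannel);
            if (out != null) {
                sink.write(out, value);
            }
        }

        public interface OutputSink {
            void write(String outputChannel, double value);
        }
    }

With such a table, a single call like table.route("tracker:2", "object:scaleY") would bind the tracker's z axis to an object's vertical scale, with no hard-coded pairing.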
The glove interface provides a means for mapping gestures to commands. An extension of this would be multi-modal gestures, which result from a combination of different inputs. For example, moving the magnetic tracker in a circular motion while pointing with the glove would translate to the gesture rotate, whereas moving the tracker in a linear motion while pointing with the glove would translate to the gesture move. A sketch of such a fusion step follows.
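The enumerations and the fusion rule below are invented to match the example above; a real recognizer would classify the tracker motion and glove posture first:

    // Hypothetical sketch: fuse a tracker motion class with a glove posture.
    public final class MultiModalGestures {
        enum Motion { CIRCULAR, LINEAR, NONE }
        enum Posture { POINTING, FIST, OPEN }
        enum Gesture { ROTATE, MOVE, NONE }

        static Gesture fuse(Motion motion, Posture posture) {
            if (posture != Posture.POINTING) {
                return Gesture.NONE;                  // only pointing triggers a gesture here
            }
            switch (motion) {
                case CIRCULAR: return Gesture.ROTATE; // circle + point = rotate
                case LINEAR:   return Gesture.MOVE;   // line + point = move
                default:       return Gesture.NONE;
            }
        }
    }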
It may be useful to include a scripting capability that would allow complex input device mappings to be programmed.
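As a minimal illustration, a script could be as simple as a text file of routing rules loaded at start-up. The "input -> output" syntax below is invented, and the loader builds on the MappingTable sketched above:

    import java.io.BufferedReader;
    import java.io.FileReader;
    import java.io.IOException;

    // Hypothetical sketch: load "input -> output" rules from a script file.
    public final class MappingScriptLoader {
        public static void load(String path, MappingTable table) throws IOException {
            try (BufferedReader in = new BufferedReader(new FileReader(path))) {
                String line;
                while ((line = in.readLine()) != null) {
                    line = line.trim();
                    if (line.isEmpty() || line.startsWith("#")) {
                        continue;                     // skip blanks and comments
                    }
                    String[] parts = line.split("->");
                    if (parts.length == 2) {
                        table.route(parts[0].trim(), parts[1].trim());
                    }
                }
            }
        }
    }

A full scripting language could go further, expressing scaling, thresholds, or conditional mappings rather than simple routes.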
Neural networks could be used to implement gesture recognition for various devices. The glove currently uses an algorithm that does not provide a very good gesture recognition success rate; a neural network could improve the recognition.
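Even a single-layer network can replace a hand-written matching algorithm. The sketch below scores one frame of glove flex-sensor readings against per-gesture weight vectors; the weights would come from offline training, and all names are invented:

    // Hypothetical sketch: a single-layer network classifying glove readings.
    public final class GestureNet {
        private final double[][] weights; // weights[g][s]: gesture g, sensor s
        private final double[] bias;      // one bias per gesture

        public GestureNet(double[][] weights, double[] bias) {
            this.weights = weights;
            this.bias = bias;
        }

        // Returns the index of the best-scoring gesture for one sensor frame.
        public int classify(double[] sensors) {
            int best = 0;
            double bestScore = Double.NEGATIVE_INFINITY;
            for (int g = 0; g < weights.length; g++) {
                double score = bias[g];
                for (int s = 0; s < sensors.length; s++) {
                    score += weights[g][s] * sensors[s];
                }
                if (score > bestScore) {
                    bestScore = score;
                    best = g;
                }
            }
            return best;
        }
    }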
At the moment, position, scale, and color are the attributes of a world object that can be modified. In the future, more tasks should be added to enhance the flexibility of the system.
A very simple way to isolate the developer from the specifics of the server would be to provide a plug-in architecture for the server. New devices could then be added without having to "muck around" in the server code. This would be easy to implement, as Java provides dynamic run-time linking; a sketch follows.
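A minimal sketch of such run-time loading follows. The DeviceDriver interface and the idea of reading class names from a configuration file are assumptions; Class.forName() performs the actual dynamic linking:

    // Hypothetical sketch: load a device driver by class name at run time.
    public final class PluginLoader {
        public interface DeviceDriver {
            String name();  // human-readable device name for the Device Mapper
            void start();   // open the device and begin sending events
        }

        public static DeviceDriver load(String className) throws Exception {
            Class<?> cls = Class.forName(className);
            return (DeviceDriver) cls.getDeclaredConstructor().newInstance();
        }
    }

The server could then read a list of driver class names from a configuration file and load each one at start-up, with no change to the server code itself.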
The same plug-in architecture could be applied to device mapping: plug-in "maps" could be created that change the way input device data is mapped to the required commands.
The architecture/toolkit configurations described in this thesis provide a clean implementation that could have a number of uses beyond the VRML application domain. A generic device server could be created to work with any application that needs alternative input devices.