Virtual Anatomy Lab | |
1. Selecting and importing spatial data into the Virtual Anatomy Lab. |
Selection techniques are well on their way. The next step is to import the spatial data into the VAL in
a more dynamic and performance-aware manner. Level of Detail and texturing techniques should improve
frame rate performance by at least 200%. The user should be able to state a minimally acceptable frame rate,
and the VAL should respond appropriately to meet that demand (given some minimal system requirements).
Another idea worth further research is intelligent collision detection, which, using the Java 3D collision detection classes, is currently extremely slow when calculated on each polygon in the mesh and too imprecise when calculated on the bounding boxes. |
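As a reference point, here is a minimal sketch of how the two collision modes differ when using the standard Java 3D wakeup criteria. The class names and speed-hint constants are the real Java 3D API; the surrounding behavior class is an illustrative assumption, not the VAL's actual collision code.

    import javax.media.j3d.*;

    // Illustrative behavior: wake up when a given body part collides with another node.
    // USE_GEOMETRY tests the actual polygons (accurate but slow on dense meshes);
    // USE_BOUNDS tests only the bounding volume (fast but imprecise).
    public class PartCollisionBehavior extends Behavior {
        private final WakeupOnCollisionEntry entry;

        public PartCollisionBehavior(Shape3D part, boolean precise) {
            int hint = precise ? WakeupOnCollisionEntry.USE_GEOMETRY
                               : WakeupOnCollisionEntry.USE_BOUNDS;
            entry = new WakeupOnCollisionEntry(part, hint);
            // Remember to set scheduling bounds before adding this behavior to the scene graph.
        }

        public void initialize() {
            wakeupOn(entry);
        }

        public void processStimulus(java.util.Enumeration criteria) {
            // React to the collision here (e.g., highlight the part), then re-arm.
            wakeupOn(entry);
        }
    }

An intelligent scheme might switch between the two hints per part, or precompute tighter bounding volumes, rather than committing to one mode globally.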
2. Developing an array of customizable interaction techniques for transforming body parts. |
Interactions need to be more explicit in terms of which person is doing what at any point in time.
We can use color or the avatar itself to provide such feedback.
Brainstorming and a review of current VR interaction techniques provide the impetus for developing a wide range of possible interactions with the VAL using available human interface devices such as the mouse and keyboard. The next step will be to focus on two-person techniques that emulate typical cadaver team behavior seen in a real cadaver lab. Additional software could allow a user to map his or her own devices to functions in the interaction classes (a long-term goal). |
3. Developing a realistic color and texturing process | As a result of the demo, texturing is a high priority item. All 296 thorax cadaver parts need better coloring and texturing in order to be considered usable by the anatomy student community. In addition, the current lighting model in the VAL uses only two simple directional lights (fast, yet suboptimal for realism). |
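For reference, a two-directional-light setup in Java 3D looks roughly like the sketch below. The colors, directions, and bounds are illustrative placeholders, not the VAL's actual values.

    import javax.media.j3d.*;
    import javax.vecmath.*;

    // Adds two simple directional lights to an existing scene graph branch.
    public class SimpleLighting {
        public static void addLights(BranchGroup sceneRoot) {
            BoundingSphere bounds = new BoundingSphere(new Point3d(0.0, 0.0, 0.0), 100.0);

            DirectionalLight keyLight = new DirectionalLight(
                    new Color3f(1.0f, 1.0f, 1.0f), new Vector3f(-1.0f, -1.0f, -1.0f));
            keyLight.setInfluencingBounds(bounds);

            DirectionalLight fillLight = new DirectionalLight(
                    new Color3f(0.4f, 0.4f, 0.4f), new Vector3f(1.0f, -1.0f, 1.0f));
            fillLight.setInfluencingBounds(bounds);

            sceneRoot.addChild(keyLight);
            sceneRoot.addChild(fillLight);
        }
    }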
4. Creating constraint classes for assembling the cadaver from parts. | Each body part has an appropriate location and orientation (and scale) within the cadaver. Constraint classes will snap a body part in place once a threshold has been reached: for example, once the part is within n millimeters of its correct location and within x radians of its proper orientation. The constraint thresholds should be modifiable by the user, so the thresholds can be lowered over time to provide a greater challenge. |
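A minimal sketch of the threshold test is below, assuming the part's current and target positions and orientations are available as vecmath types. The class and method names are hypothetical.

    import javax.vecmath.*;

    // Hypothetical constraint check: snap a body part into place once it is within
    // both a distance threshold and an angular threshold of its target pose.
    public class SnapConstraint {
        private double maxDistance;  // same units as the model (e.g., millimeters)
        private double maxAngle;     // radians

        public SnapConstraint(double maxDistance, double maxAngle) {
            this.maxDistance = maxDistance;
            this.maxAngle = maxAngle;
        }

        // Users could lower these thresholds over time for a greater challenge.
        public void setThresholds(double maxDistance, double maxAngle) {
            this.maxDistance = maxDistance;
            this.maxAngle = maxAngle;
        }

        public boolean shouldSnap(Point3d current, Point3d target,
                                  Quat4d currentRot, Quat4d targetRot) {
            double distance = current.distance(target);
            // Angle between two unit quaternions: theta = 2 * acos(|q1 . q2|)
            double dot = Math.abs(currentRot.x * targetRot.x + currentRot.y * targetRot.y
                                + currentRot.z * targetRot.z + currentRot.w * targetRot.w);
            double angle = 2.0 * Math.acos(Math.min(1.0, dot));
            return distance <= maxDistance && angle <= maxAngle;
        }
    }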
5. Immersive - Desktop Toggles | The current FMS blackboard uses a lot of the 3D canvas' resources. To improve performance, the FMS output could be presented as a separate 2D frame. A user could then toggle between the 2D representation and the 3D representation (required for full immersion) depending on his or her 3D model fidelity requirements. In addition, the FMS blackboard output needs to be more consistent with anatomy output norms (using an indentation approach to hierarchy). |
6. Subsection Selection | Transparency and emissive color intensity techniques can be applied to existing anatomy part meshes to allow interaction with model subsections (parts of a whole mesh). |
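A small sketch of how that might look with the standard Java 3D appearance classes; the specific transparency and emissive values are placeholders. (In a live scene graph the relevant capability bits must be set before these attributes can be changed.)

    import javax.media.j3d.*;
    import javax.vecmath.*;

    // Dims a part to a translucent "context" state, or makes it glow by raising
    // its emissive color, so a subsection stands out for interaction.
    public class SubsectionHighlighter {
        public static void dim(Shape3D part, float transparency) {
            Appearance app = part.getAppearance();
            app.setTransparencyAttributes(
                    new TransparencyAttributes(TransparencyAttributes.BLENDED, transparency));
        }

        public static void highlight(Shape3D part, Color3f glow) {
            Material material = part.getAppearance().getMaterial();
            material.setEmissiveColor(glow);  // e.g., new Color3f(0.3f, 0.3f, 0.0f)
        }
    }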
7. Closer integration with the Graphics Server. | Many opportunities exist here, but an initial step is to build a process that requests anatomy part meshes directly via LISP function calls, bypassing the VRML file creation process or loading anatomy parts where a VRML file does not yet exist. This approach will treat the Graphics Server more like the FM Server -- making queries and getting back results for output in the VAL. Once a mesh is loaded on the cadaver table, it should be exportable to VRML. That opens the door to considering the VAL as an authoring environment, something which should not be pursued until all other needed features are in place. |
8. Adding asynchronous features for persistent learning. | The VAL will provide tools for partners to share in learning even when not co-present in the tool at the same time. Messages can be left in the world and interactive dialog threads can be made available for people sharing discussions at different times. |
Virtual Research Cruise | |
1. Creating a Virtual Thompson | The virtual Thompson will be built in 3D from the architectural plans for each deck. Textures will be added from actual pictures taken on a recent cruise. |
2. Creating a Virtual Puget Sound | The virtual Puget Sound will initially be a simple top view texture in which the virtual Thompson will be able to travel. A 3-D topography will be added to define the shoreline. Operational canal features can be added including bridges and locks. |
3. Creating on-board controls | The virtual Thompson will include controls for manipulating the ship similar to the actual controls the captain uses. Readouts for depth, water surface temperature, and surface salinity based on day of the year will be added. GPS readouts will show the ship's world location at all times. Other data such as tides could be made available as well. |
4. Adding a virtual CTD | A virtual CTD will be able to be put into the water and submerged to depth. Readouts will simulate the actual readouts on board the Thompson. Virtual water samples will be analyzed for temperature and salinity at depth based on day of the year. |
5. Adding a virtual Core Sampler | A virtual Core Sampler will be able to be dropped from the stern of the boat and raised again using a pulley system. Core samples can then be analyzed based on theoretical data interpolated from actual cruise data (a sketch of the interpolation follows below). |
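A minimal sketch of the interpolation involved, assuming the cruise data for a given day of the year is available as sorted (depth, value) samples; the class name and data layout are assumptions for illustration only.

    // Linear interpolation of a measured profile (e.g., temperature or salinity
    // versus depth) derived from actual cruise data for a given day of the year.
    public class ProfileInterpolator {
        private final double[] depths;  // sorted ascending, e.g., in meters
        private final double[] values;  // measurement at each depth

        public ProfileInterpolator(double[] depths, double[] values) {
            this.depths = depths;
            this.values = values;
        }

        public double valueAt(double depth) {
            if (depth <= depths[0]) return values[0];
            int last = depths.length - 1;
            if (depth >= depths[last]) return values[last];
            for (int i = 1; i <= last; i++) {
                if (depth <= depths[i]) {
                    double t = (depth - depths[i - 1]) / (depths[i] - depths[i - 1]);
                    return values[i - 1] + t * (values[i] - values[i - 1]);
                }
            }
            return values[last];  // unreachable for sorted input
        }
    }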
6. Heads-up display | A heads-up display will show a god's-eye view of the ship from the bridge, or a bridge view when the world camera is in god's-eye mode. |
Progress Journal | |
Jan 12, 2000 | A new version of Java is in beta (Java 1.3) and available from Sun at http://java.sun.com/products/jdk/1.3/. I tried compiling the existing Virtual Playground code on the Java 1.3 release. Errors were significant. I will wait until the next version of Java 3D is released; I suspect it will be fully integrated and tested with the Java 1.3 packages. For now, I continue with Java 1.2 as the Java platform for the VP. No word on when the next release is due, but the beta API specification is on-line at: http://java.sun.com/products/java-media/3D/1_2_api_beta/. |
Thursday, Jan 13 |
Spent the day integrating the April 1999 demonstration for PRISM into the latest VP base. OpenGL is not working properly on the PRISM2 machine in the Ocean Science Building (probably the NT drivers). The texture map does not map properly to the hi-res tile on that machine, and the machine crashes on occasion when running the VP. Connected the Anatomy Lab blackboard to the Cadaver table such that clicking on a term on the blackboard adds a low Level of Detail (cubes) representation of component body parts on the table. Added code to do the reverse (click on a cube to show the component parts on the blackboard), but the Foundational Model Server went down and I was unable to test. Next step is to use the few actual parts we have (Lung, Heart, Esophagus, etc.) such that selecting the term on the blackboard loads the part on the table. |
Tuesday, Jan 18 |
Connected the Foundational Model blackboard to the cadaver table. Now, I can click on or type in a term and see its spatial models appear on the cadaver table (or a simple pyramid for those terms where we have no model). I can also click on the cadaver part and see it appear as the central term in the node-centric Foundational Model blackboard. Next step is to add more structure to the blackboard and add more GUI features for interacting with the cadaver. I heard there are new real-time polygon reduction algorithms at Intel. I will try to find them. Also waiting to hear back from SGI whether they have algorithms they can share with me. |
Tuesday, Jan 25 |
Connected the avatars to the VPServer. I can see another person's representation in the Anatomy Lab. The next step is to network the addition of anatomy body parts, and then to network the movement of the loaded parts by any of the participants. Since we had similar functionality working in the VRST '98 demonstration, we should be able to network the anatomy parts quickly. An issue is how often during a drag move we should send body part transform updates. Perhaps we can just send the new transformation and write a routine that moves the part to its new transformation over a fixed amount of time. |
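A sketch of that idea follows: send only the final transform over the network, then ease the local copy toward it over a fixed duration. The class name and the frame-driven update are assumptions, not the actual VP networking code.

    import javax.media.j3d.*;
    import javax.vecmath.*;

    // Eases a TransformGroup from its current transform toward a newly received
    // network transform over a fixed time, instead of jumping instantly.
    public class TransformEaser {
        private final TransformGroup target;
        private final long durationMs;
        private final Vector3f startPos = new Vector3f(), endPos = new Vector3f();
        private final Quat4f startRot = new Quat4f(), endRot = new Quat4f();
        private long startTime;

        public TransformEaser(TransformGroup target, long durationMs) {
            this.target = target;
            this.durationMs = durationMs;
        }

        // Call when a new transform arrives over the network.
        public void setGoal(Transform3D received) {
            Transform3D current = new Transform3D();
            target.getTransform(current);
            current.get(startRot, startPos);
            received.get(endRot, endPos);
            startTime = System.currentTimeMillis();
        }

        // Call once per frame (e.g., from a Behavior woken on elapsed frames) after setGoal().
        public void update() {
            float alpha = Math.min(1.0f,
                    (System.currentTimeMillis() - startTime) / (float) durationMs);
            Vector3f pos = new Vector3f();
            pos.interpolate(startPos, endPos, alpha);
            Quat4f rot = new Quat4f();
            rot.interpolate(startRot, endRot, alpha);
            target.setTransform(new Transform3D(rot, pos, 1.0f));
        }
    }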
Wednesday, Jan 26 |
Networked the addition of anatomy body parts. Next is to network the movement of the loaded parts by any of the participants. The blackboard needed to be set back along the Z axis for the new Dual 700's graphics system. Still need to network the FM Blackboard so both participants focus on the same central node at a time. |
Tuesday, Feb 1 |
Networked transformation of body parts is working. The update is sent just once per drag event as recognized by the java.awt.event package. Should consider doing some dead reckoning so the actual move is visible. Should also attach the other participant's hand to the part when it is being transformed. I added a chat frame for networked chat while using the VAL. After I get the transformations done, I will revisit the FM blackboard. Also, tomorrow, I will add an outline of the body for better recognition of parts. |
Wednesday, Feb 2 |
I added the outline of the body for better recognition of parts. The problem is that you can't pick an object when it is inside another object (such as the transparent body I added). So, I made the body outline toggleable from the Application menu bar (under the File menu). |
Tuesday, Feb 8 |
I added the skull and the femur as available 3D body parts. Continued work on networking for multiple participants. Changed the mouse button assignment such that all three-axis translation is done with the right mouse button. X and Y axis translation is done without any key assistance. Z translation requires that the SHIFT key be held down simultaneously. Rotation is done via the middle mouse button with the SHIFT key held down. |
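A hedged sketch of that assignment using the standard AWT and Swing event utilities; the dispatcher class and the translate/rotate stubs are placeholders for the real VAL interaction code.

    import java.awt.event.MouseEvent;
    import javax.swing.SwingUtilities;

    // Illustrative dispatch of the button/modifier scheme described above:
    // right-button drags translate (X/Y, or Z with SHIFT held); middle-button
    // drags with SHIFT held rotate the selected part.
    public class DragDispatcher {
        public void mouseDragged(MouseEvent e, double dx, double dy) {
            if (SwingUtilities.isRightMouseButton(e)) {
                if (e.isShiftDown()) {
                    translateZ(dy);       // SHIFT + right drag moves along Z
                } else {
                    translateXY(dx, dy);  // plain right drag moves in X and Y
                }
            } else if (SwingUtilities.isMiddleMouseButton(e) && e.isShiftDown()) {
                rotate(dx, dy);           // SHIFT + middle drag rotates
            }
        }

        private void translateXY(double dx, double dy) { /* apply to the part's transform */ }
        private void translateZ(double dz)             { /* apply to the part's transform */ }
        private void rotate(double dx, double dy)      { /* apply to the part's transform */ }
    }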
T-W, Feb 15-16 |
I worked on the demonstration version of the Virtual Anatomy Lab. Added more textures. Hooked up the Examine GUI for moving body part objects in 3D (to accommodate users without a three-button mouse). Eliminated test code to avoid confusion. Reset the Collision Manager to account for the smaller room we now have (per CR's request). |
Tuesday, Feb 22 |
I continued working on the demonstration version of the Virtual Anatomy Lab. Sent the executables to Natasha. Wrote a script available on-line at: http://www.hitl.washington.edu/people/bdc/forNat/demo.html. Will add more labels and more interesting FMS navigation on my end. I may even show how to subdivide something if I have time. |
Wednesday, Feb 23 |
I ran the demo in the weekly ASA meeting. Natasha could not participate due to a corrupt vrml97.jar file (which we didn't diagnose until after the demo). Jakob filled in brilliantly. I updated the VAL to-do list above as a result of the feedback. |
T-W, Feb 29 - Mar 1 |
Worked on a plan to get a Linux box installed in H227. Reviewed the latest Java 3D port from blackdown.org. Tested the Java 1.3 and Java 3D 1.1.3 releases further. Both seem to work fine with the production VAL code. Ran a tutorial session with Natasha to review the VAL. |
Tuesday, Mar 7 |
Downloaded a C++ polygon reduction program from Stan Melax at the University of Alberta Computer Sciences Department. Converted the program to Java and debugged it. It works. I added the ability to weight the reduction based on relative edge length and curvature cost estimates (the program was just a 50/50 mix). I also added the ability to read a single IndexedFaceSet node .wrl file as input. Run the program with the syntax: java LODModeler 'input_filename' 50 1.2 'output_filename', where input_filename and output_filename are .wrl files, the 50 represents the vertex reduction percentage, and the 1.2 represents the edge length weighting (vs. curvature weighting). The program runs fine with all of Stan's example models but has problems with the anatomy ones. I need to enhance the algorithm for conditions in the anatomy data, but I am pleased with the overall quality of the approach. |
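For the record, the weighting enters the edge-collapse ranking roughly as shown below. This is one plausible, purely illustrative form; the actual cost formula in LODModeler may combine the two terms differently.

    // Illustrative edge-collapse cost used when ranking candidate edges: a relative
    // weighting of edge length versus local curvature. A weight of 1.0 treats the
    // terms evenly (the original 50/50 mix); 1.2 leans the ranking toward edge length.
    public class CollapseCost {
        public static double cost(double edgeLength, double curvature, double edgeLengthWeight) {
            return edgeLengthWeight * edgeLength + curvature;
        }
    }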
Wednesday, Mar 8 |
Began the build of distal, a Red Hat 6.1 Linux box. Used a CD-ROM to install and then installed the Bastille RPM to provide tighter security for box access. Installed the latest Java 1.2.2 and Java 3D port for Linux from blackdown.org. Installed Mesa in order to get OpenGL support on distal. I compiled with make linux-i386 and copied the necessary .so file(s) into the Java 3D file hierarchy (based on where Java 3D was asking for them when I ran the VAL). |
T-W, Mar 14-15 |
Investigated dynamic texturing routines and read up on the QSlim polygon reduction approach. Set up distal for remote running of skandha4 programs on pollex. To get it working, I had to:
    scp -C * pollex.biostr.washington.edu:~bdc/scripts
I spent the rest of the time running scripts and investigating the skandha4 structure.lsp, structure-interface.lsp and scene-generator.lsp files and re-running our skandha4 tutorial. |
T-W, Mar 21-22 |
I spent the week ramping up on skandha4, checking out the skandha4 source code from CVS, running the tutorials, and getting skandha4 to run locally on distal. Even the graphics-server can be run on distal now, thanks to help from Evan and Andrew. I put together the right commands to create VRML files from the body parts. Will actually create the .wrl files next week. One concern is the fact that I have to explicitly define normals to see faceted geometry appear in the skandha4 output window. I need to get the facets visible before I can test the texture coordinates, which are probably not going to work since texturing has not been a high priority in the past. |
W-F, Mar 29-31 |
I continued ramping up on skandha4, stubbornly working to figure out texture transforms on triangles. I finally got it on Friday and created this tutorial. It turns out I don't need normals if I can live with the default skandha normals, which are perpendicular to the XY plane (along the Z axis). I also better understand how to call functions in external .lsp files now, which helps keep our code modular. I was able to texture map a lung in 3D Studio Max and bring the model into the Virtual Anatomy Lab. I used a default UV mapping without treating each polygon uniquely. The model looked OK and performance was not hampered at all (as we expected, since we are polygon-bound right now). I am still struggling to create an assembly-line-style process for generating levels of detail that map exact texture maps onto the anatomy structures. But I remain very hopeful (in fact sure) that we'll eventually get there in a very impressive manner. |
W-F, Apr 5-7 |
I continued ramping up on skandha4, cleaning up the tutorials and checking them back into CVS. I have gained confidence that most OpenGL features I am used to through the Java 3D API are also available through skandha4. The week was spent reading a wide range of material on Linux and xlisp as well. |
W-F, Apr 12-14 |
I started downloading the other thoracic 3D models in VRML format. I am not saving any materials or lights with each file. Those should be added systematically, similar to the scene generator. I did very little with textures, getting sidetracked on small stuff instead. CR's meeting for next week is cancelled anyway. |
W-F, Apr 19-21 |
I finished downloading the other thoracic 3D models in VRML format. I have not downloaded the parts of the lung or each vertebra separately. I can use the arteries to try out interaction ideas with each user. I also investigated the Java 2D package for the first time. The demonstrations that come from Sun are very impressive, but for some reason I can't figure out how to make a BufferedImage's background transparent. The backgrounds appear transparent in the demos, but not in my test puzzle application. I can make the image background transparent or opaque with any color, so there should be no problem with the actual puzzle piece backgrounds. Hopefully, I will find an answer to this quickly. Oh, and I like Jim's thought of focusing the skandha4 Open Source effort on the graphics server only. |
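For what it's worth, the usual Java 2D route to a transparent background is to create the image with an alpha channel and clear it with a fully transparent composite. This is a generic sketch, not the puzzle code itself.

    import java.awt.AlphaComposite;
    import java.awt.Graphics2D;
    import java.awt.image.BufferedImage;

    // Creates a BufferedImage whose unpainted pixels remain fully transparent.
    public class TransparentImageFactory {
        public static BufferedImage create(int width, int height) {
            // TYPE_INT_ARGB keeps an alpha channel; TYPE_INT_RGB would not.
            BufferedImage image = new BufferedImage(width, height, BufferedImage.TYPE_INT_ARGB);
            Graphics2D g2 = image.createGraphics();
            // Clear to transparent rather than painting an opaque background color.
            g2.setComposite(AlphaComposite.Clear);
            g2.fillRect(0, 0, width, height);
            g2.setComposite(AlphaComposite.SrcOver);
            g2.dispose();
            return image;
        }
    }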
W-F, Apr 26-28 |
I investigated the skandha4 src modules more closely based on our Monday tech meeting discussion with Evan and Jim. The organization of the packages makes sense, but the C itself is a bit overwhelming. Scarier yet is the thought of garbage-collector-generated bugs. I found out that I can't make a BufferedImage transparent. But the issue ended up being moot when I realized I wanted all the objects to be drawn on the same BufferedImage in the first place. Marcel and I redid the code to draw all body part images on the same BufferedImage. We had to redo the rotation calls to rotate first and then draw on the 2D graphics buffer, as opposed to rotating the buffer itself. |
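A small sketch of the rotate-then-draw order we settled on, assuming each part image is drawn at a known center point on the shared buffer; the class and parameter names are illustrative.

    import java.awt.Graphics2D;
    import java.awt.geom.AffineTransform;
    import java.awt.image.BufferedImage;

    // Draws one body part image onto the shared buffer, rotated about its own
    // center, without rotating the buffer itself.
    public class PartRenderer {
        public static void drawRotated(Graphics2D shared, BufferedImage part,
                                       int centerX, int centerY, double angleRadians) {
            AffineTransform saved = shared.getTransform();  // remember the buffer's transform
            shared.rotate(angleRadians, centerX, centerY);  // rotate only for this part
            shared.drawImage(part, centerX - part.getWidth() / 2,
                             centerY - part.getHeight() / 2, null);
            shared.setTransform(saved);                     // restore for the next part
        }
    }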
W-F, May 3-5 |
The week was a very hectic one with our 19th Virtual Worlds Consortium at the HIT Lab. I videotaped all the presenters and hosted the Sun representatives. We discussed the future of Java 3D. It seems the Sun folk think Java 3D is getting very popular, but they continue to consider the distribution to be unimportant at this time. They hope that Netscape 6 will make Java 3D applets easier to deliver to users, but stake their interests in writing Java 3D applications, not applets. Java 3D should take a single programmer-month to get running on the new Mac OS X (OS 10) operating system, but there are other hurdles. Perhaps the QuickTime and other media groups may balk at Java 3D support. Sun believes that will be overcome within the next year and that Java 3D applications will run well on Mac systems. Next week I need to dive into all the demos on the DA internal pages and plan out how to get them all running to show off our progress to the grant reviewers. Lots of big picture thinking ahead in order to apply my time wisely to the group's needs. |
W-F, May 10-12 |
Marcel finished his work on the DAClient Puzzle. I reviewed his work and we got the rotation working with a simpler interface. Per Jim, he will work on connecting Jakob and Therese's code to all four atlases next. I am having problems with BICEPS. Somehow, I can't access my home directory from it, nor can I run Evan's Scene Viewer/Generator from his home page at tela/~ema. Bill Barker will help me get BICEPS working as well as it was before the name change. |
W-F, May 17-19 |
I am adding the anatomy 3-D mesh VRML files to the Virtual Anatomy Lab over time, testing each model as I go. Hopefully, a wider variety of 3D structures will make the VAL look more impressive. I need to focus on the current DA demonstrations for a while and get them to a point where interested parties can run them over the Web. The grant review committee might like to see working examples of what we've been up to. I tried to get WinCVS working without success. Darren tried to help me create the RSA key I need but it's not working properly. I started the chair purchasing fiasco. |
May 24 & June 2 |
Spent the time with Kevin and Evan looking at their code bases: non-photorealistic rendering techniques, scene generator GUIs, and multi-threaded rendering. I will wait until after they have checked their code into CVS before I consider integrating their code bases into our production system (skandha). I took a nine-day vacation and reviewed Marcel's progress on the DAClient upon my return. The DAClient changes will be ready for check-in to CVS once we can access the images from a back-end database. Also got BICEPS connected to my home directory again. The chairs were delivered to H227 in my absence. |
W-F, June 7-9 |
Spent most of the week creating a Software Map to map our existing DA software projects into a 3D cube. The cube axes are Data Storage, Interaction, and Rendering, and each axis is a continuum from pure client processing to pure server processing. The idea is to see how our software currently maps and to discuss the implications. When considering a fuzzy cube, I would think our projects would move toward the center of the cube as they got more intelligent (intelligence being fuzzy?). The rest of the week I spent reviewing Evan's Scene Viewer and Scene Generator to make sure I understood the code. I won't make any changes until Evan has his code checked into CVS. I think I will start by simplifying the interface depending on user task and have the interface better track where the user is in each process. I have not worked on the VAL for a month now except to add more thoracic models a little at a time (I am about half way done). I need to start playing with Microsoft's VWorlds platform. I will be there for a week of bootcamp June 26-30 and will want to get something impressive accomplished. I am leaning towards getting plants to grow in there, but may get a better brainstorm over the next two weeks while working with their code base. |
T-F, June 13-16 |
Got the DAClient working with the other three atlases. Sent the code to the Swedes for their review. Once I hear back from them, I will put the code into CVS. The Knee is problematic in that the frame_table.txt file has no high, second-level term. Without it, the JFrame opens with the images expanded. Since it takes a while to get the images and render them properly, the interface appears to hang for the user. So, I have commented out the Knee for now. Ideally, I could capture the second level and artificially create a third level before continuing to load the atlas. Jim and I discussed my priorities for the summer. Evan's Graphics Server is the top priority, along with getting an abstract in to MMVR2001. |
T-F, June 20-23 |
Worked with Jim on the Grant Renewal illustrations. Created a simple DA Demonstration Page to be improved for grant inclusion. Put a new version of the DAClient on-line after finding a problem with the previous compilation. I agreed to make the VAL render as similarly as possible to skandha4. Since they both call OpenGL directly, the renders should be able to be identical. Colors and lighting are the place to start, along with putting back the hi-res models of the Aorta, Lung, Spine, and Heart. |
M-F, June 26-30 |
At Microsoft for the week working with their VWorlds software. |
M-R, July 3-6 |
Checked Evan's code into CVS after cleaning out unnecessary files. Found out why the Puzzle wasn't working correctly with the hybrid code. Will need to merge with Jakob's original code if we want to still run it in the Java 1.1.5 Virtual Machine (and thus the most popular Web browsers). Will wait for WinCVS to work before merging since I want to merge on a clean build. Will also want to include the Java 1.2 version for the Java2D incorporation. Java2D is definitely the way to go in the future. It would be great now if the browsers could handle the Java 1.2 executables. Must return to the priority list next week: work on getting the same colors and lights into the VAL (as the scene-generator) and then work on a Java front end for the scene viewer. |
M-R, July 10-13 |
Met with the new summer intern, Elizabeth Walter, who asked that I give her a tour of all the work that has been done in the DA group. I showed her the Demos page and ran her through the work Evan has done on the scene generator and the work the Swedes had done while here. It seems Elizabeth's work will branch out from both of those previous projects. Darren and I got close to getting WinCVS working on the NT boxes. There is still a problem with a script that needs to be run to handle the RSA encryption of the pass phrase. Darren is looking into finding a better script. I spent time learning JBuilder and getting it to run on distal. It seems very similar to Borland's C and C++ IDEs of yesteryear. |
M-R, July 17-20 |
I finished the submission abstract for MedVR 2001 next January. I received a receipt from Jim Westwood. I copied Marcel's work over to distal and started updating the CVS copy that I checked out from the Swedes' work. Should take me a couple of days to make sure it is all OK and ready for update to CVS. Still would like to get WinCVS working though to make the process easier from the NT side. |
M-R, July 24-27 |
Spent the week at SIGGRAPH 2000 in New Orleans. A very busy week of wearing my HIT Lab ambassador cap, but not too much accomplished on behalf of the DA group. The HITL Magic Book did well for a second straight year, getting some press in the US and lots of press in Asia. I did attend a BOF meeting of the NLM. Learned about their INSIGHT software project and the fact that they were more interested in funding registration and segmentation proposals than interface proposals for the next eighteen months. The food was good but too many Texans (UHouston, Texas, and Texas A&M) for my liking. UColorado guys seemed very smug and the speech by the NLM rep had typical government overtones. Met a professor from Taiwan that I knew from VRST '98. |
T-F, August 1-4 |
Attended a review meeting of Evan's Scene Viewer and Scene Generator work. Set up a mirror of his work in my own subdirectory path at /usr/people/bdc/cgi-bin and have begun to consider changes requested by Sara and CR. I am very interested in using Java as a front end and worked with Elizabeth to make sure we could drive skandha4 images from a Java Swing front end. We were successful after a couple of days of tweaking the code. Jim and I met often to discuss changes in funding for September 1st. I may have to cut my hours to a 20% support level if other funding doesn't become available to Jim. |
M-R, August 7-10 |
Met with Sara to review the Scene Viewer, Scene Manager, and Scene Generator (the forms-based versions). Agreed to these titles as standard names and agreed that the Scene Viewer should remain forms-based until Java 2 is incorporated into the standard Web browsers (Netscape and IE). The Scene Generator and Scene Manager are ripe for Java front ends since they will be used primarily by in-house users for a while. Jim and I spoke about alternative funding sources for me to maintain a 40% funding rate from the Biostructure group. I may do some work on CR's behalf and will wait for him to contact me with further information (Protégé plug-in work, perhaps?). |
M-R, August 14-17 |
Rebuilt what was pollex.biostr as distal.biostr to get distal on the network as a standard Debian installation. Glad to have a standard box, though it required a bit of tweaking to get the Microsoft Intellimouse working as a three-button mouse and to get copy and paste working (the answer was to change the /etc/X11/XF86Config file so the pointer section included: Protocol "IMPS/2"). We also set up tcsh as my default shell so I could review my previous commands. I continued work on the Java-based front end to the Scene Generator, adding buttons to do output transformations (rotation, zoom, and reset). Next, I need to make changes to the cgi code to incorporate changes suggested by Sara. |
T-F, August 22-25 |
Worked with Elizabeth Walter to finish her summer contribution and include it in the CVS repository. Her work is in http://sig.biostr.washington.edu/cgi-bin/cvsweb/src/atlas/client/java_swing/ExplorerStart/ with the same parent directory as the Swedes' Image Retrieval and DAClient work. Continued conversations with Jim as to a potential journal paper for the 3D Scene Generator. |
M-R, September 4-7 |
Monday was the Labor Day holiday. I attended the OWorlds 2000 Architecture Summit in Boulder Creek, California, where we discussed and documented approaches for a connected 3D cyberspace across clients and servers (supported by corporate partners). The OWorlds community would create the OWorlds kernel. |
September 10-22 |
Summer vacation in the Palouse, Oregon, and the Redwoods. Received word that we can submit a paper to go with the MedVR poster session. Paper Guidelines are here. |
M-R, September 25-28 |
Spent the week defining a project that I could focus on and implement by the end of the year 2000. Suggested a Skandha GUI Componentization Project whereby Skandha GUIs could be generated more easily through an encapsulation and service architecture. Darren mentioned looking into JavaBeans as a model for such an architecture. |
M-R, October 2-5 |
Began reading up on JavaBeans in Sun's JavaBeans Component APIs for Java document and doing some of the exercises to see how Beans development goes. |
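A minimal example of the bound-property pattern from those exercises, which is the piece most relevant to the componentization idea; the bean and property names here are placeholders.

    import java.beans.PropertyChangeListener;
    import java.beans.PropertyChangeSupport;
    import java.io.Serializable;

    // A minimal JavaBean with one bound property: registered listeners (e.g., other
    // GUI components) are notified whenever the property changes.
    public class SceneAngleBean implements Serializable {
        private final PropertyChangeSupport changes = new PropertyChangeSupport(this);
        private double rotationDegrees;

        public double getRotationDegrees() {
            return rotationDegrees;
        }

        public void setRotationDegrees(double degrees) {
            double old = rotationDegrees;
            rotationDegrees = degrees;
            changes.firePropertyChange("rotationDegrees", new Double(old), new Double(degrees));
        }

        public void addPropertyChangeListener(PropertyChangeListener l) {
            changes.addPropertyChangeListener(l);
        }

        public void removePropertyChangeListener(PropertyChangeListener l) {
            changes.removePropertyChangeListener(l);
        }
    }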
M-R, October 9-12 |
Continued doing the JavaBeans exercises until I thought I understood the general principles. Started diagramming a possible architecture for implementation of the Skandha GUI Componentization Project. Started putting together the MedVR paper on the Virtual Anatomy Lab. |
M-R, October 16-19 |
Created base classes for the Skandha GUI Componentization Project and implemented the RotateSceneClockwise class as an example use of the architecture. Need to spend more time on scene save and retrieval. Finished the MedVR Paper on the Virtual Anatomy Lab and sent it off to the program committee chair. |
-- end -- |