Publications are listed here.
Fire response training in the operating room.
Human-in-the-loop registration of SPIM microscopy volumes.
Virtual exposure simulation of mosquito bites using the Oculus Rift and the Leap Motion controller. A custom controller was built and placed inside a real mosquito repellent can as a novel input device.
Integrating point cloud representation and rendering into electronic health records.
A quick prototype to look into training, teaching, and Vive/Unity integration.
Photogrammetry for point cloud reconstruction from image data sets.
Point cloud rendering of large data sets in a browser.
A large tracking space for immersive VR visualizations and training of future nurses.
Compressing time-variant point cloud data by finding and describing similar motion groups over consecutive frames.
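A minimal sketch of that idea, assuming per-point correspondences between consecutive frames; the names and the bucketing criterion are illustrative, not the actual method:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdint>
#include <map>
#include <vector>

struct Vec3 { float x, y, z; };

// Quantize the per-point motion vector between two consecutive frames so that
// points moving alike land in the same bucket (the criterion is illustrative).
static uint64_t motionKey(const Vec3& from, const Vec3& to, float cell = 0.01f) {
    auto q = [cell](float d) {
        return static_cast<uint64_t>(std::lround(d / cell)) & 0x1FFFFFu;
    };
    return (q(to.x - from.x) << 42) | (q(to.y - from.y) << 21) | q(to.z - from.z);
}

// Group point indices by similar motion; each group can then be stored once as
// a shared motion description plus small per-point residuals.
std::map<uint64_t, std::vector<size_t>>
groupByMotion(const std::vector<Vec3>& frameA, const std::vector<Vec3>& frameB) {
    std::map<uint64_t, std::vector<size_t>> groups;
    const size_t n = std::min(frameA.size(), frameB.size());
    for (size_t i = 0; i < n; ++i)
        groups[motionKey(frameA[i], frameB[i])].push_back(i);
    return groups;
}
```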
Printing on the sides of a book.
A small project to visualize usage patterns of the university's high-throughput computing systems in an immersive VR environment, a six-sided CAVE.
A participant of 2014's Wisconsin Science Fest virtually skiing inside our six-sided CAVE.
Point cloud rendering of large data sets.
This was a collaborative effort between Ross Smith and me, who provided the hardware and programming, and Dan Harvie and Lorimer Moseley from the Body in Mind lab at UniSA. The idea was to use the Oculus Rift with its very immersive wide field of view to investigate where and how pain is learned and memorised (in the muscles or the brain). To do so, a method similar to redirected walking techniques is employed: the gain of the participant's head rotation is multiplied by a constant factor and the onset of pain is measured.
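A minimal sketch of that rotation-gain manipulation in C++; the gain value and names are illustrative, not the study's actual code:

```cpp
// Redirected-walking-style rotation gain: the virtual camera turns faster (or
// slower) than the participant's head by a constant factor. With gain = 1.0
// the virtual and real rotations match exactly.
struct RotationGain {
    float gain = 1.2f;          // illustrative constant factor
    float virtualYawDeg = 0.0f; // accumulated virtual heading
    float lastRealYawDeg = 0.0f;

    void update(float realYawDeg) {
        float delta = realYawDeg - lastRealYawDeg;
        // Unwrap across the +/-180 degree boundary.
        if (delta > 180.0f)  delta -= 360.0f;
        if (delta < -180.0f) delta += 360.0f;
        virtualYawDeg += delta * gain;  // amplified head rotation
        lastRealYawDeg = realYawDeg;
    }
};
```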
My PhD thesis dealt with different rendering techniques and what has to be changed so they can be implemented successfully in Spatial AR (SAR).
Another project involved adapting ray tracing to Spatial AR. I used Nvidia's OptiX framework to implement a real-time capable ray tracer. Again, the user is tracked using an OptiTrack IR tracking system with a rigid-body marker attached to the frame of cheap cinema glasses with the lenses removed. The implemented effects were reflections and refractions.
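The reflection and refraction directions themselves are standard vector math and independent of OptiX; a minimal sketch with illustrative names:

```cpp
#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
};

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Mirror the incident direction i about the surface normal n (both unit length).
Vec3 reflect(const Vec3& i, const Vec3& n) {
    return i - n * (2.0f * dot(i, n));
}

// Bend i through the surface per Snell's law; eta = n1/n2. Returns false on
// total internal reflection, in which case only a reflected ray is traced.
bool refract(const Vec3& i, const Vec3& n, float eta, Vec3& out) {
    const float cosi = -dot(i, n);
    const float k = 1.0f - eta * eta * (1.0f - cosi * cosi);
    if (k < 0.0f) return false;  // total internal reflection
    out = i * eta + n * (eta * cosi - std::sqrt(k));
    return true;
}
```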
By using head-tracking, we can calculate a second, perspective-correct 'virtual' view of the scene and use projective texturing to paint the surface of the prop. The end effect is similar to having cut holes into the solid surface. One application of this technique is purely virtual details on a physical prop. In the best (and most extreme) case, the physical prop is merely the bounding volume of the virtual object, while everything is displayed using view-dependent rendering techniques.
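A minimal sketch of the projective-texturing step, here using GLM for the matrix math (an assumption for illustration; the framework's actual math library may differ):

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Maps world-space prop surface positions into the [0,1]^2 texture space of
// the head-tracked 'virtual' camera, so its rendered image can be painted
// back onto the physical prop via projective texturing.
glm::mat4 makeProjectiveTextureMatrix(const glm::vec3& headPos,
                                      const glm::vec3& target,
                                      float fovYRadians, float aspect) {
    const glm::mat4 bias(0.5f, 0.0f, 0.0f, 0.0f,   // scales/offsets clip space
                         0.0f, 0.5f, 0.0f, 0.0f,   // from [-1,1] to [0,1]
                         0.0f, 0.0f, 0.5f, 0.0f,
                         0.5f, 0.5f, 0.5f, 1.0f);
    glm::mat4 view = glm::lookAt(headPos, target, glm::vec3(0, 1, 0));
    glm::mat4 proj = glm::perspective(fovYRadians, aspect, 0.1f, 100.0f);
    return bias * proj * view;  // apply to world positions, then divide by w
}
```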
I tried using radiance volumes to simulate the lighting conditions and the influence of different projectors on one object in SAR. If the end result had been close enough to reality, it could have been used to compensate for projector brightness and for blending. Unfortunately, it did not work out.
Projected images lack contrast and sharpness/local resolution. We tried to overcome these problems by creating a hybrid display that projects onto an eInk display.
My Diplomarbeit (German diploma thesis). I implemented a ton of (at the time) current-gen game-engine features in the custom graphics engine of our Spatial AR framework. Implemented features included HDR rendering, deferred shading, and relief mapping.
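As a flavour of those features, a minimal sketch of Reinhard-style global tone mapping, the classic final step of an HDR rendering pipeline (the textbook operator, not necessarily the exact curve I used):

```cpp
#include <algorithm>
#include <cmath>

// Reinhard global tone mapping: compresses an HDR luminance value into [0,1)
// so it can be shown on a low-dynamic-range display.
float reinhard(float hdrLuminance) {
    return hdrLuminance / (1.0f + hdrLuminance);
}

// Gamma-encode the tone-mapped value for display (assuming a 2.2 gamma).
float toDisplay(float hdrLuminance, float gamma = 2.2f) {
    float ldr = reinhard(std::max(hdrLuminance, 0.0f));
    return std::pow(ldr, 1.0f / gamma);
}
```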
Demos are great, and this study project aimed to differentiate them beyond calling them 'just' music videos made with game-engine tech. A focus of this project was to document and classify the different types and techniques in demos and thereby enable a common stylistic language (independent of purely technical terms) to describe and compare demos.
This was way before easy-to-program Android tablets were common (Android had not even been officially announced). The DS had a wide range of interesting input options, including a homebrew accelerometer. During this project, the DS controlled an OpenSceneGraph application (with physics) over the Wi-Fi network, as sketched below. All the software was written using the homebrew devkitPro toolchain.
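dswifi gives DS homebrew a BSD-style sockets API, so the control channel can be sketched as plain UDP; the packet layout and names here are illustrative assumptions, not the project's actual protocol:

```cpp
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>
#include <unistd.h>
#include <cstdint>

// Illustrative control packet: button bitmask plus raw accelerometer axes.
struct ControlPacket {
    uint16_t buttons;
    int16_t  accelX, accelY, accelZ;
};

// Send one input sample to the OpenSceneGraph host over UDP.
bool sendControl(const char* hostIp, uint16_t port, const ControlPacket& pkt) {
    int sock = socket(AF_INET, SOCK_DGRAM, 0);
    if (sock < 0) return false;
    sockaddr_in addr{};
    addr.sin_family = AF_INET;
    addr.sin_port   = htons(port);
    inet_pton(AF_INET, hostIp, &addr.sin_addr);
    ssize_t sent = sendto(sock, &pkt, sizeof(pkt), 0,
                          reinterpret_cast<sockaddr*>(&addr), sizeof(addr));
    close(sock);
    return sent == static_cast<ssize_t>(sizeof(pkt));
}
```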
This was a one-semester study project which resulted in an accepted paper about the method we developed. We were working on Augenblick, a real-time ray tracer that matured into a commercial application (Numenus' RenderGin). The idea of the paper was that we could rasterize 32-bit pointers to the NURBS surfaces and use the rasterized result as a lookup for further (more expensive) NURBS intersection calculations.
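A minimal sketch of the lookup step with illustrative names: once one 32-bit ID per pixel has been rasterized, the expensive exact intersection only needs to consider the surface that pixel actually covers.

```cpp
#include <cstdint>
#include <vector>

struct NurbsSurface;  // opaque; the exact intersection is the expensive part

// Resolve which NURBS surface a pixel's rasterized 32-bit ID refers to, so the
// precise intersection test is evaluated for that one surface only.
const NurbsSurface* lookupSurface(const std::vector<uint32_t>& idBuffer,
                                  int width, int x, int y,
                                  const std::vector<const NurbsSurface*>& surfaces) {
    uint32_t id = idBuffer[static_cast<size_t>(y) * width + x];
    if (id == 0 || id > surfaces.size()) return nullptr;  // background pixel
    return surfaces[id - 1];  // IDs are 1-based; 0 marks "no surface"
}
```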