Posts

Showing posts from May, 2013

CDW EDTECH: Visualizing Next-Generation Virtual Reality in Chicago

A Video Feature on CAVE2 by CDW EDTECH. Pretty catchy video!

Omegalib 3.7 released

The omegalib model viewer. Source code at https://omegalib-apps.googlecode.com/svn/trunk/modelView2/modelView.py. Saturn V model (c) Arthur Nishimoto.

This is an intermediate release, before some major planned API additions for 4.0.

Major changes:

general
- Improved support for building custom external modules.
- Additional fixes to runtime application switching.
- Added mission control `-m` flag to the default command line options (see the Command Line wiki page).
- Added support for multi-instance launching (using the `-I` command line option). Multiple instances of omegalib can now easily run on a cluster driving a display system, sharing the same configuration, with each instance controlling a subsection of the display space.

omega
- `DisplayConfig.verbose` mode.
- New python functions: `getViewRay(x, y)`, `broadcastCommand()`.
- The number of lines in the onscreen console can be customized.
- Stereo eye separation can be customized at runtime through `Camera.setEyeSeparation`.

cyclops
- Fi
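To illustrate the multi-instance idea, the sketch below shows one way the display space could be divided among instances: each instance id maps to a vertical column of the overall wall. This is a hypothetical helper for illustration only; `tile_for_instance` is not part of the omegalib API, and omegalib's actual partitioning is driven by its configuration files.

```python
def tile_for_instance(instance_id, num_instances, width, height):
    """Split a display wall into equal-width vertical columns, one
    per instance. Returns (x, y, w, h) in pixels.

    Hypothetical illustration of multi-instance tiling; not the
    actual omegalib logic.
    """
    col_w = width // num_instances
    x = instance_id * col_w
    # The last instance absorbs any remainder pixels so the
    # columns cover the full wall width.
    w = width - x if instance_id == num_instances - 1 else col_w
    return (x, 0, w, height)
```

With four instances on an 8200x2304 wall, instance 0 would get `(0, 0, 2050, 2304)` and instance 3 would get `(6150, 0, 2050, 2304)`, so the same application binary can run everywhere while each copy renders only its own slice.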

Parallel Beam Tracing and Visualization of 200 Million Sonar Points

For the final project in the UIC Spring 2013 Parallel Processing class, I worked with a classmate to optimize our current implementation of a sonar beam tracer used for The NASA ENDURANCE Project. The beam tracer processes the raw data collected by the ENDURANCE AUV in Lake Bonney, Antarctica, and corrects sound beam paths using information about the water chemistry. The data is then clustered and noise-filtered to generate a final 3D point cloud that can be visualized in our CAVE2 system.

The ENDURANCE sonar data in CAVE2. Photo (c) Lance Long.

Many corrections and adjustments need to be applied to this data before the final results are acceptable. Some of the corrections derive from the sound speed estimation through the water column, AUV navigation corrections, water level changes, and noise filtering thresholds. All these parameters influence the final result, and ideally researchers can tweak them and see the effect of their changes in real time. Given the size of
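The core idea behind correcting beam paths with water chemistry is that sound speed varies with depth, so a beam refracts as it crosses layers of different speed. A minimal sketch of that idea, using Snell's law over a layered sound speed profile, might look like the following. This is an illustration of the general technique, not the actual ENDURANCE beam tracer; the function name and layer model are assumptions.

```python
import math

def trace_beam(depths, speeds, theta0_deg):
    """Trace a sound beam through a layered water column with Snell's law.

    depths[i] is the top of layer i (meters), speeds[i] is the sound
    speed in layer i (m/s), theta0_deg is the initial beam angle from
    vertical. Returns the horizontal offset of the beam at each layer
    boundary. Illustrative sketch only, not the ENDURANCE code.
    """
    theta = math.radians(theta0_deg)
    x = 0.0
    offsets = [0.0]
    for i in range(len(speeds) - 1):
        dz = depths[i + 1] - depths[i]
        x += dz * math.tan(theta)  # horizontal travel across this layer
        offsets.append(x)
        # Snell's law: sin(theta1)/c1 == sin(theta2)/c2
        s = math.sin(theta) * speeds[i + 1] / speeds[i]
        if s >= 1.0:  # beam turns back (total internal reflection)
            break
        theta = math.asin(s)
    return offsets
```

In a uniform-speed column the beam travels in a straight line; where speed increases with depth the beam bends away from vertical, which is exactly the effect a naive straight-ray sonar reconstruction gets wrong.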