Parallel Beam Tracing and Visualization of 200 Million Sonar Points
For the final project in the UIC Spring 2013 Parallel Processing class, I worked with a classmate to optimize our existing implementation of the sonar beam tracer used for the NASA ENDURANCE project.
The beam tracer is used to process the raw data collected by the ENDURANCE AUV in Lake Bonney, Antarctica, and correct sound beam paths using information about the water chemistry. The data is clustered and noise-filtered to generate a final 3D point cloud that can be visualized in our CAVE2 system.
Several corrections and adjustments need to be applied to this data before the final results are acceptable. Some of the corrections derive from the sound speed estimation through the water column, AUV navigation corrections, water level changes, and noise filtering thresholds. All of these parameters influence the final result, and ideally researchers should be able to tweak them and see the effect of their changes in real time.
Given the size of the data (about 200 million unique sonar range returns), reprocessing the data sequentially using the ENDURANCE dttools (https://code.google.com/p/dttools/) takes about an hour.
The objective of our project was to use the computational power of the CAVE2 cluster to bring this time down as much as possible, with minimal changes to the tools themselves.
How Sound Travels Through Water

The basic measurement of most sonar systems is the time it takes a sound pressure wave, emitted from some source, to reach a reflector and return. The observed travel time is converted to distance using an estimate of the speed of sound along the sound wave's travel path. The speed of sound in sea water is roughly 1500 meters/second. However, the speed of sound varies with changes in temperature, salinity, and pressure. These variations can drastically change the way sound travels through water: changes in sound speed between bodies of water, either vertically in the water column or horizontally across the ocean, cause the trajectory of sound waves to bend. It is therefore imperative to understand the environmental conditions in which sonar data is taken. The composite effects of temperature, salinity, and pressure are measured in a vertical column of water to provide what is known as the "sound speed profile" or SSP. The SSP shown below has been generated using a Chen/Millero sound speed model over real data collected during the ENDURANCE 2009 mission.
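The bending described above can be illustrated with a small Snell's-law ray trace through a layered SSP. This is a simplified sketch, not the dtrt/MB-System implementation, and the layer thicknesses and sound speeds below are made-up illustrative values:

```python
import math

def trace_ray(theta0_deg, layers):
    """Trace a sound ray through horizontally stratified water layers.

    Snell's law for a stratified medium keeps sin(theta)/c constant along
    the ray, so every change in sound speed bends the beam. theta is
    measured from the vertical; layers is a list of (thickness_m, c_m_s).
    Returns (horizontal_offset_m, travel_time_s) at the bottom layer.
    """
    snell = math.sin(math.radians(theta0_deg)) / layers[0][1]  # ray invariant
    x, t = 0.0, 0.0
    for dz, c in layers:
        sin_t = snell * c
        if sin_t >= 1.0:            # ray refracts back upward (turning point)
            break
        cos_t = math.sqrt(1.0 - sin_t * sin_t)
        path = dz / cos_t           # slant length through this layer
        x += path * sin_t           # horizontal displacement
        t += path / c               # travel time through this layer
    return x, t

# A beam launched 30 degrees off vertical through a two-layer profile:
# the faster lower layer bends it further away from the vertical.
offset, time = trace_ray(30.0, [(10.0, 1440.0), (10.0, 1460.0)])
```

Inverting this relationship, i.e. finding where a measured travel time actually places the reflector given the SSP, is essentially what the beam tracer has to do for each of the 200 million returns.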
*Figure: The ENDURANCE sonar data in CAVE2. Photo © Lance Long.*
MPI dttools

We parallelized two of the dttools using MPI:
- dtrt: the sonar beam tracer, adapted from the MB-System implementation.
- dtmerge: the point cloud merger, cleaner, and normal estimator.
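Beam tracing is embarrassingly parallel over sonar returns, so each MPI rank can process its own contiguous slice of the input independently. A minimal sketch of that static decomposition follows; the actual rank-to-slice mapping in mpidtrt may differ:

```python
def partition(n_items, n_ranks):
    """Split n_items into n_ranks contiguous chunks whose sizes differ by
    at most one -- the usual static decomposition when each item (here, a
    sonar range return) costs roughly the same to process."""
    base, extra = divmod(n_items, n_ranks)
    bounds, start = [], 0
    for rank in range(n_ranks):
        size = base + (1 if rank < extra else 0)  # first `extra` ranks get one more
        bounds.append((start, start + size))
        start += size
    return bounds

# Each of the 36 CAVE2 nodes would trace its own [start, end) slice
# of the ~200 million returns:
chunks = partition(200_000_000, 36)
```

With no data dependencies between returns, the only coordination needed is gathering (or sequentially writing) the traced output, which is where the remaining bottleneck shows up below.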
We ran both tools on the 36-node CAVE2 cluster with different numbers of nodes and measured the speedups compared to the sequential version of the code. The results were very satisfying:
A full reprocessing of the data (beam tracing + merging) went from one hour to a little more than four minutes. We were not yet using the full parallel capacity of CAVE2 (only ~4 of the 16 cores on each machine were in use). Further improvements can be made by optimizing a few data access portions of the code, since at this point we are likely hitting a data transfer bottleneck when writing the intermediate and final outputs (these operations are mostly sequential).
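Amdahl's law makes the limit imposed by those sequential write phases concrete. Taking the measured numbers above as rough figures (60 min → ~4 min, i.e. about a 15× speedup on roughly 36 nodes × ~4 cores ≈ 144 ranks), the implied serial fraction can be backed out; this is a back-of-the-envelope sketch, not a measurement:

```python
def amdahl_speedup(serial_fraction, n_procs):
    """Amdahl's law: speedup on n_procs processors when serial_fraction
    of the original runtime is inherently sequential."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

def implied_serial_fraction(speedup, n_procs):
    """Invert Amdahl's law to estimate the serial fraction from an
    observed speedup on n_procs processors."""
    return (1.0 / speedup - 1.0 / n_procs) / (1.0 - 1.0 / n_procs)

# 60 minutes -> ~4 minutes on ~144 ranks (both figures approximate):
s = implied_serial_fraction(60.0 / 4.0, 144)
```

Under this model the implied serial fraction comes out around 6%, which by itself caps the achievable speedup near 1/0.06 ≈ 17× no matter how many cores are added, consistent with the mostly sequential output writes being the next thing to optimize.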
Our next step will be integrating the data processor into our visualization tool, to let researchers control the processing pipeline directly within CAVE2.
You can access the dttools source code for mpidtrt and mpidtmerge here and here.