DAATSim: Depth-Aware Atmospheric Turbulence Simulation for Fast Image Rendering
| dc.contributor.author | Saha, Ripon Kumar | en_US |
| dc.contributor.author | Zhang, Yufan | en_US |
| dc.contributor.author | Ye, Jinwei | en_US |
| dc.contributor.author | Jayasuriya, Suren | en_US |
| dc.contributor.editor | Christie, Marc | en_US |
| dc.contributor.editor | Pietroni, Nico | en_US |
| dc.contributor.editor | Wang, Yu-Shuen | en_US |
| dc.date.accessioned | 2025-10-07T05:02:01Z | |
| dc.date.available | 2025-10-07T05:02:01Z | |
| dc.date.issued | 2025 | |
| dc.description.abstract | Simulating the effects of atmospheric turbulence for imaging systems operating over long distances is a significant challenge for optical and computer graphics models. Physically-based ray tracing over kilometers of distance is difficult due to the need to define a spatio-temporal volume of varying refractive index. Even if such a volume can be defined, Monte Carlo rendering approximations for light refraction through the environment would not yield the real-time solutions needed for video game engines or online dataset augmentation for machine learning. While existing simulators based on procedurally-generated noise or textures have been proposed in these settings, they often neglect the significant impact of scene depth, leading to unrealistic degradations for scenes with substantial foreground-background separation. This paper introduces a novel, physically-based atmospheric turbulence simulator that explicitly models depth-dependent effects while rendering frames at interactive/near real-time (> 10 FPS) rates for image resolutions up to 1024×1024 (real-time 35 FPS at 256×256 resolution with depth, or 33 FPS at 512×512 without depth). Our hybrid approach combines spatially-varying wavefront aberrations using Zernike polynomials with pixel-wise depth modulation of both blur (via Point Spread Function interpolation) and geometric distortion (tilt). Our approach includes a novel fusion technique that integrates the complementary strengths of leading monocular depth estimators to generate metrically accurate depth maps with enhanced edge fidelity. DAATSim is implemented efficiently on GPUs using PyTorch, incorporating optimizations such as mixed-precision computation and caching to achieve efficient performance. We present quantitative and qualitative validation demonstrating the simulator's physical plausibility for generating turbulent video. DAATSim is made publicly available and open-source to the community: https://github.com/Riponcs/DAATSim. | en_US |
| dc.description.number | 7 | |
| dc.description.sectionheaders | Image Creation & Augmentation | |
| dc.description.seriesinformation | Computer Graphics Forum | |
| dc.description.volume | 44 | |
| dc.identifier.doi | 10.1111/cgf.70241 | |
| dc.identifier.issn | 1467-8659 | |
| dc.identifier.pages | 10 pages | |
| dc.identifier.uri | https://doi.org/10.1111/cgf.70241 | |
| dc.identifier.uri | https://diglib.eg.org/handle/10.1111/cgf70241 | |
| dc.publisher | The Eurographics Association and John Wiley & Sons Ltd. | en_US |
| dc.subject | CCS Concepts: Computing methodologies → Computational photography; Image-based rendering | |
| dc.subject | Computing methodologies → Computational photography | |
| dc.subject | Image-based rendering | |
| dc.title | DAATSim: Depth-Aware Atmospheric Turbulence Simulation for Fast Image Rendering | en_US |
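
The abstract describes depth modulation of blur via Point Spread Function interpolation. Below is a minimal, hypothetical PyTorch sketch of that idea, not the DAATSim implementation: per-pixel depth selects between a small bank of PSFs of increasing strength, and results are blended by linear interpolation between adjacent depth bins. Gaussian PSFs stand in for the Zernike-derived aberrated PSFs, the tilt/distortion component is omitted, and all names (`gaussian_psf`, `depth_aware_blur`, `n_bins`, `max_sigma`) are illustrative, not the repository's API.

```python
import torch
import torch.nn.functional as F

def gaussian_psf(size: int, sigma: float, device=None) -> torch.Tensor:
    """Stand-in isotropic PSF. DAATSim derives its PSFs from Zernike-polynomial
    wavefront aberrations; a Gaussian keeps this sketch self-contained."""
    ax = torch.arange(size, dtype=torch.float32, device=device) - (size - 1) / 2
    g = torch.exp(-(ax ** 2) / (2 * sigma ** 2))
    psf = torch.outer(g, g)
    return psf / psf.sum()

def depth_aware_blur(img: torch.Tensor, depth: torch.Tensor,
                     n_bins: int = 8, max_sigma: float = 3.0) -> torch.Tensor:
    """img: (1, C, H, W) in [0, 1]; depth: (1, 1, H, W), larger = farther.
    Blurs the frame once per depth bin, then composites per pixel by linearly
    interpolating between the two nearest bins (PSF interpolation)."""
    c = img.shape[1]
    # Normalize depth to [0, 1], then map to a continuous bin coordinate.
    d = (depth - depth.min()) / (depth.max() - depth.min() + 1e-8)
    coord = d * (n_bins - 1)

    out = torch.zeros_like(img)
    for b in range(n_bins):
        # Farther bins get stronger blur: turbulence accumulates over the path.
        sigma = 0.3 + max_sigma * b / (n_bins - 1)
        psf = gaussian_psf(9, sigma, img.device).repeat(c, 1, 1, 1)
        blurred = F.conv2d(img, psf, padding=4, groups=c)
        # Tent weights implement linear interpolation between adjacent bins.
        w = (1.0 - (coord - b).abs()).clamp(min=0.0)
        out += w * blurred
    return out

# Toy usage: a left-to-right depth gradient blurs the far side more.
img = torch.rand(1, 3, 128, 128)
depth = torch.linspace(1.0, 50.0, 128).view(1, 1, 1, 128).expand(1, 1, 128, 128)
turbulent = depth_aware_blur(img, depth)
```

Blurring the full frame once per bin keeps the work to a fixed number of convolutions regardless of resolution, and the per-bin results are cacheable across video frames; this is one plausible reading of how caching, as mentioned in the abstract, can amortize cost, though the actual design may differ.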