DAATSim: Depth-Aware Atmospheric Turbulence Simulation for Fast Image Rendering

dc.contributor.author: Saha, Ripon Kumar
dc.contributor.author: Zhang, Yufan
dc.contributor.author: Ye, Jinwei
dc.contributor.author: Jayasuriya, Suren
dc.contributor.editor: Christie, Marc
dc.contributor.editor: Pietroni, Nico
dc.contributor.editor: Wang, Yu-Shuen
dc.date.accessioned: 2025-10-07T05:02:01Z
dc.date.available: 2025-10-07T05:02:01Z
dc.date.issued: 2025
dc.description.abstract: Simulating the effects of atmospheric turbulence for imaging systems operating over long distances is a significant challenge for optical and computer graphics models. Physically-based ray tracing over kilometers of distance is difficult due to the need to define a spatio-temporal volume of varying refractive index. Even if such a volume can be defined, Monte Carlo rendering approximations for light refraction through the environment would not yield real-time solutions needed for video game engines or online dataset augmentation for machine learning. While existing simulators based on procedurally-generated noise or textures have been proposed in these settings, these simulators often neglect the significant impact of scene depth, leading to unrealistic degradations for scenes with substantial foreground-background separation. This paper introduces a novel, physically-based atmospheric turbulence simulator that explicitly models depth-dependent effects while rendering frames at interactive/near real-time (> 10 FPS) rates for image resolutions up to 1024×1024 (real-time 35 FPS at 256×256 resolution with depth or 512×512 at 33 FPS without depth). Our hybrid approach combines spatially-varying wavefront aberrations using Zernike polynomials with pixel-wise depth modulation of both blur (via Point Spread Function interpolation) and geometric distortion or tilt. Our approach includes a novel fusion technique that integrates complementary strengths of leading monocular depth estimators to generate metrically accurate depth maps with enhanced edge fidelity. DAATSim is implemented efficiently on GPUs using PyTorch, incorporating optimizations like mixed-precision computation and caching to achieve efficient performance. We present quantitative and qualitative validation demonstrating the simulator's physical plausibility for generating turbulent video. DAATSim is made publicly available and open-source to the community: https://github.com/Riponcs/DAATSim
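The abstract's core idea, pixel-wise depth modulation of blur via Point Spread Function (PSF) interpolation, can be illustrated with a minimal sketch. This is not DAATSim's implementation (the released code is a GPU-optimized PyTorch pipeline with Zernike-derived PSFs); it is a naive NumPy reference that blends two hypothetical basis PSFs per pixel according to normalized depth, using simple Gaussians as stand-ins for turbulence PSFs:

```python
import numpy as np

def gaussian_psf(size, sigma):
    # Isotropic Gaussian kernel as a stand-in for a Zernike-derived PSF.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def depth_modulated_blur(img, depth, psf_near, psf_far):
    # For each pixel, linearly interpolate between a "near" and a "far" PSF
    # using normalized depth, then apply the blended kernel locally.
    # Naive O(H*W*size^2) loop; a real implementation would batch this on GPU.
    size = psf_near.shape[0]
    pad = size // 2
    h, w = img.shape
    padded = np.pad(img, pad, mode="reflect")
    span = np.ptp(depth)
    d = (depth - depth.min()) / (span + 1e-8)  # 0 = near, 1 = far
    out = np.empty((h, w), dtype=np.float64)
    for i in range(h):
        for j in range(w):
            psf = (1.0 - d[i, j]) * psf_near + d[i, j] * psf_far
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * psf)
    return out
```

Because each blended kernel still sums to one, flat image regions keep their intensity while distant pixels receive the wider (stronger-blur) PSF, which is the depth-dependent degradation the simulator models.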
dc.description.number: 7
dc.description.sectionheaders: Image Creation & Augmentation
dc.description.seriesinformation: Computer Graphics Forum
dc.description.volume: 44
dc.identifier.doi: 10.1111/cgf.70241
dc.identifier.issn: 1467-8659
dc.identifier.pages: 10 pages
dc.identifier.uri: https://doi.org/10.1111/cgf.70241
dc.identifier.uri: https://diglib.eg.org/handle/10.1111/cgf70241
dc.publisher: The Eurographics Association and John Wiley & Sons Ltd.
dc.subject: CCS Concepts: Computing methodologies → Computational photography; Image-based rendering
dc.subject: Computing methodologies → Computational photography
dc.subject: Image-based rendering
dc.title: DAATSim: Depth-Aware Atmospheric Turbulence Simulation for Fast Image Rendering
Files
Original bundle (2 files):
- cgf70241.pdf — 6.99 MB, Adobe Portable Document Format
- paper1227_mm1.zip — 103.77 MB, Zip file