44-Issue 7
Browsing 44-Issue 7 by Subject "based rendering"
Now showing 1 - 4 of 4
Item ClothingTwin: Reconstructing Inner and Outer Layers of Clothing Using 3D Gaussian Splatting (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Jung, Munkyung; Lee, Dohae; Lee, In-Kwon; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
We introduce ClothingTwin, a novel end-to-end framework for reconstructing 3D digital twins of clothing that capture both the outer and inner fabric, without the need for manual mannequin removal. Traditional 2D "ghost mannequin" photography techniques remove the mannequin and composite partial inner textures to create images in which the garment appears as if it were worn by a transparent model. However, extending such a method to photorealistic 3D Gaussian Splatting (3DGS) is far more challenging: achieving consistent inner-layer compositing across the large sets of images used for 3DGS optimization quickly becomes impractical if done manually. To address these issues, ClothingTwin introduces three key innovations. First, a specialized image acquisition protocol captures two sets of images for each garment: one worn normally on the mannequin (outer layer exposed) and one worn inside-out (inner layer exposed). This eliminates the need to painstakingly edit out mannequins in thousands of images and provides full coverage of all fabric surfaces. Second, we employ a mesh-guided 3DGS reconstruction for each layer and leverage Non-Rigid Iterative Closest Point (ICP) to align the outer and inner point clouds despite their distinct geometries. Third, our enhanced rendering pipeline, featuring mesh-guided back-face culling, back-to-front alpha blending, and recalculated spherical harmonic angles, ensures photorealistic visualization of the combined outer and inner layers without inter-layer artifacts. Experimental evaluations on various garments show that ClothingTwin outperforms conventional 3DGS-based methods, and our ablation study validates the effectiveness of each proposed component.
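The two-layer rendering described in the abstract hinges on discarding Gaussians whose mesh-derived normals face away from the camera and compositing the survivors back to front. The following Python/PyTorch sketch illustrates that idea only; the tensor names and shapes are hypothetical and do not come from the paper's implementation.

import torch

def composite_layers(means, normals, colors, alphas, cam_pos, view_dir):
    """Toy back-to-front alpha compositing of combined outer+inner Gaussians,
    with a mesh-guided back-face culling test (hypothetical tensors/shapes)."""
    # Back-face culling: keep Gaussians whose mesh-derived normal points toward the camera.
    to_cam = cam_pos - means                       # (N, 3) vectors from Gaussian centers to camera
    visible = (normals * to_cam).sum(-1) > 0.0     # front-facing test
    means, colors, alphas = means[visible], colors[visible], alphas[visible]

    # Order the remaining Gaussians back to front along the viewing direction.
    depth = (means - cam_pos) @ view_dir           # signed depth per Gaussian
    order = torch.argsort(depth, descending=True)  # farthest first

    # "Over" compositing, farthest to nearest.
    out = torch.zeros(3)
    for i in order:
        a = alphas[i]
        out = colors[i] * a + out * (1.0 - a)
    return out

A full splatting renderer would do this per pixel with projected 2D footprints and would also re-evaluate the spherical harmonics for the recalculated view angles; the loop above only shows the culling and blending order.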
Item DAATSim: Depth-Aware Atmospheric Turbulence Simulation for Fast Image Rendering (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Saha, Ripon Kumar; Zhang, Yufan; Ye, Jinwei; Jayasuriya, Suren; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
Simulating the effects of atmospheric turbulence for imaging systems operating over long distances is a significant challenge for optical and computer graphics models. Physically-based ray tracing over kilometers of distance is difficult due to the need to define a spatio-temporal volume of varying refractive index. Even if such a volume can be defined, Monte Carlo rendering approximations for light refraction through the environment would not yield the real-time solutions needed for video game engines or online dataset augmentation for machine learning. While existing simulators based on procedurally generated noise or textures have been proposed in these settings, they often neglect the significant impact of scene depth, leading to unrealistic degradations for scenes with substantial foreground-background separation. This paper introduces a novel, physically-based atmospheric turbulence simulator that explicitly models depth-dependent effects while rendering frames at interactive, near real-time rates (>10 FPS) for image resolutions up to 1024×1024 (real-time at 35 FPS for 256×256 with depth, or 33 FPS at 512×512 without depth). Our hybrid approach combines spatially varying wavefront aberrations using Zernike polynomials with pixel-wise depth modulation of both blur (via Point Spread Function interpolation) and geometric distortion or tilt. Our approach includes a novel fusion technique that integrates the complementary strengths of leading monocular depth estimators to generate metrically accurate depth maps with enhanced edge fidelity. DAATSim is implemented efficiently on GPUs using PyTorch, incorporating optimizations such as mixed-precision computation and caching. We present quantitative and qualitative validation demonstrating the simulator's physical plausibility for generating turbulent video. DAATSim is made publicly available and open-source to the community: https://github.com/Riponcs/DAATSim.
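As a toy illustration of depth-modulated blur via PSF interpolation, the PyTorch sketch below blurs an image with a kernel interpolated per depth bin and composites the per-bin results. The names psf_near, psf_far, and the uniform depth binning are illustrative assumptions, not DAATSim's Zernike-derived, spatially varying kernels.

import torch
import torch.nn.functional as F

def depth_aware_blur(img, depth, psf_near, psf_far, n_bins=8):
    """Toy depth-modulated blur: interpolate between two (odd-sized, k x k) PSFs
    per depth bin and composite the per-bin blurred images by a depth mask.
    img: (1, C, H, W); depth: (1, 1, H, W) normalized to [0, 1]."""
    C = img.shape[1]
    out = torch.zeros_like(img)
    edges = torch.linspace(0.0, 1.0, n_bins + 1)
    for b in range(n_bins):
        t = (b + 0.5) / n_bins                        # bin center in [0, 1]
        psf = (1.0 - t) * psf_near + t * psf_far      # linear PSF interpolation
        psf = psf / psf.sum()                         # keep kernel energy normalized
        k = psf.shape[-1]
        w = psf.view(1, 1, k, k).repeat(C, 1, 1, 1)   # one depthwise kernel per channel
        blurred = F.conv2d(img, w, padding=k // 2, groups=C)
        upper = edges[b + 1] + (1e-6 if b == n_bins - 1 else 0.0)
        mask = ((depth >= edges[b]) & (depth < upper)).float()
        out = out + blurred * mask                    # paste the blurred pixels of this bin
    return out

Binning the depth map trades per-pixel accuracy for a small, fixed number of convolutions, which is one common way such simulators keep frame rates interactive; DAATSim's actual pipeline also applies depth-modulated geometric tilt, which is omitted here.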
Item Real-Time Per-Garment Virtual Try-On with Temporal Consistency for Loose-Fitting Garments (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Wu, Zaiqiang; Shen, I-Chao; Igarashi, Takeo; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
Per-garment virtual try-on methods collect garment-specific datasets and train networks tailored to each garment to achieve superior results. However, these approaches often struggle with loose-fitting garments due to two key limitations: (1) they rely on human body semantic maps to align garments with the body, but these maps become unreliable when body contours are obscured by loose-fitting garments, resulting in degraded outcomes; (2) they train garment synthesis networks on a per-frame basis without utilizing temporal information, leading to noticeable jittering artifacts. To address the first limitation, we propose a two-stage approach for robust semantic map estimation. First, we extract a garment-invariant representation from the raw input image. This representation is then passed through an auxiliary network to estimate the semantic map. This enhances the robustness of semantic map estimation under loose-fitting garments during garment-specific dataset generation. To address the second limitation, we introduce a recurrent garment synthesis framework that incorporates temporal dependencies to improve frame-to-frame coherence while maintaining real-time performance. We conducted qualitative and quantitative evaluations to demonstrate that our method outperforms existing approaches in both image quality and temporal coherence. Ablation studies further validate the effectiveness of the garment-invariant representation and the recurrent synthesis framework.
Item WaterGS: Physically-Based Imaging in Gaussian Splatting for Underwater Scene Reconstruction (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Wang, Su Qing; Wu, Wen Bin; Shi, Min; Li, Zhao Xin; Wang, Qi; Zhu, Deng Ming; Christie, Marc; Pietroni, Nico; Wang, Yu-Shuen
Reconstructing underwater object geometry from multi-view images is a long-standing challenge in computer graphics, primarily due to image degradation caused by underwater scattering, blur, and color shift. These degradations severely impair feature extraction and multi-view consistency. Existing methods typically rely on pre-trained image enhancement models as a preprocessing step, but often struggle with robustness under varying water conditions. To overcome these limitations, we propose WaterGS, a novel framework for underwater surface reconstruction that jointly recovers accurate 3D geometry and restores true object colors. The core of our approach lies in introducing a physically-based imaging model into the rendering process of 2D Gaussian Splatting. This enables accurate separation of true object colors from water-induced distortions, thereby facilitating more robust photometric alignment and denser geometric reconstruction across views. Building upon this improved photometric consistency, we further introduce a Gaussian bundle adjustment scheme guided by our physical model to jointly optimize camera poses and geometry, enhancing reconstruction accuracy. Extensive experiments on synthetic and real-world datasets show that WaterGS achieves robust, high-fidelity reconstruction directly from raw underwater images, outperforming prior approaches in both geometric accuracy and visual consistency.
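The WaterGS abstract does not spell out its imaging model, but a common physically-based underwater formation model (attenuated direct signal plus range-dependent backscatter) conveys the idea of separating true object color from water-induced distortion. The sketch below is an assumption-labeled illustration of that generic model, not WaterGS's published formulation; all symbols are hypothetical.

import torch

def underwater_image(J, z, beta_d, beta_b, B_inf):
    """Generic underwater image formation (illustrative, not WaterGS's exact model).
    J:      (..., 3) water-free (restored) object color
    z:      (..., 1) range along the ray
    beta_d: (3,) per-channel attenuation coefficients
    beta_b: (3,) per-channel backscatter coefficients
    B_inf:  (3,) veiling-light color at infinite range"""
    direct = J * torch.exp(-beta_d * z)                    # color lost to attenuation
    backscatter = B_inf * (1.0 - torch.exp(-beta_b * z))   # veiling light added by scattering
    return direct + backscatter

Applying such a model to the splatted color during rendering lets the optimizer fit the water-free color J while the water terms account for the observed degradation, which matches the separation of object color from water-induced distortion that the abstract describes.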