44-Issue 4
Browsing 44-Issue 4 by Issue Date
Now showing 1 - 20 of 23
Item Perceived Quality of BRDF Models (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Kavoosighafi, Behnaz; Mantiuk, Rafal K.; Hajisharif, Saghi; Miandji, Ehsan; Unger, Jonas; Wang, Beibei; Wilkie, Alexander
Material appearance is commonly modeled with Bidirectional Reflectance Distribution Functions (BRDFs), which must trade accuracy for complexity and storage cost. To investigate current practices in BRDF modeling, we collect the first high dynamic range stereoscopic video dataset that captures the perceived quality degradation with respect to a number of parametric and non-parametric BRDF models. Our dataset shows that the loss functions currently used to fit BRDF models, such as the mean-squared error of logarithmic reflectance values, correlate poorly with the perceived quality of materials in rendered videos. We further show that quality metrics that compare rendered material samples give a significantly higher correlation with subjective quality judgments, and a simple Euclidean distance in the ITP color space (DEITP) shows the highest correlation. Additionally, we investigate the use of different BRDF-space metrics as loss functions for fitting BRDF models and find that logarithmic mapping is the most effective approach for BRDF-space loss functions.

Item A Data-Driven Approach to Analytical Dwivedi Guiding (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Gouder, Darryl; Vorba, Jirí; Droske, Marc; Wilkie, Alexander; Wang, Beibei; Wilkie, Alexander
Path tracing remains the gold standard for high-fidelity subsurface scattering despite requiring numerous paths for noise-free estimates. We introduce a novel variance-reduction method based on two complementary zero-variance-theory-based approaches. The first, analytical Dwivedi sampling, is lightweight but struggles with complex lighting. The second, surface path guiding, learns incident illumination at boundaries to guide sampled paths, but it does not reduce variance from subsurface scattering. In our novel method, we enhance Dwivedi sampling by incorporating the radiance field learned only at the volume boundary. We use the average normal of points on an illuminated boundary region, or directions sampled from distributions of incident light at the boundary, as our analytical Dwivedi slab normals. Unlike previous methods based on Dwivedi sampling, our method is efficient even in scenes with complex light rigs typical of movie production and under indirect illumination. We achieve comparable noise reduction, and even slightly improved estimates in some scenes, compared to volume path guiding, and our method can easily be added on top of any existing surface path guiding system. Our method is particularly effective for homogeneous, isotropic media, bypassing the extensive training and caching inside the 3D volume that volume path guiding requires.
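For context on the item above: classical Dwivedi sampling draws the direction cosine t relative to a slab normal from p(t) proportional to 1/(nu0 - t), where nu0 > 1 solves the zero-variance relation alpha * nu0 * artanh(1/nu0) = 1 for single-scattering albedo alpha. The sketch below shows only this textbook building block, not the paper's data-driven extension; function names are illustrative.

```python
import math
import random

def dwivedi_nu0(albedo, lo=1.0 + 1e-7, hi=1e7, iters=100):
    """Root of alpha * nu * artanh(1/nu) = 1 for nu > 1 (classical zero-variance slab result)."""
    def f(nu):
        return albedo * nu * math.atanh(1.0 / nu) - 1.0
    for _ in range(iters):          # bisection: f decreases monotonically in nu
        mid = 0.5 * (lo + hi)
        if f(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def sample_dwivedi_cosine(nu0, xi):
    """Sample t = cos(theta) w.r.t. the slab normal from p(t) ~ 1/(nu0 - t), t in [-1, 1]."""
    return nu0 - (nu0 + 1.0) * ((nu0 - 1.0) / (nu0 + 1.0)) ** xi

def dwivedi_pdf(nu0, t):
    return 1.0 / ((nu0 - t) * math.log((nu0 + 1.0) / (nu0 - 1.0)))

nu0 = dwivedi_nu0(albedo=0.95)
t = sample_dwivedi_cosine(nu0, random.random())
print(nu0, t, dwivedi_pdf(nu0, t))
```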
Item Real-time Image-based Lighting of Glints (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Kneiphof, Tom; Klein, Reinhard; Wang, Beibei; Wilkie, Alexander
Image-based lighting is a widely used technique to reproduce shading under real-world lighting conditions, especially in real-time rendering applications. A particularly challenging scenario involves materials exhibiting a sparkling or glittering appearance, caused by discrete microfacets scattered across their surface. In this paper, we propose an efficient approximation for image-based lighting of glints, enabling fully dynamic material properties and environment maps. Our novel approach is grounded in real-time glint rendering under area-light illumination and employs standard environment map filtering techniques. Crucially, our environment map filtering process is fast enough to be executed on a per-frame basis. Our method assumes that the environment map is partitioned into a few homogeneous regions of constant radiance. By filtering the corresponding indicator functions with the normal distribution function, we obtain the probabilities for individual microfacets to reflect light from each region. During shading, these probabilities are used to hierarchically sample a multinomial distribution, facilitated by our novel dual-gated Gaussian approximation of binomial distributions. We validate that our real-time approximation is close to ground-truth renderings for a range of material properties and lighting conditions, and demonstrate robust and stable performance with little overhead over rendering glints from a single directional light. Compared to rendering smooth materials without glints, our approach requires twice as much memory to store the prefiltered environment map.
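The per-region microfacet counts in the item above follow a multinomial distribution; a common way to draw them hierarchically is to split the facet budget with clamped Gaussian approximations of binomials. The sketch below uses that plain approximation, not the paper's dual-gated variant, and the function names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

def approx_binomial(n, p, rng):
    """Draw ~Binomial(n, p) via a rounded, clamped Gaussian (cheap stand-in for exact sampling)."""
    if n == 0 or p <= 0.0:
        return 0
    if p >= 1.0:
        return n
    mean, std = n * p, np.sqrt(n * p * (1.0 - p))
    return int(np.clip(np.round(mean + std * rng.standard_normal()), 0, n))

def sample_multinomial_hierarchical(n, probs, rng):
    """Split n microfacets over regions by recursively halving the probability vector."""
    if len(probs) == 1:
        return [n]
    half = len(probs) // 2
    p_total = float(np.sum(probs))
    p_left = float(np.sum(probs[:half]))
    n_left = approx_binomial(n, p_left / p_total, rng) if p_total > 0 else 0
    return (sample_multinomial_hierarchical(n_left, probs[:half], rng)
            + sample_multinomial_hierarchical(n - n_left, probs[half:], rng))

counts = sample_multinomial_hierarchical(10000, np.array([0.5, 0.2, 0.2, 0.1]), rng)
print(counts, sum(counts))  # counts per environment-map region, summing to the facet budget
```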
Item Real-time Level-of-detail Strand-based Rendering (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Huang, Tao; Zhou, Yang; Lin, Daqi; Zhu, Junqiu; Yan, Ling-Qi; Wu, Kui; Wang, Beibei; Wilkie, Alexander
We present a real-time strand-based rendering framework that ensures seamless transitions between different levels of detail (LoD) while maintaining a consistent appearance. We first introduce an aggregated BCSDF model to accurately capture both single and multiple scattering within a cluster of hairs or fibers. Building upon this, we further introduce an LoD framework for hair rendering that dynamically, adaptively, and independently replaces clusters of individual hairs with thick strands based on their projected screen widths. Through tests on diverse hairstyles with various hair colors and animations, as well as on knit patches, our framework closely replicates the appearance of multiple-scattered full geometries at various viewing distances, achieving up to a 13× speedup.

Item MatSwap: Light-aware Material Transfers in Images (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Lopes, Ivan; Deschaintre, Valentin; Hold-Geoffroy, Yannick; Charette, Raoul de; Wang, Beibei; Wilkie, Alexander
We present MatSwap, a method to realistically transfer materials to designated surfaces in an image. Such a task is non-trivial due to the large entanglement of material appearance, geometry, and lighting in a photograph. In the literature, material editing methods typically rely on either cumbersome text engineering or extensive manual annotations requiring artist knowledge and 3D scene properties that are impractical to obtain. In contrast, we propose to directly learn the relationship between the input material, as observed on a flat surface, and its appearance within the scene, without the need for explicit UV mapping. To achieve this, we rely on a custom light- and geometry-aware diffusion model. We fine-tune a large-scale pre-trained text-to-image model for material transfer using our synthetic dataset, preserving its strong priors to ensure effective generalization to real images. As a result, our method seamlessly integrates a desired material into the target location in the photograph while retaining the identity of the scene. MatSwap is evaluated on synthetic and real images, showing that it compares favorably to recent works. Our code and data are publicly available at https://github.com/astra-vision/MatSwap

Item Wavelet Representation and Sampling of Complex Luminaires (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Atanasov, Asen; Koylazov, Vladimir; Wang, Beibei; Wilkie, Alexander
We contribute a technique for rendering the illumination of complex luminaires based on wavelet-compressed light fields, while the direct appearance of the luminaire is handled with previous techniques. During a brief photon tracing phase, we precompute the radiance field of the luminaire. Then, we employ a compression scheme designed to facilitate fast per-ray run-time reconstruction of the field and importance sampling. To treat aliasing, we propose a two-component filtering solution: a 4D Gaussian filter during the pre-computation stage and a 4D stochastic Gaussian filter during rendering. We have developed an importance sampling strategy based on providing an initial guess from low-resolution and low-memory viewpoint samplers that is subsequently refined by a hierarchical process over the wavelet frequency bands. Our technique is straightforward to integrate into rendering systems and has all the features that make it practical for production renderers: MIS compatibility, brief pre-computation, low memory requirements, and efficient field evaluation and importance sampling.

Item Real-Time Importance Deep Shadow Maps with Hardware Ray Tracing (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Kern, René; Brüll, Felix; Grosch, Thorsten; Wang, Beibei; Wilkie, Alexander
Rendering shadows for semi-transparent objects like smoke significantly enhances the realism of the final image. With advancements in ray tracing hardware, tracing visibility rays in real time has become possible. However, generating shadows for semi-transparent objects requires evaluating multiple or all intersections along the ray, resulting in a deep shadow ray. Deep Shadow Maps (DSM) offer an alternative but are constrained by their fixed resolution. We introduce Importance Deep Shadow Maps (IDSM), a real-time algorithm that adaptively distributes deep shadow samples based on importance captured from the current camera viewport. Additionally, we propose a novel DSM data structure built on the ray tracing acceleration structure, improving performance for scenarios requiring many samples per DSM texel. Our IDSM approach achieves speedups of up to 6.89× compared to hardware ray tracing while maintaining a nearly indistinguishable quality level.
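For reference on the deep shadow rays described in the item above: the quantity such a ray accumulates is simply the product of per-intersection transmittances between the shading point and the light. The sketch below shows only that accumulation; the paper's IDSM data structure and hardware traversal are not reproduced, and the names are illustrative.

```python
def deep_shadow_transmittance(hits, t_light):
    """Accumulate transmittance over all semi-transparent hits closer than the light.

    hits: list of (t, alpha) pairs along the shadow ray, alpha in [0, 1].
    """
    transmittance = 1.0
    for t, alpha in sorted(hits):
        if t >= t_light:
            break                         # hits beyond the light do not occlude it
        transmittance *= 1.0 - alpha      # each hit blocks a fraction alpha of the light
        if transmittance < 1e-4:          # early out once the ray is effectively opaque
            return 0.0
    return transmittance

# three smoke-like hits, with the last one behind a light placed at t = 10
print(deep_shadow_transmittance([(2.0, 0.3), (5.5, 0.5), (12.0, 0.9)], t_light=10.0))  # 0.35
```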
Item SPaGS: Fast and Accurate 3D Gaussian Splatting for Spherical Panoramas (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Li, Junbo; Hahlbohm, Florian; Scholz, Timon; Eisemann, Martin; Tauscher, Jan-Philipp; Magnor, Marcus; Wang, Beibei; Wilkie, Alexander
In this paper we propose SPaGS, a high-quality, real-time free-viewpoint rendering approach from 360-degree panoramic images. While existing methods building on Neural Radiance Fields or 3D Gaussian Splatting struggle to achieve real-time frame rates and high-quality results at the same time, SPaGS combines the advantages of an explicit 3D Gaussian-based scene representation and ray casting-based rendering to attain fast and accurate results. Central to our new approach is the exact calculation of axis-aligned bounding boxes for spherical images, which significantly accelerates omnidirectional ray casting of 3D Gaussians. We also present a new dataset consisting of ten real-world scenes recorded with a drone that includes both calibrated 360-degree panoramic images and perspective images captured simultaneously, i.e., along the same flight trajectory. Our evaluation on this new dataset as well as on established benchmarks demonstrates that SPaGS outperforms state-of-the-art methods in terms of both rendering quality and speed.
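As background for the bounding-box acceleration in the item above: a world-space axis-aligned box enclosing a 3D Gaussian splat out to k standard deviations has half-extents k*sqrt(diag(Sigma)) around the mean, with Sigma = R S S^T R^T built from the splat's rotation and scale. The sketch below shows that standard world-space computation only; it is not the paper's exact per-panorama bounding scheme.

```python
import numpy as np

def gaussian_world_aabb(mean, quat_wxyz, scales, k=3.0):
    """k-sigma axis-aligned bounding box of a 3D Gaussian splat in world space."""
    w, x, y, z = quat_wxyz / np.linalg.norm(quat_wxyz)
    R = np.array([  # rotation matrix from a unit quaternion (w, x, y, z)
        [1 - 2 * (y * y + z * z), 2 * (x * y - w * z),     2 * (x * z + w * y)],
        [2 * (x * y + w * z),     1 - 2 * (x * x + z * z), 2 * (y * z - w * x)],
        [2 * (x * z - w * y),     2 * (y * z + w * x),     1 - 2 * (x * x + y * y)],
    ])
    cov = R @ np.diag(np.asarray(scales) ** 2) @ R.T   # Sigma = R S S^T R^T
    half = k * np.sqrt(np.diag(cov))                   # half-extent_i = k * sqrt(Sigma_ii)
    return mean - half, mean + half

lo, hi = gaussian_world_aabb(np.array([0.0, 1.0, 2.0]),
                             np.array([1.0, 0.0, 0.0, 0.0]),  # identity rotation
                             np.array([0.1, 0.2, 0.3]))
print(lo, hi)
```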
Item Multiview Geometric Regularization of Gaussian Splatting for Accurate Radiance Fields (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Kim, Jungeon; Park, Geonsoo; Lee, Seungyong; Wang, Beibei; Wilkie, Alexander
Recent methods, such as 2D Gaussian Splatting and Gaussian Opacity Fields, have aimed to address the geometric inaccuracies of 3D Gaussian Splatting while retaining its superior rendering quality. However, these approaches still struggle to reconstruct smooth and reliable geometry, particularly in scenes with significant color variation across viewpoints, due to their per-point appearance modeling and single-view optimization constraints. In this paper, we propose an effective multiview geometric regularization strategy that integrates multiview stereo (MVS) depth, RGB, and normal constraints into Gaussian Splatting initialization and optimization. Our key insight is the complementary relationship between MVS-derived depth points and Gaussian Splatting-optimized positions: MVS robustly estimates geometry in regions of high color variation through local patch-based matching and epipolar constraints, whereas Gaussian Splatting provides more reliable and less noisy depth estimates near object boundaries and in regions with lower color variation. To leverage this insight, we introduce a median depth-based multiview relative depth loss with uncertainty estimation, effectively integrating MVS depth information into Gaussian Splatting optimization. We also propose an MVS-guided Gaussian Splatting initialization that avoids Gaussians falling into suboptimal positions. Extensive experiments validate that our approach successfully combines these strengths, enhancing both geometric accuracy and rendering quality across diverse indoor and outdoor scenes.

Item StructuReiser: A Structure-preserving Video Stylization Method (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Spetlik, Radim; Futschik, David; Sýkora, Daniel; Wang, Beibei; Wilkie, Alexander
We introduce StructuReiser, a novel video-to-video translation method that transforms input videos into stylized sequences using a set of user-provided keyframes. Unlike most existing methods, StructuReiser strictly adheres to the structural elements of the target video, preserving the original identity while seamlessly applying the desired stylistic transformations. This provides a level of control and consistency that is challenging to achieve with text-driven or keyframe-based approaches, including large video models. Furthermore, StructuReiser supports real-time inference on standard graphics hardware as well as custom keyframe editing, enabling interactive applications and expanding the possibilities for creative expression and video manipulation.

Item A Wave-optics BSDF for Correlated Scatterers (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Yang, Ruomai; Kim, Juhyeon; Pediredla, Adithya; Jarosz, Wojciech; Wang, Beibei; Wilkie, Alexander
We present a wave-optics-based BSDF for simulating the corona effect observed when viewing strong light sources through materials such as certain fabrics or glass surfaces with condensation. These visual phenomena arise from the interference of diffraction patterns caused by correlated, disordered arrangements of droplets or pores. Our method leverages the pair correlation function (PCF) to decouple the spatial relationships between scatterers from the diffraction behavior of individual scatterers. This two-level decomposition allows us to derive a physically based BSDF that provides explicit control over both scatterer shape and spatial correlation. We also introduce a practical importance sampling strategy for integrating our BSDF within a Monte Carlo renderer. Our simulation results and real-world comparisons demonstrate that the method reliably reproduces the characteristics of corona effects in various real-world diffractive materials.
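For context on the pair-correlation decomposition in the item above: in scattering theory, the spatial correlation of isotropically distributed planar scatterers with number density rho enters through the structure factor S(q) = 1 + 2*pi*rho * integral of (g(r) - 1) * J0(q r) * r dr, which multiplies the single-scatterer diffraction term. The sketch below evaluates that textbook relation numerically for an assumed hard-disk-like g(r); it is not the paper's BSDF.

```python
import numpy as np
from scipy.special import j0

def structure_factor(q, g_of_r, rho, r_max=50.0, n=4000):
    """S(q) = 1 + 2*pi*rho * int_0^inf (g(r) - 1) * J0(q r) * r dr (isotropic 2D scatterers)."""
    r = np.linspace(1e-6, r_max, n)
    integrand = (g_of_r(r) - 1.0) * j0(q * r) * r
    return 1.0 + 2.0 * np.pi * rho * np.trapz(integrand, r)

# assumed pair correlation: no scatterers closer than one diameter d, uncorrelated beyond
d, rho = 1.0, 0.3
g = lambda r: np.where(r < d, 0.0, 1.0)

for q in (0.5, 2.0, 5.0):
    print(q, structure_factor(q, g, rho))   # S(q) < 1 at small q indicates short-range order
```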
Item Controllable Biophysical Human Faces (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Liu, Minghao; Grabli, Stephane; Speierer, Sébastien; Sarafianos, Nikolaos; Bode, Lukas; Chiang, Matt; Hery, Christophe; Davis, James; Aliaga, Carlos; Wang, Beibei; Wilkie, Alexander
We present a novel generative model that synthesizes photorealistic, biophysically plausible faces by capturing the intricate relationships between facial geometry and biophysical attributes. Our approach models facial appearance in a biophysically grounded manner, allowing for the editing of both high-level attributes, such as age and gender, and low-level biophysical properties, such as melanin level and blood content. This enables continuous modeling of physical skin properties and correlates changes in skin properties with shape changes. We showcase the capabilities of our framework beyond its role as a generative model through two practical applications: editing the texture maps of 3D faces that have already been captured, and serving as a strong prior for face reconstruction when combined with differentiable rendering. Our model allows for the creation of physically based, relightable, editable faces with consistent topology and UV layout that can be integrated into traditional computer graphics pipelines.

Item Differentiable Search Based Halftoning (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Luci, Emiliano; Wijaya, Kevin Tirta; Babaei, Vahid; Wang, Beibei; Wilkie, Alexander
Halftoning is fundamental to image reproduction on devices with a limited set of output levels, such as printers. Halftoning algorithms reproduce continuous-tone images by distributing dots with a fixed tone but variable size or spacing. Search-based approaches optimize for a dot distribution that minimizes a given visual loss function with respect to an input image. This class of methods is not only the most intuitive and versatile but can also yield the highest-quality results, depending on the merit of the employed loss function. However, their combinatorial nature makes them computationally inefficient. We introduce the first differentiable search-based halftoning algorithm. Our proposed method can natively be used to perform multi-color, multi-level halftoning. Our main insight lies in introducing a relaxation of the discrete choice of dot assignment during the backward pass of the optimization. We achieve this by associating a fictitious distance from the image plane with each dot, embedding the problem in three dimensions. We also introduce a novel loss component that operates in the frequency domain and provides a better visual loss when combined with existing image similarity metrics. We validate our approach by demonstrating that it outperforms stochastic optimization methods in both speed and objective value, while also scaling significantly better to large images. The code is available at https://gitlab.mpi-klsb.mpg.de/aidam-public/differentiable-halftoning
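The halftoning item above mentions a frequency-domain loss component. A generic loss of this kind compares the low-frequency spectral content of the halftone and the target, since the eye blurs the high-frequency dot structure. The sketch below is an illustrative stand-in under that assumption, not the paper's loss; the Gaussian low-pass weight and names are my own choices.

```python
import numpy as np

def lowpass_spectral_loss(halftone, target, sigma=0.08):
    """Compare FFT content of halftone and target under a Gaussian low-pass weight."""
    h, w = target.shape
    fy = np.fft.fftfreq(h)[:, None]           # normalized frequencies in [-0.5, 0.5)
    fx = np.fft.fftfreq(w)[None, :]
    weight = np.exp(-(fx**2 + fy**2) / (2.0 * sigma**2))   # crude stand-in for an HVS model
    diff = np.fft.fft2(halftone) - np.fft.fft2(target)
    return float(np.mean(weight * np.abs(diff) ** 2))

rng = np.random.default_rng(0)
target = np.full((64, 64), 0.3)                        # flat 30% gray patch
halftone = (rng.random((64, 64)) < 0.3).astype(float)  # random dots with matching mean tone
print(lowpass_spectral_loss(halftone, target))
```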
Item VideoMat: Extracting PBR Materials from Video Diffusion Models (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Munkberg, Jacob; Wang, Zian; Liang, Ruofan; Shen, Tianchang; Hasselgren, Jon; Wang, Beibei; Wilkie, Alexander
We leverage finetuned video diffusion models, intrinsic decomposition of videos, and physically based differentiable rendering to generate high-quality materials for 3D models given a text prompt or a single image. First, we condition a video diffusion model to respect the input geometry and lighting condition; this model produces multiple views of the given 3D model with coherent material properties. Second, we use a recent model to extract intrinsics (base color, roughness, metallic) from the generated video. Finally, we use the intrinsics alongside the generated video in a differentiable path tracer to robustly extract PBR materials directly compatible with common content creation tools.

Item Artist-Inator: Text-based, Gloss-aware Non-photorealistic Stylization (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Subias, Jose Daniel; Daniel-Soriano, Saúl; Gutierrez, Diego; Serrano, Ana; Wang, Beibei; Wilkie, Alexander
Large diffusion models have made a remarkable leap in synthesizing high-quality artistic images from text descriptions. However, these powerful pre-trained models still lack control to guide key material appearance properties, such as gloss. In this work, we present a threefold contribution: (1) we analyze how gloss is perceived across different artistic styles (i.e., oil painting, watercolor, ink pen, charcoal, and soft crayon); (2) we leverage our findings to create a dataset of 1,336,272 stylized images of many different geometries in all five styles, including automatically computed text descriptions of their appearance (e.g., "A glossy bunny hand painted with an orange soft crayon"); and (3) we train ControlNet to condition Stable Diffusion XL to synthesize novel painterly depictions of new objects from simple inputs such as edge maps, hand-drawn sketches, or clip art. Compared to previous approaches, our framework yields more accurate results despite the simplified input, as we show both quantitatively and qualitatively.

Item DiffNEG: A Differentiable Rasterization Framework for Online Aiming Optimization in Solar Power Tower Systems (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Zheng, Cangping; Lin, Xiaoxia; Li, Dongshuai; Zhao, Yuhong; Feng, Jieqing; Wang, Beibei; Wilkie, Alexander
Inverse rendering aims to infer scene parameters from observed images. In Solar Power Tower (SPT) systems, this corresponds to an aiming optimization problem: adjusting the heliostats' orientations to shape the radiative flux density distribution (RFDD) on the receiver so that it conforms to a desired distribution. The SPT system is widely favored in the field of renewable energy, where aiming optimization is crucial for ensuring its thermal efficiency and safety. However, traditional aiming optimization methods are inefficient and fail to meet online demands. In this paper, a novel optimization approach, DiffNEG, is proposed. DiffNEG introduces a differentiable rasterization method that models the reflected radiative flux of each heliostat as an elliptical Gaussian distribution. It leverages data-driven techniques to enhance simulation accuracy and employs automatic differentiation combined with gradient descent to achieve online, gradient-guided optimization in a continuous solution space. Experiments on a real large-scale heliostat field with nearly 30,000 heliostats demonstrate that DiffNEG can optimize within 10 seconds, improving efficiency by one order of magnitude compared to the latest DiffMCRT method and by three orders of magnitude compared to traditional heuristic methods, while also exhibiting superior robustness in both steady and transient states.

Item Neural Field Multi-view Shape-from-polarisation (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Wanaset, Rapee; Guarnera, Giuseppe Claudio; Smith, William A. P.; Wang, Beibei; Wilkie, Alexander
We tackle the problem of multi-view shape-from-polarisation using a neural implicit surface representation and volume rendering of a polarised neural radiance field (P-NeRF). The P-NeRF predicts the parameters of a mixed diffuse/specular polarisation model. This directly relates polarisation behaviour to the surface normal without explicitly modelling illumination or the BRDF. Via the implicit surface representation, this allows polarisation to directly inform the estimated geometry, which improves shape estimation and also allows separation of diffuse and specular radiance. For polarimetric images from division-of-focal-plane sensors, we fit directly to the raw data without first demosaicing. This avoids fitting to demosaicing artefacts, and we propose losses and saturation masking specifically designed to handle HDR measurements. Our method achieves state-of-the-art performance on the PANDORA benchmark. We also apply our method in a light-stage setting, providing single-shot face capture.
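As background for the polarimetric item above: a division-of-focal-plane sensor measures intensities behind 0, 45, 90, and 135 degree polarisers, from which the linear Stokes components and the degree and angle of linear polarisation follow directly. The sketch below shows that standard conversion only; the paper instead fits the raw mosaic without demosaicing.

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes components from the four polariser-filtered intensities."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90
    s2 = i45 - i135
    return s0, s1, s2

def dolp_aolp(s0, s1, s2, eps=1e-8):
    """Degree and angle of linear polarisation."""
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, eps)
    aolp = 0.5 * np.arctan2(s2, s1)      # radians, defined modulo pi
    return dolp, aolp

# fully linearly polarised light at 0 degrees: i0 = 1, i90 = 0, i45 = i135 = 0.5
print(dolp_aolp(*linear_stokes(1.0, 0.5, 0.0, 0.5)))   # -> (1.0, 0.0)
```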
Item Rendering 2025 CGF 44-4: Frontmatter (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Wang, Beibei; Wilkie, Alexander; Wang, Beibei; Wilkie, Alexander

Item Continuous-Line Image Stylization Based on Hilbert Curve (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Tong, Zhifang; Zuo, Bolei; Yang, Xiaoxia; Liu, Shengjun; Liu, Xinru; Wang, Beibei; Wilkie, Alexander
Horizontal and vertical lines hold significant aesthetic and psychological importance, providing a sense of order, stability, and security. This paper presents an image stylization method that quickly generates non-self-intersecting, regular continuous lines based on the Hilbert curve, a well-known space-filling curve consisting only of horizontal and vertical segments. We first calculate a grayscale threshold based on gray quantization of the original image and recursively subdivide the cells according to the density in each cell. To avoid generating new feature curves due to limited gray quantization, a probabilistic recursive subdivision is designed to smooth the density. Then, we use the construction rule of the Hilbert curve to generate continuous lines connecting all the cells. Between Hilbert curves of different orders, we construct intersection-free bridge curves composed of horizontal and vertical segments instead of linking them directly with a straight line. Two parameters are provided for flexibly adjusting the resulting effects. The image stylization framework can be generalized to other space-filling curves, such as the Peano curve. Compared to existing methods, our approach generates pleasing results quickly and is fully automated. Numerous results show that our method is robust and effective.

Item Importance Sampling of the Micrograin Visible NDF (The Eurographics Association and John Wiley & Sons Ltd., 2025)
Lucas, Simon; Pacanowski, Romain; Barla, Pascal; Wang, Beibei; Wilkie, Alexander
Importance sampling of the visible normal distribution function (vNDF) is a required ingredient for the efficient rendering of microfacet-based materials. In this paper, we explain how to sample the vNDF of the micrograin material model [LRPB23], which has recently been improved to handle height-normal correlations through a new Geometric Attenuation Factor (GAF) [LRPB24], leading to a stronger impact on appearance compared to the earlier Smith approximation. To this end, we make two contributions: we derive analytic expressions for the marginal and conditional cumulative distribution functions (CDFs) of the vNDF, and we provide efficient methods for inverting these CDFs based respectively on a 2D lookup table and on the triangle-cut method [Hei20].
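To illustrate the tabulated-CDF route mentioned in the last item: any 2D distribution stored on a grid can be sampled by inverting a marginal CDF in one variable and a conditional CDF in the other. The sketch below is a generic version of that standard approach, not the micrograin vNDF itself and not the triangle-cut inversion of [Hei20]; the names are illustrative.

```python
import numpy as np

def build_cdfs(pdf_grid):
    """Marginal CDF over rows and per-row conditional CDFs for a tabulated 2D pdf."""
    pdf = pdf_grid / pdf_grid.sum()
    marginal = pdf.sum(axis=1)
    cdf_marginal = np.cumsum(marginal)
    cdf_cond = np.cumsum(pdf / marginal[:, None], axis=1)
    return cdf_marginal, cdf_cond

def sample_tabulated(cdf_marginal, cdf_cond, u1, u2):
    """Invert the two CDFs with binary search; returns (row, col) bin indices."""
    row = min(int(np.searchsorted(cdf_marginal, u1)), len(cdf_marginal) - 1)
    col = min(int(np.searchsorted(cdf_cond[row], u2)), cdf_cond.shape[1] - 1)
    return row, col

# toy tabulated density peaked toward large row and column indices
grid = np.outer(np.linspace(0.1, 1.0, 32), np.linspace(0.1, 1.0, 64))
cdf_m, cdf_c = build_cdfs(grid)
rng = np.random.default_rng(1)
print(sample_tabulated(cdf_m, cdf_c, rng.random(), rng.random()))
```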