Browsing by Author "Deschaintre, Valentin"
Now showing 1 - 6 of 6
Item: Controlling Material Appearance by Examples (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Authors: Hu, Yiwei; Hašan, Miloš; Guerrero, Paul; Rushmeier, Holly; Deschaintre, Valentin
Editors: Ghosh, Abhijeet; Wei, Li-Yi
Despite the ubiquitous use of material maps in modern rendering pipelines, their editing and control remain a challenge. In this paper, we present an example-based material control method to augment input material maps based on user-provided material photos. We train a tileable version of MaterialGAN and leverage its material prior to guide the appearance transfer, optimizing its latent space using differentiable rendering. Our method transfers the micro- and meso-structure textures of the user-provided target photograph(s), while preserving the structure and quality of the input material. We show that our method can control existing material maps, increasing realism or generating new, visually appealing materials.

Item: Flexible SVBRDF Capture with a Multi-Image Deep Network (The Eurographics Association and John Wiley & Sons Ltd., 2019)
Authors: Deschaintre, Valentin; Aittala, Miika; Durand, Fredo; Drettakis, George; Bousseau, Adrien
Editors: Boubekeur, Tamy; Sen, Pradeep
Empowered by deep learning, recent methods for material capture can estimate a spatially-varying reflectance from a single photograph. Such lightweight capture is in stark contrast with the tens or hundreds of pictures required by traditional optimization-based approaches. However, a single image is often simply not enough to observe the rich appearance of real-world materials. We present a deep-learning method capable of estimating material appearance from a variable number of uncalibrated and unordered pictures captured with a handheld camera and flash. Thanks to an order-independent fusing layer, this architecture extracts the most useful information from each picture, while benefiting from strong priors learned from data. The method can handle both view and light direction variation without calibration. We show how our method improves its prediction with the number of input pictures, and reaches high-quality reconstructions with as few as 1 to 10 images - a sweet spot between existing single-image and complex multi-image approaches.

Item: Floaters No More: Radiance Field Gradient Scaling for Improved Near-Camera Training (The Eurographics Association, 2023)
Authors: Philip, Julien; Deschaintre, Valentin
Editors: Ritschel, Tobias; Weidlich, Andrea
NeRF acquisition typically requires careful choice of near planes for the different cameras, or suffers from background collapse, creating floating artifacts on the edges of the captured scene. The key insight of this work is that background collapse is caused by a higher density of samples in regions near cameras. As a result of this sampling imbalance, near-camera volumes receive significantly more gradients, leading to incorrect density buildup. We propose a gradient scaling approach to counter-balance this sampling imbalance, removing the need for near planes, while preventing background collapse. Our method can be implemented in a few lines, does not induce any significant overhead, and is compatible with most NeRF implementations.
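The gradient-scaling idea in "Floaters No More" above lends itself to a very small code sketch. Below is a minimal, hypothetical PyTorch autograd function that leaves the forward pass untouched and down-weights gradients for samples close to the camera; the quadratic ramp and the clamp to [0, 1] are illustrative assumptions of this sketch, not necessarily the paper's exact formulation.

```python
import torch

class GradientScaler(torch.autograd.Function):
    """Scale per-sample gradients by a factor that grows with distance
    from the camera; the forward pass is the identity (sketch only)."""

    @staticmethod
    def forward(ctx, colors, sigmas, ray_dist):
        # ray_dist: distance of each sample from its camera origin
        ctx.save_for_backward(ray_dist)
        return colors, sigmas, ray_dist

    @staticmethod
    def backward(ctx, grad_colors, grad_sigmas, grad_dist):
        (ray_dist,) = ctx.saved_tensors
        # Illustrative ramp: quadratic in distance, clamped to [0, 1],
        # so near-camera samples receive proportionally smaller gradients.
        scale = torch.clamp(ray_dist ** 2, 0.0, 1.0)
        return grad_colors * scale[..., None], grad_sigmas * scale, grad_dist

# Usage inside a NeRF-style sampler (shapes: colors (..., 3), sigmas (...)):
# colors, sigmas, ray_dist = GradientScaler.apply(colors, sigmas, ray_dist)
```

The "Flexible SVBRDF Capture with a Multi-Image Deep Network" entry above relies on an order-independent fusing layer to combine features from a variable number of input photos. A minimal sketch of that idea follows, assuming a max-pooling fusion (one common permutation-invariant aggregation); the toy encoder/decoder stand in for the paper's full network and are not its actual architecture.

```python
import torch
import torch.nn as nn

class OrderIndependentFusion(nn.Module):
    """Encode each input photo independently, then fuse the per-image
    features with a permutation-invariant max pooling."""

    def __init__(self, feat_channels: int = 64):
        super().__init__()
        # Toy per-image encoder; the real method uses a much deeper network.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, feat_channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat_channels, feat_channels, 3, padding=1), nn.ReLU(),
        )
        # Toy head mapping fused features to 9 SVBRDF channels
        # (e.g. normals, diffuse albedo, roughness, specular).
        self.decoder = nn.Conv2d(feat_channels, 9, 3, padding=1)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        # images: (N, 3, H, W) - N photos of the same surface, in any order.
        feats = self.encoder(images)        # (N, C, H, W) per-image features
        fused = feats.max(dim=0).values     # order-independent fusion
        return self.decoder(fused[None])    # (1, 9, H, W) SVBRDF maps

# Works for any number of input photos:
# maps = OrderIndependentFusion()(torch.rand(5, 3, 256, 256))
```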
Item: Guided Fine-Tuning for Large-Scale Material Transfer (The Eurographics Association and John Wiley & Sons Ltd., 2020)
Authors: Deschaintre, Valentin; Drettakis, George; Bousseau, Adrien
Editors: Dachsbacher, Carsten; Pharr, Matt
We present a method to transfer the appearance of one or a few exemplar SVBRDFs to a target image representing similar materials. Our solution is extremely simple: we fine-tune a deep appearance-capture network on the provided exemplars, such that it learns to extract similar SVBRDF values from the target image. We introduce two novel material capture and design workflows that demonstrate the strength of this simple approach. Our first workflow makes it possible to produce plausible SVBRDFs of large-scale objects from only a few pictures. Specifically, users only need to take a single picture of a large surface and a few close-up flash pictures of some of its details. We use existing methods to extract SVBRDF parameters from the close-ups, and our method to transfer these parameters to the entire surface, enabling the lightweight capture of surfaces several meters wide, such as murals, floors and furniture. In our second workflow, we provide a powerful way for users to create large SVBRDFs from internet pictures by transferring the appearance of existing, pre-designed SVBRDFs. By selecting different exemplars, users can control the materials assigned to the target image, greatly enhancing the creative possibilities offered by deep appearance capture.

Item: A Semi-Procedural Convolutional Material Prior (© 2023 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023)
Authors: Zhou, Xilong; Hašan, Miloš; Deschaintre, Valentin; Guerrero, Paul; Sunkavalli, Kalyan; Kalantari, Nima Khademi
Editors: Hauser, Helwig; Alliez, Pierre
Lightweight material capture methods require a material prior, defining the subspace of plausible textures within the large space of unconstrained texel grids. Previous work has used either deep neural networks (trained on large synthetic material datasets) or procedural node graphs (constructed by expert artists) as such priors. In this paper, we propose a semi-procedural differentiable material prior that represents materials as a set of (typically procedural) grayscale noises and patterns that are processed by a sequence of lightweight, learnable convolutional filter operations. We demonstrate that the restricted structure of this architecture acts as an inductive bias on the space of material appearances, allowing us to optimize the weights of the convolutions per material, with no need for pre-training on a large dataset. Combined with a differentiable rendering step and a perceptual loss, we enable single-image tileable material capture comparable with the state of the art. Our approach does not target pixel-perfect recovery of the material, but rather uses the noises and patterns as input to match the target appearance. As a result, it does not require complex procedural graphs, and has a much lower complexity, computational cost and storage cost. We also enable control over the results, by changing the provided patterns and using guide maps to push the material properties towards a user-driven objective.
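The per-material optimization loop described in "A Semi-Procedural Convolutional Material Prior" above (lightweight learnable convolutions over procedural noise channels, driven by a differentiable rendering step and a perceptual loss) can be sketched roughly as follows. This is a simplified illustration under assumptions: `render_svbrdf`, `perceptual_loss`, the 9-channel SVBRDF layout, and the circular padding used for tileability are placeholders standing in for components the paper defines in detail.

```python
import torch
import torch.nn as nn

def fit_semi_procedural_prior(noises, target_photo, render_svbrdf, perceptual_loss,
                              steps: int = 2000, lr: float = 1e-3):
    """Optimize a small convolutional stack mapping procedural noises/patterns
    to SVBRDF maps so the rendered result matches a single target photo.

    noises:       (1, K, H, W) stack of grayscale noise/pattern channels
    target_photo: (1, 3, H, W) input photograph
    render_svbrdf, perceptual_loss: differentiable placeholders (assumed).
    """
    k = noises.shape[1]
    # Lightweight learnable convolutions; optimized per material, with no
    # pre-training on a dataset. Circular padding keeps outputs tileable
    # (an assumption of this sketch).
    convs = nn.Sequential(
        nn.Conv2d(k, 32, 3, padding=1, padding_mode="circular"), nn.ReLU(),
        nn.Conv2d(32, 32, 3, padding=1, padding_mode="circular"), nn.ReLU(),
        nn.Conv2d(32, 9, 3, padding=1, padding_mode="circular"),  # SVBRDF maps
    )
    opt = torch.optim.Adam(convs.parameters(), lr=lr)
    for _ in range(steps):
        maps = convs(noises)                 # normals / albedo / roughness / ...
        rendering = render_svbrdf(maps)      # differentiable rendering step
        loss = perceptual_loss(rendering, target_photo)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return convs
```

The "Guided Fine-Tuning for Large-Scale Material Transfer" entry above follows a related but distinct recipe: adapt an already-trained capture network to a few exemplars, then run it on the large target picture. A heavily simplified, hypothetical sketch (the L1 loss and the absence of crop handling are assumptions of this sketch, not the paper's exact training setup):

```python
import torch

def guided_finetune(capture_net, exemplar_photos, exemplar_svbrdfs, target_image,
                    steps: int = 500, lr: float = 1e-5):
    """Fine-tune a pretrained appearance-capture network on a few exemplar
    (photo, SVBRDF) pairs, then apply it to the large target image."""
    opt = torch.optim.Adam(capture_net.parameters(), lr=lr)
    for _ in range(steps):
        pred = capture_net(exemplar_photos)            # (N, C, h, w) SVBRDF maps
        loss = torch.nn.functional.l1_loss(pred, exemplar_svbrdfs)
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        # The fine-tuned network now extracts exemplar-like SVBRDF values
        # from the (much larger) target picture.
        return capture_net(target_image)
```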
Item: Spectral Upsampling Approaches for RGB Illumination (The Eurographics Association, 2022)
Authors: Guarnera, Giuseppe Claudio; Gitlina, Yuliya; Deschaintre, Valentin; Ghosh, Abhijeet
Editors: Ghosh, Abhijeet; Wei, Li-Yi
We present two practical approaches for high-fidelity spectral upsampling of previously recorded RGB illumination in the form of an image-based representation such as an RGB light probe. Unlike previous approaches that require multiple measurements with a spectrometer or a reference color chart under a target illumination environment, our method requires no additional information for the spectral upsampling step. Instead, we construct a data-driven basis of spectral distributions for incident illumination from a set of six RGBW LEDs (three narrowband and three broadband), which we employ to represent a given RGB color as a convex combination of the six basis spectra. We propose two different approaches for estimating the weights of the convex combination: (a) a genetic algorithm, and (b) neural networks. We additionally propose a theoretical basis consisting of a set of narrow and broad Gaussians as a generalization of the approach, and also evaluate an alternate LED basis for spectral upsampling. The illumination spectra predicted by our spectral upsampling approach are a good qualitative match to the ground-truth spectra, while the RGB color of the given illumination is matched near-perfectly in the vast majority of cases. We demonstrate that the spectrally upsampled RGB illumination can be employed for various applications, including improved lighting reproduction as well as more accurate spectral rendering.
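To make the convex-combination idea above concrete, here is a minimal, hypothetical sketch that represents an RGB color with six basis spectra and reconstructs a full spectrum. The placeholder basis data, the precomputed `basis_rgb` matrix, and the softmax-parameterized gradient-descent solver are all assumptions of this sketch; the paper estimates the weights with a genetic algorithm or a neural network instead.

```python
import torch

def spectral_upsample_rgb(target_rgb, basis_spectra, basis_rgb,
                          steps: int = 2000, lr: float = 0.05):
    """Upsample an RGB color to a spectrum via a convex combination of basis spectra.

    target_rgb:    (3,) RGB color to upsample
    basis_spectra: (6, W) LED basis spectra over W wavelength bins
    basis_rgb:     (3, 6) RGB color of each basis spectrum (assumed precomputed)
    """
    # Softmax over logits keeps the weights non-negative and summing to one,
    # i.e. a valid convex combination.
    logits = torch.zeros(6, requires_grad=True)
    opt = torch.optim.Adam([logits], lr=lr)
    for _ in range(steps):
        w = torch.softmax(logits, dim=0)                 # convex weights
        loss = torch.sum((basis_rgb @ w - target_rgb) ** 2)
        opt.zero_grad()
        loss.backward()
        opt.step()
    w = torch.softmax(logits, dim=0).detach()
    return w @ basis_spectra                             # upsampled spectrum, (W,)

# Example with random placeholder data (81 wavelength bins assumed):
# spectrum = spectral_upsample_rgb(torch.rand(3), torch.rand(6, 81), torch.rand(3, 6))
```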