Volume 42 (2023)
Browsing Volume 42 (2023) by Issue Date
Now showing 1 - 20 of 243
Item: Corrigendum to "Making Procedural Water Waves Boundary-aware", "Primal/Dual Descent Methods for Dynamics", and "Detailed Rigid Body Simulation with Extended Position Based Dynamics"
(© 2023 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Hauser, Helwig and Alliez, Pierre

Item: 3D Generative Model Latent Disentanglement via Local Eigenprojection
(© 2023 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Foti, Simone; Koo, Bongjin; Stoyanov, Danail; Clarkson, Matthew J.; Hauser, Helwig and Alliez, Pierre
Designing realistic digital humans is extremely complex. Most data-driven generative models used to simplify the creation of their underlying geometric shape do not offer control over the generation of local shape attributes. In this paper, we overcome this limitation by introducing a novel loss function grounded in spectral geometry and applicable to different neural-network-based generative models of 3D head and body meshes. Encouraging the latent variables of mesh variational autoencoders (VAEs) or generative adversarial networks (GANs) to follow the local eigenprojections of identity attributes, we improve latent disentanglement and properly decouple the attribute creation. Experimental results show that our local eigenprojection disentangled (LED) models not only offer improved disentanglement with respect to the state-of-the-art, but also maintain good generation capabilities with training times comparable to the vanilla implementations of the models. Our code and pre-trained models are available at .
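To make the core idea above concrete, here is a minimal sketch, assuming one plausible reading of the loss: a latent sub-vector assigned to an attribute region is encouraged to match the projection of that region's vertex displacements onto the leading eigenvectors of its restricted graph Laplacian. The toy graph, region, and function names are our own illustration, not the paper's exact formulation.

```python
# Hedged sketch: a local-eigenprojection-style loss (assumed formulation).
import numpy as np

def graph_laplacian(n, edges):
    """Combinatorial Laplacian L = D - A of an undirected vertex graph."""
    L = np.zeros((n, n))
    for a, b in edges:
        L[a, b] -= 1.0; L[b, a] -= 1.0
        L[a, a] += 1.0; L[b, b] += 1.0
    return L

def local_eigenprojection(disp, region, L, k):
    """Project a region's per-vertex displacements onto its first k eigenvectors."""
    Lr = L[np.ix_(region, region)]          # Laplacian restricted to the region
    _, vecs = np.linalg.eigh(Lr)            # eigenvectors, ascending eigenvalues
    return vecs[:, :k].T @ disp[region]     # (k, 3) spectral coefficients

def led_style_loss(z_part, disp, region, L, k):
    """Tie a latent sub-vector to the local eigenprojection of its region."""
    target = local_eigenprojection(disp, region, L, k).ravel()
    return float(np.mean((z_part - target) ** 2))

# toy usage: a 6-vertex path graph, a 4-vertex region, a 12-dim latent sub-vector
L = graph_laplacian(6, [(i, i + 1) for i in range(5)])
disp = np.random.default_rng(0).normal(size=(6, 3))
print(led_style_loss(np.zeros(12), disp, [1, 2, 3, 4], L, k=4))
```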
Item: MesoGAN: Generative Neural Reflectance Shells
(© 2023 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Diolatzis, Stavros; Novak, Jan; Rousselle, Fabrice; Granskog, Jonathan; Aittala, Miika; Ramamoorthi, Ravi; Drettakis, George; Hauser, Helwig and Alliez, Pierre
We introduce MesoGAN, a model for generative 3D neural textures. This new graphics primitive represents mesoscale appearance by combining the strengths of generative adversarial networks (StyleGAN) and volumetric neural field rendering. The primitive can be applied to surfaces as a neural reflectance shell: a thin volumetric layer above the surface with appearance parameters defined by a neural network. To construct the neural shell, we first generate a 2D feature texture using StyleGAN with carefully randomized Fourier features to support arbitrarily sized textures without repeating artefacts. We augment the 2D feature texture with a learned height feature, which aids the neural field renderer in producing volumetric parameters from the 2D texture. To facilitate filtering, and to enable end-to-end training within the memory constraints of current hardware, we utilize a hierarchical texturing approach and train our model on multi-scale synthetic datasets of 3D mesoscale structures. We propose one possible approach for conditioning MesoGAN on artistic parameters (e.g. fibre length, density of strands, lighting direction) and demonstrate and discuss its integration into physically based renderers.

Item: Unsupervised Template Warp Consistency for Implicit Surface Correspondences
(The Eurographics Association and John Wiley & Sons Ltd., 2023) Liu, Mengya; Chhatkuli, Ajad; Postels, Janis; Gool, Luc Van; Tombari, Federico; Myszkowski, Karol; Niessner, Matthias
Unsupervised template discovery via implicit representation in a category of shapes has recently shown strong performance. At the core, such methods deform input shapes to a common template space, which allows establishing correspondences as well as an implicit representation of the shapes. In this work, we investigate the inherent assumption that implicit neural field optimization naturally leads to consistently warped shapes, thus providing both good shape reconstruction and correspondences. Contrary to this convenient assumption, we observe that in practice this is not the case, resulting in sub-optimal point correspondences. To solve this problem, we revisit the warp design and, more importantly, introduce explicit constraints using unsupervised sparse point predictions, directly encouraging consistency of the warped shapes. We use the unsupervised sparse keypoints to further condition the deformation warp and enforce its consistency. Experiments on the dynamic non-rigid DFaust dataset and ShapeNet categories show that our problem identification and solution establish a new state of the art in unsupervised dense correspondences.

Item: Edge-Friend: Fast and Deterministic Catmull-Clark Subdivision Surfaces
(The Eurographics Association and John Wiley & Sons Ltd., 2023) Kuth, Bastian; Oberberger, Max; Chajdas, Matthäus; Meyer, Quirin; Bikker, Jacco; Gribble, Christiaan
We present edge-friend, a data structure for quad meshes with access to the neighborhood information required for Catmull-Clark subdivision surface refinement. Edge-friend enables efficient real-time subdivision surface rendering. In particular, the resulting algorithm is deterministic, does not require hardware support for atomic floating-point arithmetic, and is optimized for efficient rendering on GPUs. Edge-friend exploits the fact that after one subdivision step, two edges can be uniquely and implicitly assigned to each quad. Additionally, edge-friend is a compact data structure, adding little overhead. Our algorithm is simple to implement in a single compute shader kernel and requires minimal synchronization, which makes it particularly suited for asynchronous execution. We easily extend our kernel to support relevant Catmull-Clark subdivision surface features, including semi-smooth creases, boundaries, animation, and attribute interpolation. In case of topology changes, our data structure requires little preprocessing, making it amenable to a variety of applications, including real-time editing and animation. Our method can process and render billions of triangles per second on modern GPUs. For a sample mesh, our algorithm generates and renders 2.9 million triangles in 0.58 ms on an AMD Radeon RX 7900 XTX GPU.
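For reference, the sketch below spells out the classical Catmull-Clark refinement rules that edge-friend reorganizes for GPU execution: one new point per face, per edge, and per vertex of a closed quad mesh. The naive dictionary-based connectivity here is deliberately simple and is not the paper's data structure.

```python
# Hedged sketch: reference (CPU, unoptimized) Catmull-Clark rules for a closed
# quad mesh. Illustrative only; edge-friend encodes this connectivity implicitly.
import numpy as np
from collections import defaultdict

def catmull_clark_points(V, F):
    """V: (n, 3) positions, F: (m, 4) quad vertex indices (closed mesh)."""
    face_pts = V[F].mean(axis=1)                      # one new point per quad

    # collect the two faces incident to each undirected edge
    edge_faces = defaultdict(list)
    for fi, quad in enumerate(F):
        for k in range(4):
            e = tuple(sorted((quad[k], quad[(k + 1) % 4])))
            edge_faces[e].append(fi)

    # interior edge point: average of endpoints and the two adjacent face points
    edge_pts = {e: (V[e[0]] + V[e[1]] + face_pts[fs[0]] + face_pts[fs[1]]) / 4.0
                for e, fs in edge_faces.items()}

    # vertex update: (F_avg + 2*R_avg + (n-3)*P) / n, with n the vertex valence
    vert_faces, vert_edges = defaultdict(list), defaultdict(list)
    for e in edge_faces:
        for v in e:
            vert_edges[v].append(e)
    for fi, quad in enumerate(F):
        for v in quad:
            vert_faces[v].append(fi)
    new_V = np.empty_like(V)
    for v in range(len(V)):
        n = len(vert_edges[v])
        Favg = face_pts[vert_faces[v]].mean(axis=0)
        Ravg = np.mean([(V[a] + V[b]) / 2.0 for a, b in vert_edges[v]], axis=0)
        new_V[v] = (Favg + 2.0 * Ravg + (n - 3.0) * V[v]) / n
    return face_pts, edge_pts, new_V

# usage: one refinement step on a cube
V = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
F = np.array([[0,1,3,2],[4,6,7,5],[0,4,5,1],[2,3,7,6],[0,2,6,4],[1,5,7,3]])
fp, ep, nv = catmull_clark_points(V, F)
```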
Item: High-Performance Graphics 2023 CGF 42-8: Frontmatter
(The Eurographics Association and John Wiley & Sons Ltd., 2023) Bikker, Jacco; Gribble, Christiaan; Bikker, Jacco; Gribble, Christiaan

Item: Immersive Free-Viewpoint Panorama Rendering from Omnidirectional Stereo Video
(© 2023 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Mühlhausen, Moritz; Kappel, Moritz; Kassubeck, Marc; Wöhler, Leslie; Grogorick, Steve; Castillo, Susana; Eisemann, Martin; Magnor, Marcus; Hauser, Helwig and Alliez, Pierre
In this paper, we tackle the challenging problem of rendering real-world 360° panorama videos that support full 6 degrees-of-freedom (DoF) head motion from a prerecorded omnidirectional stereo (ODS) video. In contrast to recent approaches that create novel views for individual panorama frames, we introduce a video-specific, temporally-consistent multi-sphere image (MSI) scene representation. Given a conventional ODS video, we first extract information by estimating framewise descriptive feature maps. Then, we optimize the global MSI model using theory from recent research on neural radiance fields. Instead of a continuous scene function, the MSI representation stores colour and density information only for a discrete set of concentric spheres. To further improve the temporal consistency of our results, we apply an ancillary refinement step which optimizes the temporal coherence between successive video frames. Direct comparisons to recent baseline approaches show that our global MSI optimization yields superior performance in terms of visual quality. Our code and data will be made publicly available.

Item: EUROGRAPHICS 2023: CGF 42-2 Frontmatter
(The Eurographics Association and John Wiley & Sons Ltd., 2023) Myszkowski, Karol; Niessner, Matthias; Myszkowski, Karol; Niessner, Matthias

Item: Variational Pose Prediction with Dynamic Sample Selection from Sparse Tracking Signals
(The Eurographics Association and John Wiley & Sons Ltd., 2023) Milef, Nicholas; Sueda, Shinjiro; Kalantari, Nima Khademi; Myszkowski, Karol; Niessner, Matthias
We propose a learning-based approach for full-body pose reconstruction from extremely sparse upper-body tracking data obtained from a virtual reality (VR) device. We leverage a conditional variational autoencoder with gated recurrent units to synthesize plausible and temporally coherent motions from 4-point tracking (head, hands, and waist positions and orientations). To avoid synthesizing implausible poses, we propose a novel sample selection and interpolation strategy along with an anomaly detection algorithm. Specifically, we monitor the quality of our generated poses using the anomaly detection algorithm and smoothly transition to better samples when the quality falls below a statistically defined threshold. Moreover, we demonstrate that our sample selection and interpolation method can be used for other applications, such as target hitting and collision avoidance, where the generated motions should adhere to the constraints of the virtual environment. Our system is lightweight, operates in real time, and is able to produce temporally coherent and realistic motions.
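The sample-selection loop above can be sketched compactly. The snippet below assumes a per-frame quality score (e.g., a likelihood from the variational model), flags frames whose score falls a set number of standard deviations below the recent mean, and blends toward a better candidate. The window size, threshold, and blending weight are illustrative placeholders, not the paper's tuned values.

```python
# Hedged sketch: statistical anomaly gate + sample blending (stand-in models).
import numpy as np

class PoseSelector:
    """Flag statistically low-quality poses and blend toward a better candidate."""

    def __init__(self, window=120, z_thresh=2.0, blend=0.15):
        self.scores, self.window = [], window
        self.z_thresh, self.blend = z_thresh, blend

    def is_anomalous(self, score):
        self.scores.append(score)
        hist = self.scores[-self.window:]
        if len(hist) < 10:                   # not enough history yet
            return False
        mu, sd = np.mean(hist), np.std(hist) + 1e-8
        return (score - mu) / sd < -self.z_thresh

    def step(self, pose, score, candidates, cand_scores):
        if self.is_anomalous(score):
            best = candidates[int(np.argmax(cand_scores))]
            pose = (1 - self.blend) * pose + self.blend * best   # smooth transition
        return pose

# usage with stand-in poses and scores
sel, rng = PoseSelector(), np.random.default_rng(0)
pose = rng.normal(size=63)                   # e.g., flattened joint rotations
for _ in range(200):
    cands = [rng.normal(size=63) for _ in range(8)]
    pose = sel.step(pose, rng.normal(), cands, rng.normal(size=8))
```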
Item: Dissection Puzzles Composed of Multicolor Polyominoes
(The Eurographics Association and John Wiley & Sons Ltd., 2023) Kita, Naoki; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
Dissection puzzles leverage geometric dissections, wherein a set of puzzle pieces can be reassembled in various configurations to yield unique geometric figures. Mathematically, a dissection between two 2D polygons can always be established. Consequently, researchers and puzzle enthusiasts strive to design unique dissection puzzles using the fewest pieces feasible. In this study, we introduce novel dissection puzzles crafted with multi-colored polyominoes. Diverging from the traditional aim of establishing a geometric dissection between two 2D polygons with a minimal piece count, we seek to identify a common pool of polyomino pieces with colored faces that can be configured into multiple distinct shapes and appearances. Moreover, we offer a method to identify an optimized sequence for rearranging pieces from one form to another, thus minimizing the total relocation distance. This approach can guide users in puzzle assembly and lessen their physical exertion when manually reconfiguring pieces. It could potentially also decrease power consumption when pieces are reorganized using robotic assistance. We showcase the efficacy of our proposed approach through a wide range of shapes and appearances.

Item: Robust Distribution-aware Color Correction for Single-shot Images
(The Eurographics Association and John Wiley & Sons Ltd., 2023) Dhillon, Daljit Singh J.; Joshi, Parisha; Baron, Jessica; Patterson, Eric K.; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
Color correction for photographed images is an ill-posed problem. It is also a crucial initial step towards material acquisition for inverse rendering methods or pipelines. Several state-of-the-art methods rely on reducing color differences for imaged reference color chart blocks of known color values to devise or optimize their solution. In this paper, we first establish through simulations the limitation of this minimality criterion, which in principle results in overfitting. Next, we study and propose a few spatial distribution measures to augment the evaluation criteria. Thereafter, we propose a novel patch-based, white-point-centric approach that processes luminance and chrominance information separately to improve on the color matching task. We compare our method qualitatively with several state-of-the-art methods using our augmented evaluation criteria, along with quantitative examinations. Finally, we perform rigorous experiments and demonstrate results that clearly establish the benefits of our proposed method.
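A minimal sketch of a patch-based, white-point-centric pipeline in this spirit follows: normalize by the chart's white patch, then fit luminance (a 1D curve) and chrominance (a linear map on chroma residuals) separately by least squares. The luma weights and polynomial degree are assumptions for illustration, not the paper's exact model.

```python
# Hedged sketch: white-point normalization + separate luminance/chrominance fits.
import numpy as np

LUMA = np.array([0.2126, 0.7152, 0.0722])   # assumed luma weights (Rec. 709)

def fit_correction(measured, reference, white_idx=0):
    """measured, reference: (n_patches, 3) linear-RGB chart values."""
    wb = reference[white_idx] / measured[white_idx]    # white-point scaling
    m = measured * wb
    ym, yr = m @ LUMA, reference @ LUMA
    cm, cr = m - ym[:, None], reference - yr[:, None]  # chroma residuals
    g = np.polyfit(ym, yr, deg=2)                      # 1D luminance curve
    M, *_ = np.linalg.lstsq(cm, cr, rcond=None)        # linear chroma transform
    return wb, g, M

def apply_correction(img, wb, g, M):
    rgb = img.reshape(-1, 3) * wb
    y = rgb @ LUMA
    out = np.polyval(g, y)[:, None] + (rgb - y[:, None]) @ M
    return out.reshape(img.shape)

# usage with a hypothetical 24-patch chart
meas, ref = np.random.rand(24, 3), np.random.rand(24, 3)
wb, g, M = fit_correction(meas, ref)
corrected = apply_correction(np.random.rand(8, 8, 3), wb, g, M)
```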
Item: Subpixel Deblurring of Anti-Aliased Raster Clip-Art
(The Eurographics Association and John Wiley & Sons Ltd., 2023) Yang, Jinfan; Vining, Nicholas; Kheradmand, Shakiba; Carr, Nathan; Sigal, Leonid; Sheffer, Alla; Myszkowski, Karol; Niessner, Matthias
Artist-generated clip-art images typically consist of a small number of distinct, uniformly colored regions with clear boundaries. Legacy artist-created images are often stored in low-resolution (100×100 px or less) anti-aliased raster form. Compared to anti-aliasing-free rasterization, anti-aliasing blurs inter-region boundaries and obscures the artist's intended region topology and color palette; at the same time, it better preserves subpixel details. Recovering the underlying artist-intended images from their low-resolution anti-aliased rasterizations can facilitate resolution-independent rendering, lossless vectorization, and other image processing applications. Unfortunately, while human observers can mentally deblur these low-resolution images and reconstruct region topology, color, and subpixel details, existing algorithms applicable to this task fail to produce outputs consistent with human expectations when presented with such images. We recover these viewer-perceived blur-free images at subpixel resolution, producing outputs where each input pixel is replaced by four corresponding (sub)pixels. Performing this task requires computing the size of the output image color palette, generating the palette itself, and associating each pixel in the output with one of the colors in the palette. We obtain these desired output components by leveraging a combination of perceptual and domain priors, and real-world data. We use readily available data to train a network that predicts, for each anti-aliased image, a low-blur approximation of the blur-free double-resolution outputs we seek. The images obtained at this stage are perceptually closer to the desired outputs but typically still have hundreds of redundant, differently colored regions with fuzzy boundaries. We convert these low-blur intermediate images into blur-free outputs consistent with viewer expectations using a discrete partitioning procedure guided by the characteristic properties of clip-art images, observations about the anti-aliasing process, and human perception of anti-aliased clip-art. This step dramatically reduces the size of the output color palettes and the region counts, bringing them in line with viewer expectations and enabling the image processing applications we target. We demonstrate the utility of our method by using our outputs for a number of image processing tasks, and validate it via extensive comparisons to prior art. In our comparative study, participants preferred our deblurred outputs over those produced by the best-performing alternative by a ratio of 75 to 8.5.
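The final discretization step can be illustrated with a simplified stand-in: estimate a small palette and snap each output (sub)pixel to its nearest entry. Plain k-means below replaces the paper's perception- and prior-guided partitioning, so treat it as a sketch only.

```python
# Hedged sketch: palette extraction + nearest-color assignment (simplified stand-in).
import numpy as np

def kmeans_palette(pixels, k, iters=25, seed=0):
    """Plain Lloyd's k-means over RGB pixels; returns (palette, labels)."""
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), size=k, replace=False)].copy()
    for _ in range(iters):
        d = ((pixels[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = pixels[labels == j].mean(axis=0)
    return centers, labels

def snap_to_palette(img, k=6):
    """Replace every pixel with its nearest palette color."""
    h, w, _ = img.shape
    pix = img.reshape(-1, 3).astype(float)
    palette, labels = kmeans_palette(pix, k)
    return palette[labels].reshape(h, w, 3), palette

# usage on a random stand-in for an upsampled clip-art image
snapped, palette = snap_to_palette(np.random.rand(32, 32, 3), k=6)
```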
Item: 3D Object Tracking for Rough Models
(The Eurographics Association and John Wiley & Sons Ltd., 2023) Song, Xiuqiang; Xie, Weijian; Li, Jiachen; Wang, Nan; Zhong, Fan; Zhang, Guofeng; Qin, Xueying; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
Visual monocular 6D pose tracking methods for textureless or weakly-textured objects heavily rely on contour constraints established by a precise 3D model. However, precise models are not always available in reality, and rough models can potentially degrade tracking performance and impede the widespread usage of 3D object tracking. To address this new problem, we propose a novel tracking method that handles rough models. We reshape the rough contour through a probability map, which avoids explicitly processing the 3D rough model itself. We further emphasize the inner region information of the object, where points are sampled to provide color constraints. To sufficiently satisfy the assumption of small displacement between frames, the 2D translation of the object is pre-searched for a better initial pose. Finally, we combine constraints from both the contour and the inner region to optimize the object pose. Experimental results demonstrate that the proposed method achieves state-of-the-art performance on both roughly and precisely modeled objects. Particularly for highly rough models, the accuracy is significantly improved (40.4% vs. 16.9%).

Item: NEnv: Neural Environment Maps for Global Illumination
(The Eurographics Association and John Wiley & Sons Ltd., 2023) Rodriguez-Pardo, Carlos; Fabre, Javier; Garces, Elena; Lopez-Moreno, Jorge; Ritschel, Tobias; Weidlich, Andrea
Environment maps are commonly used to represent and compute far-field illumination in virtual scenes. However, they are expensive to evaluate and sample from, limiting their applicability to real-time rendering. Previous methods have focused on compression through spherical-domain approximations, or on learning priors for natural, day-light illumination. These hinder both accuracy and generality, and do not provide the probability information required for importance-sampling Monte Carlo integration. We propose NEnv, a deep-learning, fully-differentiable method capable of compressing and learning to sample from a single environment map. NEnv is composed of two different neural networks: a normalizing flow, able to map samples from uniform distributions to the probability density of the illumination while also providing their corresponding probabilities; and an implicit neural representation which compresses the environment map into an efficient differentiable function. The computation time of environment samples with NEnv is two orders of magnitude less than with traditional methods. NEnv makes no assumptions regarding the content (i.e. natural illumination), thus achieving higher generality than previous learning-based approaches. We share our implementation and a diverse dataset of trained neural environment maps, which can be easily integrated into existing rendering engines.
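For context, here is a minimal sketch of the traditional tabulated sampler that methods like NEnv are measured against: luminance-weighted, solid-angle-corrected importance sampling of a lat-long environment map by CDF inversion. This is the classical baseline, not the paper's flow-based sampler.

```python
# Hedged sketch: classical tabulated importance sampling of an environment map.
import numpy as np

def build_env_sampler(env):
    """env: (H, W, 3) lat-long radiance map."""
    H, W, _ = env.shape
    lum = env @ np.array([0.2126, 0.7152, 0.0722])     # scalar importance
    theta = (np.arange(H) + 0.5) / H * np.pi
    pdf = lum * np.sin(theta)[:, None]                 # texel solid-angle weighting
    pdf /= pdf.sum()
    cdf = np.cumsum(pdf.ravel())

    def sample(n, rng=np.random.default_rng(0)):
        idx = np.minimum(np.searchsorted(cdf, rng.random(n)), H * W - 1)
        y, x = np.unravel_index(idx, (H, W))
        return x, y, pdf.ravel()[idx]                  # texel coords + discrete prob
    return sample

# usage on a random map; a real renderer would convert (x, y) to a direction
sample = build_env_sampler(np.random.rand(64, 128, 3))
xs, ys, probs = sample(4)
```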
Item: Attention And Positional Encoding Are (Almost) All You Need For Shape Matching
(The Eurographics Association and John Wiley & Sons Ltd., 2023) Raganato, Alessandro; Pasi, Gabriella; Melzi, Simone; Memari, Pooran; Solomon, Justin
The fast development of novel approaches derived from the Transformer architecture has led to outstanding performance in different scenarios, from Natural Language Processing to Computer Vision. Recently, they have achieved impressive results even in the challenging task of non-rigid shape matching. However, little is known about the capability of the Transformer-encoder architecture for the shape matching task, and its performance remains largely unexplored. In this paper, we step back and investigate the contribution made by the Transformer-encoder architecture compared to its more recent alternatives, focusing on why and how it works on this specific task. Thanks to the versatility of our implementation, we can harness the bi-directional structure of the correspondence problem, making it more interpretable. Furthermore, we prove that positional encodings are essential for processing unordered point clouds. Through a comprehensive set of experiments, we find that attention and positional encoding are (almost) all you need for shape matching. The simple Transformer-encoder architecture, coupled with relative position encoding in the attention mechanism, is able to obtain strong improvements, reaching the current state-of-the-art.

Item: Factored Neural Representation for Scene Understanding
(The Eurographics Association and John Wiley & Sons Ltd., 2023) Wong, Yu-Shiang; Mitra, Niloy J.; Memari, Pooran; Solomon, Justin
A long-standing goal in scene understanding is to obtain interpretable and editable representations that can be directly constructed from a raw monocular RGB-D video, without requiring specialized hardware setups or priors. The problem is significantly more challenging in the presence of multiple moving and/or deforming objects. Traditional methods have approached the setup with a mix of simplifications, scene priors, pretrained templates, or known deformation models. The advent of neural representations, especially neural implicit representations and radiance fields, opens the possibility of end-to-end optimization to collectively capture geometry, appearance, and object motion. However, current approaches produce global scene encodings, assume multiview capture with limited or no motion in the scenes, and do not facilitate easy manipulation beyond novel view synthesis. In this work, we introduce a factored neural scene representation that can directly be learned from a monocular RGB-D video to produce object-level neural representations with an explicit encoding of object movement (e.g., rigid trajectory) and/or deformations (e.g., non-rigid movement). We evaluate our approach against a set of neural approaches on both synthetic and real data to demonstrate that the representation is efficient, interpretable, and editable (e.g., changing an object's trajectory). Code and data are available at: http://geometry.cs.ucl.ac.uk/projects/2023/factorednerf/.

Item: Mesh Draping: Parametrization-Free Neural Mesh Transfer
(Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Hertz, A.; Perel, O.; Giryes, R.; Sorkine-Hornung, O.; Cohen-Or, D.; Hauser, Helwig and Alliez, Pierre
Despite recent advances in geometric modelling, 3D mesh modelling still involves a considerable amount of manual labour by experts. In this paper, we introduce Mesh Draping: a neural method for transferring existing mesh structure from one shape to another. The method drapes the source mesh over the target geometry while seeking to preserve the carefully designed characteristics of the source mesh. At its core, our method deforms the source mesh using progressive positional encoding (PE). We show that by leveraging gradually increasing frequencies to guide the neural optimization, we are able to achieve stable and high-quality mesh transfer. Our approach is simple and requires little user guidance, compared to contemporary surface mapping techniques which rely on parametrization or careful manual tuning. Most importantly, Mesh Draping is a parametrization-free method, and thus applicable to a variety of target shape representations, including point clouds, polygon soups, and non-manifold meshes. We demonstrate that the transferred meshing remains faithful to the source mesh design characteristics, and at the same time fits the target geometry well.
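Progressive positional encoding is easy to sketch: Fourier features whose higher-frequency bands are blended in gradually as optimization proceeds. The cosine window and frequency schedule below follow the common coarse-to-fine recipe and are assumptions, not necessarily the paper's exact weighting.

```python
# Hedged sketch: coarse-to-fine (progressive) positional encoding.
import numpy as np

def progressive_pe(x, n_freqs, alpha):
    """x: (n, d) coordinates; alpha in [0, n_freqs] opens frequency bands in order."""
    out = [x]
    for j in range(n_freqs):
        w = np.clip(alpha - j, 0.0, 1.0)
        w = 0.5 * (1.0 - np.cos(np.pi * w))          # smooth 0-to-1 window per band
        out += [w * np.sin(2.0 ** j * np.pi * x),
                w * np.cos(2.0 ** j * np.pi * x)]
    return np.concatenate(out, axis=1)

# usage: ramp alpha from 0 to n_freqs over the optimization schedule
pts = np.random.default_rng(0).random((4, 3))
feats = progressive_pe(pts, n_freqs=6, alpha=2.5)    # low bands on, high bands off
```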
Item: Numerical Coarsening with Neural Shape Functions
(© 2023 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023) Ni, Ning; Xu, Qingyu; Li, Zhehao; Fu, Xiao-Ming; Liu, Ligang; Hauser, Helwig and Alliez, Pierre
We propose to use nonlinear shape functions, represented as neural networks, in numerical coarsening to achieve generalization capability as well as good accuracy. To overcome the challenge of generalizing to different simulation scenarios, especially nonlinear materials under large deformations, our key idea is to replace the linear mapping between coarse and fine meshes adopted in previous works with a nonlinear one represented by neural networks. However, directly applying an end-to-end neural representation leads to poor performance, due to the overly large parameter space and a failure to capture intrinsic geometric properties of shape functions. Our solution is to embed geometric constraints as prior knowledge in learning, which greatly improves training efficiency and inference robustness. With the trained neural shape functions, we can easily adopt numerical coarsening in the simulation of various hyperelastic models without any other preprocessing step. The experimental results demonstrate the efficiency and generalization capability of our method over previous works.

Item: Multi-Level Implicit Function for Detailed Human Reconstruction by Relaxing SMPL Constraints
(The Eurographics Association and John Wiley & Sons Ltd., 2023) Ma, Xikai; Zhao, Jieyu; Teng, Yiqing; Yao, Li; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
Aiming to enhance the rationality and robustness of single-view image-based human reconstruction and to acquire richer surface details, we propose a multi-level reconstruction framework based on implicit functions. This framework first utilizes the predicted SMPL (Skinned Multi-Person Linear) model as a prior to further predict consistent 2.5D sketches (depth map and normal map), and then obtains a coarse reconstruction result through an implicit-function fitting network (IF-Net). Subsequently, with a pixel-aligned feature extraction module and a fine IF-Net, the strong constraints imposed by SMPL are relaxed to add more surface details to the reconstruction result and remove noise. Finally, to address the trade-off between surface details and rationality under complex poses, we propose a novel fusion repair algorithm that reuses existing information. This algorithm compensates for the missing parts of the fine reconstruction results with the coarse reconstruction results, leading to a robust, rational, and richly detailed reconstruction. Our experiments prove the effectiveness of our method and demonstrate that it achieves the richest surface details while ensuring rationality. The project website can be found at https://github.com/MXKKK/2.5D-MLIF.

Item: Face Editing Using Part-Based Optimization of the Latent Space
(The Eurographics Association and John Wiley & Sons Ltd., 2023) Aliari, Mohammad Amin; Beauchamp, Andre; Popa, Tiberiu; Paquette, Eric; Myszkowski, Karol; Niessner, Matthias
We propose an approach for interactive 3D face editing based on deep generative models. Most current face modeling methods rely on linear methods and cannot express complex and non-linear deformations. In contrast to 3D morphable face models based on Principal Component Analysis (PCA), we introduce a novel architecture based on variational autoencoders. Our architecture has multiple encoders (one for each part of the face, such as the nose and mouth) which feed a single decoder. As a result, each sub-vector of the latent vector represents one part. We train our model with a novel loss function that further disentangles the space based on the different parts of the face. The output of the network is a whole 3D face. Hence, unlike part-based PCA methods, our model learns to merge the parts intrinsically and does not require an additional merging process. To achieve interactive face modeling, we optimize the latent variables given vertex positional constraints provided by a user. To avoid unwanted global changes elsewhere on the face, we only optimize the subset of the latent vector that corresponds to the part of the face being modified. Our editing optimization converges in less than a second. Our results show that the proposed approach supports a broader range of editing constraints and generates more realistic 3D faces.
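The editing loop described above (optimize only the latent sub-vector of the edited part, subject to user vertex constraints) can be sketched as follows; the decoder here is a stand-in module, not the paper's multi-encoder VAE.

```python
# Hedged sketch: part-restricted latent optimization against vertex constraints.
import torch

def edit_part(decoder, z, part_slice, vert_ids, targets, steps=200, lr=0.05):
    z = z.detach().clone()
    z_part = z[part_slice].clone().requires_grad_(True)   # only this part moves
    opt = torch.optim.Adam([z_part], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        z_full = z.clone()
        z_full[part_slice] = z_part                       # splice the edited part in
        verts = decoder(z_full)                           # (n_verts, 3)
        loss = ((verts[vert_ids] - targets) ** 2).mean()  # user vertex constraints
        loss.backward()
        opt.step()
    z[part_slice] = z_part.detach()
    return z

# usage with a stand-in decoder producing 100 vertices from a 64-dim latent
decoder = torch.nn.Sequential(torch.nn.Linear(64, 128), torch.nn.ReLU(),
                              torch.nn.Linear(128, 300))
wrap = lambda z: decoder(z).view(100, 3)
z0 = torch.zeros(64)
z1 = edit_part(wrap, z0, slice(16, 32), torch.tensor([0, 5]), torch.rand(2, 3))
```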