41-Issue 6
Browsing 41-Issue 6 by Issue Date
Now showing 1 - 20 of 32
Item Harmonics Virtual Lights: Fast Projection of Luminance Field on Spherical Harmonics for Efficient Rendering (© 2022 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022)
Mézières, Pierre; Desrichard, François; Vanderhaeghe, David; Paulin, Mathias; Hauser, Helwig and Alliez, Pierre
In this paper, we introduce harmonics virtual lights (HVL) to model indirect light sources for interactive global illumination of dynamic 3D scenes. Virtual point lights (VPL) are an efficient approach to define indirect light sources and to evaluate the resulting indirect lighting. Nonetheless, VPL suffer from disturbing artefacts, especially with high-frequency materials. Virtual spherical lights (VSL) avoid these artefacts by considering spheres instead of points, but estimate the lighting integral with Monte Carlo integration, which results in noise in the final image. We define HVL as an extension of VSL in a spherical harmonics (SH) framework, yielding a closed form of the lighting integral evaluation. We propose an efficient SH projection of the contribution of spherical lights that is faster than existing methods. The cost of computing the outgoing luminance grows with the number of SH bands, and is lower for materials with circularly symmetric lobes than in the general case. HVL can be used with either parametric or measured BRDFs without extra cost, and offer control over rendering time and image quality by decreasing or increasing the band limit used for SH projection. Our approach is particularly well suited to rendering medium-frequency one-bounce global illumination with arbitrary BRDFs at interactive frame rates.

Item SVBRDF Recovery from a Single Image with Highlights Using a Pre-trained Generative Adversarial Network (© 2022 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022)
Wen, Tao; Wang, Beibei; Zhang, Lei; Guo, Jie; Holzschuch, Nicolas; Hauser, Helwig and Alliez, Pierre
Spatially varying bi-directional reflectance distribution functions (SVBRDFs) are crucial for designers to incorporate new materials in virtual scenes, making them look more realistic. Reconstruction of SVBRDFs is a long-standing problem. Existing methods either rely on an extensive acquisition system or require huge datasets, which are non-trivial to acquire. We aim to recover SVBRDFs from a single image, without any dataset. A single image contains incomplete information about the SVBRDF, making the reconstruction task highly ill-posed. It is also difficult to separate the changes in colour caused by the material from those caused by the illumination without prior knowledge learned from a dataset. In this paper, we use an unsupervised generative adversarial network (GAN) to recover SVBRDF maps with a single image as input. To better separate the effects due to illumination from the effects due to the material, we add the hypothesis that the material is stationary and introduce a new loss function based on Fourier coefficients to enforce this stationarity. For efficiency, we train the network in two stages: we reuse a pre-trained model to initialize the SVBRDF maps and then fine-tune it on the input image. Our method generates high-quality SVBRDF maps from a single input photograph, and provides more vivid rendering results than previous work. The two-stage training boosts runtime performance, making it eight times faster than previous work.
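A minimal sketch of how a Fourier-based stationarity penalty could look (illustrative only; the paper's exact loss is not reproduced in this listing). The idea is that a stationary material should yield similar Fourier amplitude spectra in different crops of an estimated parameter map, so penalizing the difference between the spectra of two random crops pushes the optimization towards stationary explanations. The helper name stationarity_penalty and the crop size are hypothetical.

import numpy as np

def stationarity_penalty(param_map, crop=64, rng=None):
    # Toy penalty: compare the Fourier amplitude spectra of two random crops.
    # Assumes param_map is an (H, W) or (H, W, C) array with H, W > crop.
    rng = rng or np.random.default_rng(0)
    h, w = param_map.shape[:2]
    spectra = []
    for _ in range(2):
        y = rng.integers(0, h - crop)
        x = rng.integers(0, w - crop)
        patch = param_map[y:y + crop, x:x + crop]
        spectra.append(np.abs(np.fft.fft2(patch, axes=(0, 1))))
    return float(np.mean((spectra[0] - spectra[1]) ** 2))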
Item Issue Information (© 2022 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022)
Hauser, Helwig and Alliez, Pierre

Item Learning Human Viewpoint Preferences from Sparsely Annotated Models (© 2022 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022)
Hartwig, S.; Schelling, M.; Onzenoodt, C. v.; Vázquez, P.-P.; Hermosilla, P.; Ropinski, T.; Hauser, Helwig and Alliez, Pierre
View quality measures compute scores for given views and are used to determine an optimal view in viewpoint selection tasks. Unfortunately, despite the wide adoption of these measures, they are based on computational quantities, such as entropy, rather than on human preferences. To tailor viewpoint measures towards humans, view quality measures need to capture human viewpoint preferences. We therefore introduce a large-scale crowdsourced data set containing 58 annotated viewpoints for 3220 ModelNet40 models. Based on this data, we derive a neural view quality measure that abides by human preferences. We further demonstrate that this view quality measure generalizes not only to models unseen during training, but also to unseen model categories. We are thus able to predict view qualities for single images, and to directly predict human-preferred viewpoints for 3D models by exploiting point-based learning technology, without having to generate intermediate images or sample the view sphere. We detail our data collection procedure, describe the data analysis and model training, and evaluate the predictive quality of our trained viewpoint measure on unseen models and categories. To our knowledge, this is the first deep learning approach to predict a view quality measure based solely on human preferences.

Item RfX: A Design Study for the Interactive Exploration of a Random Forest to Enhance Testing Procedures for Electrical Engines (© 2022 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022)
Eirich, J.; Münch, M.; Jäckle, D.; Sedlmair, M.; Bonart, J.; Schreck, T.; Hauser, Helwig and Alliez, Pierre
Random Forests (RFs) are a machine learning (ML) technique widely used across industries. The interpretation of a given RF usually relies on the analysis of statistical values and is often only possible for data analytics experts. To make RFs accessible to experts with no data analytics background, we present RfX, a Visual Analytics (VA) system for the analysis of an RF's decision-making process. RfX allows users to interactively analyse the properties of a forest and to explore and compare multiple trees in an RF. Its users can thus identify relationships within an RF's feature subspace and detect hidden patterns in the model's underlying data. We contribute a design study in collaboration with an automotive company. A formative evaluation of RfX was carried out with two domain experts, and a summative evaluation in the form of a field study with five domain experts. In this context, analyses made with RfX revealed previously hidden patterns, such as increased eccentricities in an engine's rotor detected by observing secondary excitations of its bearings. Rules derived from analyses with the system led to a change in the company's testing procedures for electrical engines, which reduced testing time by 80% for over 30% of all components.
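Illustrative only: RfX itself is a visual analytics system, but the kind of per-tree inspection it supports can be prototyped on top of scikit-learn, as in the sketch below. The data set and the number of trees are placeholders; the point is simply that per-tree predictions expose where individual trees disagree with the ensemble, and that feature importances give a first view of the forest's feature subspace.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
forest = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# Per-tree predictions: where do individual trees disagree with the ensemble?
per_tree = np.stack([tree.predict(X) for tree in forest.estimators_])
disagreement = (per_tree != forest.predict(X)).mean(axis=0)
print("samples with the most inter-tree disagreement:", np.argsort(disagreement)[-5:])

# Global feature importances as a first summary of the forest's feature subspace.
print("most important features:", np.argsort(forest.feature_importances_)[-5:])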
Item Non-Isometric Shape Matching via Functional Maps on Landmark-Adapted Bases (© 2022 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022)
Panine, Mikhail; Kirgo, Maxime; Ovsjanikov, Maks; Hauser, Helwig and Alliez, Pierre
We propose a principled approach for non-isometric, landmark-preserving, non-rigid shape matching. Our method is based on the functional map framework, but rather than promoting isometries we focus on near-conformal maps that preserve landmarks exactly. We achieve this, first, by introducing a novel landmark-adapted basis using an intrinsic Dirichlet-Steklov eigenproblem. Second, we establish the functional decomposition of conformal maps expressed in this basis. Finally, we formulate a conformally invariant energy that promotes high-quality landmark-preserving maps, and show how it can be optimized via a variant of the recently proposed ZoomOut method, which we extend to our setting. Our method is descriptor-free, efficient and robust to significant mesh variability. We evaluate our approach on a range of benchmark datasets and demonstrate state-of-the-art performance on non-isometric benchmarks and near state-of-the-art performance on isometric ones.

Item State of the Art in Computational Mould Design (© 2022 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022)
Alderighi, T.; Malomo, L.; Auzinger, T.; Bickel, B.; Cignoni, P.; Pietroni, N.; Hauser, Helwig and Alliez, Pierre
Moulding refers to a set of manufacturing techniques in which a mould, usually a cavity or a solid frame, is used to shape a liquid or pliable material into an object of the desired shape. The popularity of moulding comes from its effectiveness, scalability and versatility in terms of employed materials. Its relevance as a fabrication process is demonstrated by the extensive literature covering different aspects of mould design, from material flow simulation to the automation of mould geometry design. In this state-of-the-art report, we provide an extensive review of automatic methods for the design of moulds, focusing on contributions from a geometric perspective. We classify existing mould design methods based on their computational approach and the nature of their target moulding process. We summarize the relationships between computational approaches and moulding techniques, highlighting their strengths and limitations. Finally, we discuss potential future research directions.

Item Quad-fisheye Image Stitching for Monoscopic Panorama Reconstruction (© 2022 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022)
Cheng, Haojie; Xu, Chunxiao; Wang, Jiajun; Zhao, Lingxiao; Hauser, Helwig and Alliez, Pierre
A monoscopic panorama displays omnidirectional content surrounding the viewer. An increasingly popular way to reconstruct a panorama is to stitch a collection of fisheye images. However, such non-planar views may result in problems such as distortions and boundary irregularities. In most cases, the computational expense of stitching non-planar images is also too high for real-time applications. In this paper, we propose a novel monoscopic panorama reconstruction pipeline that produces better quad-fisheye image stitching results for omnidirectional environment viewing. The main idea is to apply mesh deformation for image alignment. To optimize inter-lens parallax, the unwarped images are first cropped and reshuffled to facilitate the composition of the circular environment scene through a seamless ring-connection of the panorama borders. Several mesh constraints are then adopted to ensure high alignment accuracy. After alignment, the boundary of the result is rectified to be rectangular to prevent gapping artefacts. We further extend our approach to video stitching, adding a temporal smoothness model to prevent unexpected artefacts in the panoramic videos. To support interactive applications, our stitching algorithm is implemented in CUDA. The camera motion and average gradient per video frame are further computed to accelerate synchronous real-life panoramic scene reconstruction and visualization. Experimental results demonstrate that our method has advantages in terms of alignment accuracy, adaptability and image quality of the stitching result.
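The "unwarped images" in the abstract are fisheye inputs mapped to a panoramic parameterization before alignment. Below is a minimal sketch of that standard preprocessing step, not of the paper's pipeline: one fisheye image, assumed to follow an equidistant projection (r = f * theta) and to look along +Z with field of view fov, is resampled into an equirectangular strip by nearest-neighbour lookup. All parameter values are assumptions.

import numpy as np

def fisheye_to_equirect(img, fov=np.pi, out_w=1024, out_h=512):
    h, w = img.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    f = (w / 2.0) / (fov / 2.0)                      # equidistant model: r = f * theta
    lon = (np.arange(out_w) / out_w - 0.5) * 2.0 * np.pi
    lat = (0.5 - np.arange(out_h) / out_h) * np.pi
    lon, lat = np.meshgrid(lon, lat)                 # shape (out_h, out_w)
    # Direction vectors; +Z is the optical axis of the fisheye lens.
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    theta = np.arccos(np.clip(z, -1.0, 1.0))         # angle from the optical axis
    phi = np.arctan2(y, x)
    r = f * theta
    u = (cx + r * np.cos(phi)).astype(int)
    v = (cy + r * np.sin(phi)).astype(int)
    valid = (theta <= fov / 2.0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    out = np.zeros((out_h, out_w) + img.shape[2:], dtype=img.dtype)
    out[valid] = img[v[valid], u[valid]]             # nearest-neighbour resampling
    return out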
Item Erratum: Evaluating Data-type Heterogeneity in Interactive Visual Analyses with Parallel Axes (© 2022 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022)
Hauser, Helwig and Alliez, Pierre

Item Rigid Registration of Point Clouds Based on Partial Optimal Transport (© 2022 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022)
Qin, Hongxing; Zhang, Yucheng; Liu, Zhentao; Chen, Baoquan; Hauser, Helwig and Alliez, Pierre
For rigid point cloud registration, algorithms based on soft correspondences are more robust than the traditional ICP method and its variants. However, point clouds with severe outliers and missing data may lead to imprecise many-to-many correspondences and, consequently, to inaccurate registration. In this study, we propose a point cloud registration algorithm based on partial optimal transport with a hard marginal constraint. The hard marginal constraint provides an explicit parameter to adjust the ratio of points that should be accurately matched, and helps avoid incorrect many-to-many correspondences. Experiments show that the proposed method achieves state-of-the-art registration results on point clouds with a significant amount of outliers and missing points.

Item Error Analysis of Photometric Stereo with Near Quasi-Point Lights (© 2022 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022)
Chen, Q.; Ren, Y.; Zhao, Z.; Tao, W.; Zhao, H.; Hauser, Helwig and Alliez, Pierre
The shape recovery quality of photometric stereo is sensitive to complicated error factors under close-range lighting from quasi-point lights. However, the error performance of photometric stereo in this practical scenario is still poorly understood. This paper presents a comprehensive error analysis of photometric stereo with near quasi-point lights (NQPL-PS). Five main error factors are identified in this scenario and their corresponding analytical formulations are introduced. Statistical computation and experiments are used to validate the theoretical formulations and to inspect the relationship between normal inaccuracy and each type of discrepancy. In addition, the impact of multiple system parameters of an NQPL-PS configuration on the normal estimation error is also studied. To evaluate the relative importance of the various error factors, a probability-based evaluation criterion is proposed that focuses on the error performance over the state space and the error space, rather than on a simple comparison of normal inaccuracy values. The assessment shows that the non-uniformity of the illuminants and the calibration error in the light-source positions are the dominant factors among the five. This paper provides insights for the accuracy improvement and system design of NQPL-PS.
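For context, here is a minimal sketch of the classical Lambertian photometric-stereo baseline that such error analyses build on; it is not the paper's NQPL-PS error model. With distant, calibrated lights, the albedo-scaled normal is recovered per pixel by least squares. With near quasi-point lights, the intensities additionally depend on the light positions and distance falloff, which is where the analysed error factors enter. The light directions and albedo below are arbitrary test values.

import numpy as np

def estimate_normal(L, I):
    # Least-squares per-pixel solve of L @ (rho * n) = I for the scaled normal.
    g, *_ = np.linalg.lstsq(L, I, rcond=None)
    rho = np.linalg.norm(g)
    return g / rho, rho                      # unit normal and albedo

L = np.array([[0.0, 0.0, 1.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
L = L / np.linalg.norm(L, axis=1, keepdims=True)   # unit lighting directions
true_n = np.array([0.2, 0.1, 0.97])
true_n /= np.linalg.norm(true_n)
I = 0.8 * L @ true_n                         # ideal distant-light Lambertian intensities
n, rho = estimate_normal(L, I)
print(n, rho)
# With near quasi-point lights, I would also depend on per-light positions and
# distance falloff; errors in those quantities propagate into n and rho.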
Item NeRF-Tex: Neural Reflectance Field Textures (© 2022 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022)
Baatz, H.; Granskog, J.; Papas, M.; Rousselle, F.; Novák, J.; Hauser, Helwig and Alliez, Pierre
We investigate the use of neural fields for modelling diverse mesoscale structures, such as fur, fabric and grass. Instead of using classical graphics primitives to model the structure, we propose to employ a versatile volumetric primitive represented by a neural field (NeRF-Tex), which jointly models the geometry of the material and its response to lighting. The NeRF-Tex primitive can be instantiated over a base mesh to 'texture' it with the desired meso- and microscale appearance. We condition the reflectance field on user-defined parameters that control the appearance. A single NeRF texture thus captures an entire space of reflectance fields rather than one specific structure. This increases the gamut of appearances that can be modelled and provides a solution for combating repetitive texturing artifacts. We also demonstrate that NeRF textures naturally facilitate continuous level-of-detail rendering. Our approach unites the versatility and modelling power of neural networks with the artistic control needed for precise modelling of virtual scenes. While all our training data are currently synthetic, our work provides a recipe that can be extended to extract complex, hard-to-model appearances from real images.

Item Reconstructing Recognizable 3D Face Shapes based on 3D Morphable Models (© 2022 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022)
Jiang, Diqiong; Jin, Yiwei; Zhang, Fang-Lue; Lai, Yu-Kun; Deng, Risheng; Tong, Ruofeng; Tang, Min; Hauser, Helwig and Alliez, Pierre
Many recent works reconstruct distinctive 3D face shapes by aggregating shape parameters of the same identity and separating those of different people, based on parametric models (e.g. 3D morphable models, 3DMMs). However, despite the high accuracy of face recognition using these shape parameters, the visual discrimination of face shapes reconstructed from them remains unsatisfactory. Previous works have not answered the following research question: do discriminative shape parameters guarantee visual discrimination in the represented 3D face shapes? This paper analyses the relationship between shape parameters and reconstructed shape geometry, and proposes a novel shape identity-aware regularization (SIR) loss for shape parameters, aiming at increasing discriminability in both the shape parameter and shape geometry domains. Moreover, to cope with the lack of training data containing both landmark and identity annotations, we propose a network structure and an associated training strategy to leverage mixed data containing either identity or landmark labels. In addition, since face recognition accuracy does not imply that face shapes reconstructed from the shape parameters are recognizable, we propose the SIR metric to measure the discriminability of face shapes. We compare our method with existing methods in terms of reconstruction error, visual discriminability, and face recognition accuracy of the shape parameters, as well as the SIR metric. Experimental results show that our method outperforms the state-of-the-art methods. The code will be released.
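The SIR loss itself is not spelled out in this listing. As an illustration only, here is a generic pull/push regularizer on 3DMM shape parameters grouped by identity, in the spirit of "aggregating shape parameters of the same identity and separating those of different people"; it is not the paper's SIR formulation. The function name identity_regularizer and the margin value are hypothetical.

import numpy as np

def identity_regularizer(params, identities, margin=1.0):
    # params: (N, d) shape parameters; identities: (N,) integer identity labels.
    loss, pairs = 0.0, 0
    for i in range(len(params)):
        for j in range(i + 1, len(params)):
            d = np.linalg.norm(params[i] - params[j])
            if identities[i] == identities[j]:
                loss += d ** 2                        # pull same identity together
            else:
                loss += max(0.0, margin - d) ** 2     # push different identities apart
            pairs += 1
    return loss / max(pairs, 1)

params = np.random.default_rng(0).normal(size=(6, 8))
ids = np.array([0, 0, 1, 1, 2, 2])
print(identity_regularizer(params, ids))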
Item Image Representation on Curved Optimal Triangulation (© 2022 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022)
Xiao, Yanyang; Cao, Juan; Chen, Zhonggui; Hauser, Helwig and Alliez, Pierre
Image triangulation aims to generate an optimal partition with triangular elements to represent a given image. One bottleneck in ensuring approximation quality between the original image and a piecewise approximation over the triangulation is the inaccurate alignment of straight edges to curved features. In this paper, we propose a novel variational method called curved optimal triangulation, in which not all edges are straight segments: some may also be quadratic Bézier curves. The energy function is defined as the total approximation error determined by vertex locations, connectivity and the bending of edges. The gradient formulas of this function are derived explicitly in closed form so that the energy can be optimized efficiently. We test our method on several models to demonstrate its efficacy and its ability to preserve features. We also explore its applications in the automatic generation of stylized and low-poly images. With the same number of vertices, our curved optimal triangulation method generates more accurate and visually pleasing results than previous methods that use only straight segments.

Item Evocube: A Genetic Labelling Framework for Polycube-Maps (© 2022 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022)
Dumery, C.; Protais, F.; Mestrallet, S.; Bourcier, C.; Ledoux, F.; Hauser, Helwig and Alliez, Pierre
Polycube-maps are used as base-complexes in various fields of computational geometry, including the generation of regular all-hexahedral meshes free of internal singularities. However, the strict alignment constraints behind polycube-based methods make their computation challenging for CAD models used in numerical simulation via the finite element method (FEM). We propose a novel approach based on an evolutionary algorithm to robustly compute polycube-maps in this context. We address the labelling problem, which aims to precompute polycube alignment by assigning one of the base axes to each boundary face of the input. Previous research has described ways to initialize and improve a labelling via greedy local fixes. However, such algorithms lack robustness and often converge to inaccurate solutions for complex geometries. Our proposed framework alleviates this issue by embedding labelling operations in an evolutionary heuristic, defining fitness, crossover and mutation in the context of labelling optimization. We evaluate our method on a thousand smooth and CAD meshes, showing that Evocube converges to accurate labellings on a wide range of shapes. We also discuss the limitations of our method.
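As an illustration of the evolutionary framing (fitness, crossover and mutation over per-face axis labels), here is a generic genetic-algorithm loop; Evocube's actual operators are geometry-aware and considerably more involved. The toy fitness below simply rewards labels that match each face normal's dominant axis, and all names and constants are hypothetical.

import random

AXES = ("+X", "-X", "+Y", "-Y", "+Z", "-Z")

def fitness(labels, normals):
    # Toy score: reward labels whose axis matches the face normal's dominant axis.
    score = 0
    for lab, n in zip(labels, normals):
        axis = max(range(3), key=lambda k: abs(n[k]))
        sign = "+" if n[axis] >= 0 else "-"
        score += (lab == sign + "XYZ"[axis])
    return score

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(labels, rate=0.05):
    return [random.choice(AXES) if random.random() < rate else l for l in labels]

def evolve(normals, pop_size=30, generations=50):
    pop = [[random.choice(AXES) for _ in normals] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: fitness(ind, normals), reverse=True)
        parents = pop[: pop_size // 2]
        pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=lambda ind: fitness(ind, normals))

normals = [(0.9, 0.1, 0.2), (0.0, -1.0, 0.1), (0.2, 0.1, 0.95)]
print(evolve(normals))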
Item CVFont: Synthesizing Chinese Vector Fonts via Deep Layout Inferring (© 2022 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022)
Lian, Zhouhui; Gao, Yichen; Hauser, Helwig and Alliez, Pierre
Creating a high-quality Chinese vector font library that can be directly used in real applications is time-consuming and costly, since such a library typically consists of a large number of vector glyphs. To address this problem, we propose a data-driven system in which only a small number (about 10%) of Chinese glyphs need to be designed. Specifically, the system first automatically decomposes the input glyphs into vector components. Then, a layout prediction module based on deep neural networks learns the layout style of the input characters. Finally, appropriate components are selected to assemble the glyph of each unseen character based on the predicted layout, building a font library that can be used directly on computers and smart mobile devices. Experimental results demonstrate that our system synthesizes high-quality glyphs and significantly improves the production efficiency of Chinese vector fonts.

Item Event-based Dynamic Graph Drawing without the Agonizing Pain (© 2022 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022)
Arleo, A.; Miksch, S.; Archambault, D.; Hauser, Helwig and Alliez, Pierre
Temporal networks can naturally model real-world complex phenomena such as contact networks, information dissemination and physical proximity. However, their nodes and edges bear real-valued time coordinates, making it difficult to organize them into discrete timeslices without losing temporal information through projection. Event-based dynamic graph drawing rejects the notion of a timeslice and allows each node and edge to retain its own real-valued time coordinate. While existing work has demonstrated clear advantages of this approach, they come at a running-time cost. We investigate the problem of accelerating event-based layout to make it more competitive with existing layout techniques. In this paper, we describe the design, implementation and experimental evaluation of our algorithm, the first multi-level event-based graph layout algorithm. We consider three operators for coarsening and placement, inspired by Walshaw, GRIP and FM, which we couple with an event-based graph drawing algorithm. We also propose two extensions to the core algorithm. We perform two experiments: first, we compare variants of our algorithm to existing state-of-the-art dynamic graph layout approaches; second, we investigate the impact of each of the proposed extensions. Our algorithm proves to be competitive with existing approaches, and the proposed extensions achieve their design goals and open new research directions.

Item Transition Motion Synthesis for Object Interaction based on Learning Transition Strategies (© 2022 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022)
Hwang, Jaepyung; Park, Gangrae; Kwon, Taesoo; Ishii, Shin; Hauser, Helwig and Alliez, Pierre
In this study, we focus on developing a motion synthesis framework that generates a natural transition motion between two different behaviours for interacting with a moving object. Specifically, the proposed framework generates a transition motion that bridges from a locomotive behaviour to an object-interaction behaviour. The transition motion must adapt online to the spatio-temporal variation of the target object so as to connect the behaviours naturally. To this end, we propose a framework that combines a regression model and a transition motion planner. The neural-network-based regression model estimates a reference transition strategy that guides the reference pattern of the transition, adapted to the varying situation. The transition motion planner then reconstructs the transition motion from the reference pattern while considering dynamic constraints that avoid footskate, as well as interaction constraints. The proposed framework is validated on object-grasping motions and athletic motions in soccer, synthesizing various transition motions that adapt to the spatio-temporal variation of the object.
Item Fast Neural Representations for Direct Volume Rendering (© 2022 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022)
Weiss, S.; Hermüller, P.; Westermann, R.; Hauser, Helwig and Alliez, Pierre
Despite the potential of neural scene representations to effectively compress 3D scalar fields at high reconstruction quality, the computational complexity of the training and data reconstruction steps using scene representation networks limits their use in practical applications. In this paper, we analyse whether scene representation networks can be modified to reduce these limitations and whether such architectures can also be used for temporal reconstruction tasks. We propose a novel design of scene representation networks using GPU tensor cores to integrate the reconstruction seamlessly into on-chip raytracing kernels, and we compare the quality and performance of this network to alternative network- and non-network-based compression schemes. The results indicate competitive quality of our design at high compression rates, and significantly faster decoding times and lower memory consumption during data reconstruction. We investigate how density gradients can be computed using the network and show an extension where density, gradient and curvature are predicted jointly. As an alternative to spatial super-resolution approaches for time-varying fields, we propose a solution that builds upon latent-space interpolation to enable random-access reconstruction at arbitrary granularity. We summarize our findings in the form of an assessment of the strengths and limitations of scene representation networks for compression-domain volume rendering, and outline future research directions. Source code:

Item Gaussian Process for Radiance Functions on the $\mathbb{S}^2$ Sphere (© 2022 Eurographics - The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022)
Marques, R.; Bouville, C.; Bouatouch, K.; Hauser, Helwig and Alliez, Pierre
Efficient approximation of incident radiance functions from a set of samples is still an open problem in physically based rendering. Indeed, most of the computing power required to synthesize a photo-realistic image is devoted to collecting samples of the incident radiance function, which are necessary to provide an estimate of the rendering equation solution. Due to the large number of samples required to reach a high-quality estimate, this process is usually tedious and can take up to several days. In this paper, we focus on the problem of approximating incident radiance functions on the sphere. To this end, we resort to a Gaussian process (GP), a highly flexible function modelling tool that has received little attention in rendering. We make an extensive analysis of the application of GPs to incident radiance functions, addressing crucial issues such as robust hyperparameter learning and the selection of the covariance function that best suits incident radiance functions. Our analysis is both theoretical and experimental. Furthermore, it provides a seamless connection between the original spherical domain and the spectral domain, on which we build to derive a method for fast computation and rotation of spherical harmonics coefficients.
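A minimal sketch of the basic GP interpolation machinery on the sphere, assuming a squared-exponential covariance on the chordal distance between unit directions (a valid kernel inherited from 3D Euclidean space). The paper's contribution concerns which covariance functions and hyperparameters actually suit incident radiance, which this sketch does not address; the length scale, noise level and toy radiance field are arbitrary.

import numpy as np

def sph_to_vec(theta, phi):
    return np.stack([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)], axis=-1)

def kernel(A, B, length=0.5):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)   # squared chordal distance
    return np.exp(-d2 / (2.0 * length ** 2))

def gp_fit_predict(dirs_train, y_train, dirs_test, noise=1e-3):
    K = kernel(dirs_train, dirs_train) + noise * np.eye(len(dirs_train))
    Ks = kernel(dirs_test, dirs_train)
    alpha = np.linalg.solve(K, y_train)        # (K + sigma^2 I)^{-1} y via direct solve
    return Ks @ alpha                          # posterior mean at the query directions

# Toy usage: sparse radiance samples over the hemisphere, queried at new directions.
rng = np.random.default_rng(1)
theta, phi = rng.uniform(0, np.pi / 2, 64), rng.uniform(0, 2 * np.pi, 64)
dirs = sph_to_vec(theta, phi)
radiance = np.maximum(dirs[:, 2], 0.0)         # stand-in cosine-like radiance field
query = sph_to_vec(np.array([0.3, 1.0]), np.array([0.5, 2.0]))
print(gp_fit_predict(dirs, radiance, query))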