41-Issue 6
Browsing 41-Issue 6 by Title
Now showing 1 - 20 of 32
Item Computing Schematic Layouts for Spatial Hypergraphs on Concentric Circles and Grids (© 2022 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022) Bekos, M.A.; Dekker, D.J.C.; Frank, F.; Meulemans, W.; Rodgers, P.; Schulz, A.; Wessel, S.; Hauser, Helwig and Alliez, Pierre
Set systems can be visualized in various ways. An important distinction between techniques is whether the elements have a spatial location that is to be used for the visualization; for example, the elements may be cities on a map. Strictly adhering to such locations may severely limit the visualization and force overlaps, intersections and other forms of clutter. On the other hand, completely ignoring the spatial dimension omits information and may hide spatial patterns in the data. We study layouts for set systems (or hypergraphs) in which spatial locations are displaced onto concentric circles or a grid, to obtain schematic set visualizations. We investigate the tractability of the underlying algorithmic problems, adopting different optimization criteria (e.g. crossings or bends) for the layout structure, also known as the support of the hypergraph. Furthermore, we describe a simulated‐annealing approach to heuristically optimize a combination of such criteria. Using this method in computational experiments, we explore the trade‐offs and dependencies between criteria for computing high‐quality schematic set visualizations.

Item CVFont: Synthesizing Chinese Vector Fonts via Deep Layout Inferring (© 2022 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022) Lian, Zhouhui; Gao, Yichen; Hauser, Helwig and Alliez, Pierre
Creating a high‐quality Chinese vector font library that can be directly used in real applications is time‐consuming and costly, since the font library typically consists of large amounts of vector glyphs.
To address this problem, we propose a data‐driven system in which only a small number (about 10%) of Chinese glyphs need to be designed. Specifically, the system first automatically decomposes those input glyphs into vector components. Then, a layout prediction module based on deep neural networks is applied to learn the layout style of the input characters. Finally, proper components are selected to assemble the glyph of each unseen character based on the predicted layout, so as to build a font library that can be directly used on computers and smart mobile devices. Experimental results demonstrate that our system synthesizes high‐quality glyphs and significantly improves the production efficiency of Chinese vector fonts.

Item Delaunay Painting: Perceptual Image Colouring from Raster Contours with Gaps (© 2022 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022) Parakkat, Amal Dev; Memari, Pooran; Cani, Marie‐Paule; Hauser, Helwig and Alliez, Pierre
We introduce Delaunay Painting, a novel and easy‐to‐use method to flat‐colour contour sketches with gaps. Starting from a Delaunay triangulation of the input contours, triangles are iteratively filled with the appropriate colours, thanks to a dynamic update of flow values calculated from colour hints. An aesthetic finish is then achieved through energy minimisation of contour curves and further heuristics enforcing the appropriate sharp corners. For greater efficiency, the user can also make use of our colour diffusion framework, which automatically extends colouring to small internal regions such as those delimited by hatches. The resulting method robustly handles input contours with strong gaps. As an interactive tool, it minimizes user effort and supports any colouring strategy, as the result does not depend on the order of interactions.
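To make the triangulate-then-fill idea concrete, here is a minimal sketch, not the authors' implementation: it replaces the paper's flow-value scheme with a simple edge-length heuristic (short Delaunay edges between dense contour samples act as barriers; long gap-spanning edges let colour through), and the `wall_len` threshold is a made-up parameter.

```python
import numpy as np
from scipy.spatial import Delaunay
from collections import deque

def fill_colours(points, seeds, wall_len=2.0):
    """Spread colours over a Delaunay triangulation of contour samples.

    points   : (n, 2) array of contour sample positions
    seeds    : dict mapping a triangle index to a colour label
    wall_len : edges shorter than this act as contour barriers
               (a crude stand-in for the paper's flow values)
    """
    tri = Delaunay(points)
    colours = {}
    queue = deque(seeds.items())
    while queue:
        t, c = queue.popleft()
        if t in colours:
            continue
        colours[t] = c
        for opp, nb in enumerate(tri.neighbors[t]):
            if nb == -1 or nb in colours:
                continue
            # the shared edge is the edge of triangle t opposite vertex `opp`
            a, b = [v for i, v in enumerate(tri.simplices[t]) if i != opp]
            if np.linalg.norm(points[a] - points[b]) > wall_len:
                queue.append((nb, c))
    return tri, colours
```

On a convex quad, the two triangles share one diagonal; whether a seed colour crosses it depends only on the threshold, illustrating how barriers gate the fill.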
We also provide an automated version of the colouring strategy for quick segmentation of contour images, which we illustrate with applications to medical imaging and sketch segmentation.

Item Erratum: Evaluating Data‐type Heterogeneity in Interactive Visual Analyses with Parallel Axes (© 2022 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022) Hauser, Helwig and Alliez, Pierre

Item Error Analysis of Photometric Stereo with Near Quasi‐Point Lights (© 2022 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022) Chen, Q.; Ren, Y.; Zhao, Z.; Tao, W.; Zhao, H.; Hauser, Helwig and Alliez, Pierre
The shape recovery quality of photometric stereo is sensitive to complicated error factors under the close‐range lighting of quasi‐point lights. However, the error performance of photometric stereo under this practical scenario is still obscure. This paper presents a comprehensive error analysis of photometric stereo with near quasi‐point lights (NQPL‐PS). Five main error factors are identified under this scenario and their corresponding analytical formulations are introduced. Statistical computation and experiments are used to validate the theoretical formulations and to inspect the relationships between normal inaccuracies and each type of discrepancy. In addition, the impacts of multiple system parameters of an NQPL‐PS configuration on the normal estimation error are also studied. To evaluate the relative importance of the various error factors, a probability‐based evaluation criterion is proposed, which focuses on the error performance over the state space and the error space, rather than a simple comparison of normal inaccuracy values. The assessment results show that the non‐uniformity of the illuminants and the calibration error in the positions of the light sources are the dominant factors among the five.
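For context on the model whose error behaviour is being analysed: classical photometric stereo assumes distant lights and a Lambertian surface, and recovers the normal at each pixel by least squares; the near quasi-point-light setting studied here perturbs exactly this model. A textbook sketch (not the paper's formulation):

```python
import numpy as np

def photometric_stereo(L, I):
    """Classical Lambertian photometric stereo with distant lights.

    L : (k, 3) unit lighting directions (k >= 3)
    I : (k,) observed intensities at one pixel
    Returns the unit surface normal and the albedo.
    """
    # Solve L @ g ~= I for g = albedo * normal in the least-squares sense
    g, *_ = np.linalg.lstsq(L, I, rcond=None)
    albedo = np.linalg.norm(g)
    return g / albedo, albedo
```

Any violation of the distant-light assumption (non-uniform illuminants, light-position calibration error, and so on) biases `I` and hence the recovered normal, which is what the paper's analytical error formulations quantify.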
This paper provides insights for the accuracy improvement and system design of NQPL‐PS.

Item Event‐based Dynamic Graph Drawing without the Agonizing Pain (© 2022 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022) Arleo, A.; Miksch, S.; Archambault, D.; Hauser, Helwig and Alliez, Pierre
Temporal networks can naturally model real‐world complex phenomena such as contact networks, information dissemination and physical proximity. However, nodes and edges bear real‐time coordinates, making it difficult to organize them into discrete timeslices without a loss of temporal information due to projection. Event‐based dynamic graph drawing rejects the notion of a timeslice and allows each node and edge to retain its own real‐valued time coordinate. While existing work has demonstrated clear advantages for this approach, these advantages come at a running‐time cost. We investigate the problem of accelerating event‐based layout to make it more competitive with existing layout techniques. In this paper, we describe the design, implementation and experimental evaluation of the first multi‐level event‐based graph layout algorithm. We consider three operators for coarsening and placement, inspired by Walshaw, GRIP and FM, which we couple with an event‐based graph drawing algorithm. We also propose two extensions to the core algorithm. We perform two experiments: first, we compare variants of our algorithm to existing state‐of‐the‐art dynamic graph layout approaches; second, we investigate the impact of each of the proposed algorithm extensions.
Our algorithm proves to be competitive with existing approaches, and the proposed extensions achieve their design goals, opening new research directions.

Item Evocube: A Genetic Labelling Framework for Polycube‐Maps (© 2022 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022) Dumery, C.; Protais, F.; Mestrallet, S.; Bourcier, C.; Ledoux, F.; Hauser, Helwig and Alliez, Pierre
Polycube‐maps are used as base‐complexes in various fields of computational geometry, including the generation of regular all‐hexahedral meshes free of internal singularities. However, the strict alignment constraints behind polycube‐based methods make their computation challenging for CAD models used in numerical simulation via the finite element method (FEM). We propose a novel approach based on an evolutionary algorithm to robustly compute polycube‐maps in this context. We address the labelling problem, which aims to precompute polycube alignment by assigning one of the base axes to each boundary face of the input. Previous research has described ways to initialize and improve a labelling via greedy local fixes. However, such algorithms lack robustness and often converge to inaccurate solutions for complex geometries. Our proposed framework alleviates this issue by embedding labelling operations in an evolutionary heuristic, defining fitness, crossover and mutation in the context of labelling optimization. We evaluate our method on a thousand smooth and CAD meshes, showing that Evocube converges to accurate labellings on a wide range of shapes.
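The fitness/crossover/mutation loop over labellings can be illustrated with a toy sketch. This is not Evocube: the fitness function, population sizes and operators below are invented placeholders (the real framework scores labellings with geometric criteria and uses labelling-specific operators); it only shows the shape of an evolutionary search over per-face axis labels.

```python
import numpy as np

rng = np.random.default_rng(1)
AXES = 6                                   # labels: +X, -X, +Y, -Y, +Z, -Z

def evolve(fitness, n_faces, pop=20, gens=40, mut=0.1):
    """Toy evolutionary loop over per-face axis labellings.

    fitness : scores a labelling (higher is better) -- placeholder here
    Returns the best labelling found.
    """
    P = rng.integers(0, AXES, (pop, n_faces))          # random population
    for _ in range(gens):
        scores = np.array([fitness(ind) for ind in P])
        parents = P[np.argsort(scores)[-pop // 2:]]    # selection (elitist)
        cut = rng.integers(1, n_faces, pop // 2)
        kids = np.array([np.concatenate([a[:c], b[c:]])  # one-point crossover
                         for a, b, c in zip(parents, rng.permutation(parents), cut)])
        flip = rng.random(kids.shape) < mut            # mutation
        kids[flip] = rng.integers(0, AXES, flip.sum())
        P = np.vstack([parents, kids])
    return max(P, key=fitness)
```

With a trivial fitness such as "count faces labelled +X", the loop steadily concentrates the population on high-scoring labellings, which is the mechanism the abstract relies on to escape the local minima of greedy fixes.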
The limitations of our method are also discussed thoroughly.

Item Fast Neural Representations for Direct Volume Rendering (© 2022 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022) Weiss, S.; Hermüller, P.; Westermann, R.; Hauser, Helwig and Alliez, Pierre
Despite the potential of neural scene representations to effectively compress 3D scalar fields at high reconstruction quality, the computational complexity of the training and data reconstruction steps using scene representation networks limits their use in practical applications. In this paper, we analyse whether scene representation networks can be modified to reduce these limitations, and whether such architectures can also be used for temporal reconstruction tasks. We propose a novel design of scene representation networks using GPU tensor cores to integrate the reconstruction seamlessly into on‐chip ray‐tracing kernels, and compare the quality and performance of this network to alternative network‐ and non‐network‐based compression schemes. The results indicate competitive quality of our design at high compression rates, together with significantly faster decoding times and lower memory consumption during data reconstruction. We investigate how density gradients can be computed using the network, and show an extension where density, gradient and curvature are predicted jointly. As an alternative to spatial super‐resolution approaches for time‐varying fields, we propose a solution that builds upon latent‐space interpolation to enable random‐access reconstruction at arbitrary granularity. We summarize our findings in the form of an assessment of the strengths and limitations of scene representation networks for compression‐domain volume rendering, and outline future research directions.
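To see why such a network acts as a compressed scalar field, consider a generic coordinate MLP that maps a 3D position to a density. The sketch below is unrelated to the paper's source code and tensor-core design; the layer sizes are arbitrary, and the weights are random rather than trained. It only illustrates the decoding path and the parameter-count versus voxel-count argument.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mlp(layers):
    """Random-initialised fully connected network; in practice the weights
    would be trained to reproduce one specific scalar field."""
    return [(rng.standard_normal((m, n)) / np.sqrt(m), np.zeros(n))
            for m, n in zip(layers[:-1], layers[1:])]

def decode(params, pts):
    """Evaluate the coordinate network at an (n, 3) array of positions."""
    h = pts
    for i, (W, b) in enumerate(params):
        h = h @ W + b
        if i < len(params) - 1:
            h = np.maximum(h, 0.0)       # ReLU on hidden layers
    return h[:, 0]

params = make_mlp([3, 32, 32, 1])        # 3D position -> scalar density
n_params = sum(W.size + b.size for W, b in params)

# decode a 32^3 grid of sample positions
g = np.linspace(0.0, 1.0, 32)
pts = np.stack(np.meshgrid(g, g, g, indexing="ij"), -1).reshape(-1, 3)
density = decode(params, pts)

print(density.shape, 32**3 / n_params)   # voxel count vs parameter count
```

The network can be queried at arbitrary positions, which is what makes random-access reconstruction possible; the cost per sample is a handful of matrix products, which is the step the paper moves onto tensor cores.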
Source code:

Item Gaussian Process for Radiance Functions on the $\mathbb{S}^2$ Sphere (© 2022 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022) Marques, R.; Bouville, C.; Bouatouch, K.; Hauser, Helwig and Alliez, Pierre
Efficient approximation of incident radiance functions from a set of samples is still an open problem in physically based rendering. Indeed, most of the computing power required to synthesize a photo‐realistic image is devoted to collecting samples of the incident radiance function, which are necessary to provide an estimate of the rendering‐equation solution. Due to the large number of samples required to reach a high‐quality estimate, this process is usually tedious and can take up to several days. In this paper, we focus on the problem of approximating incident radiance functions on the sphere. To this end, we resort to a Gaussian process (GP), a highly flexible function‐modelling tool which has received little attention in rendering. We make an extensive analysis of the application of GPs to incident radiance functions, addressing crucial issues such as robust hyperparameter learning, or selecting the covariance function that best suits incident radiance functions. Our analysis is both theoretical and experimental.
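As background on the machinery involved, here is a generic GP regression sketch over directions on the unit sphere. The squared-exponential covariance in the great-circle angle, the length scale `ell` and the noise term `sigma_n` are illustrative choices only; selecting a covariance actually suited to incident radiance is precisely the question the paper studies.

```python
import numpy as np

def gp_interpolate(dirs, vals, query, ell=0.5, sigma_n=1e-3):
    """Gaussian-process regression of a function sampled on the unit sphere.

    dirs  : (n, 3) unit sample directions
    vals  : (n,) observed values (e.g. radiance samples)
    query : (m, 3) unit directions at which to predict
    Returns the posterior mean at the query directions.
    """
    def k(A, B):
        ang = np.arccos(np.clip(A @ B.T, -1.0, 1.0))   # pairwise angles
        return np.exp(-0.5 * (ang / ell) ** 2)          # SE covariance

    K = k(dirs, dirs) + sigma_n * np.eye(len(dirs))     # noisy Gram matrix
    alpha = np.linalg.solve(K, vals)                    # K^-1 y
    return k(query, dirs) @ alpha                       # posterior mean
```

The posterior mean interpolates the samples smoothly between observed directions, which is what makes a GP attractive for reconstructing radiance from sparse, expensive samples.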
Furthermore, it provides a seamless connection between the original spherical domain and the spectral domain, on which we build to derive a method for fast computation and rotation of spherical harmonics coefficients.

Item Harmonics Virtual Lights: Fast Projection of Luminance Field on Spherical Harmonics for Efficient Rendering (© 2022 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022) Mézières, Pierre; Desrichard, François; Vanderhaeghe, David; Paulin, Mathias; Hauser, Helwig and Alliez, Pierre
In this paper, we introduce harmonics virtual lights (HVL) to model indirect light sources for interactive global illumination of dynamic 3D scenes. Virtual point lights (VPL) are an efficient approach to defining indirect light sources and evaluating the resulting indirect lighting. Nonetheless, VPL suffer from disturbing artefacts, especially with high‐frequency materials. Virtual spherical lights (VSL) avoid these artefacts by considering spheres instead of points, but estimate the lighting integral using Monte Carlo integration, which results in noise in the final image. We define HVL as an extension of VSL in a spherical harmonics (SH) framework, with a closed form of the lighting‐integral evaluation. We propose an efficient SH projection of the contribution of spherical lights that is faster than existing methods. The cost of computing the outgoing luminance grows with the number of SH bands, and is lower for materials with circular symmetric lobes than in the general case. HVL can be used with either parametric or measured BRDFs without extra cost, and offer control over rendering time and image quality by decreasing or increasing the band limit used for SH projection.
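The special role of circular symmetric lobes can be illustrated with zonal harmonics: a circularly symmetric spherical function reduces to a 1D expansion in Legendre polynomials, one coefficient per SH band. The sketch below is a generic projection by Gauss-Legendre quadrature, not the paper's projection method, and the band count is arbitrary.

```python
import numpy as np
from numpy.polynomial import legendre

def zonal_coeffs(f, bands, n_quad=64):
    """Project a circularly symmetric spherical function f(cos_theta)
    onto Legendre polynomials (the zonal harmonics), one per band."""
    x, w = legendre.leggauss(n_quad)            # Gauss-Legendre nodes/weights
    coeffs = []
    for l in range(bands):
        Pl = legendre.Legendre.basis(l)(x)
        coeffs.append((2 * l + 1) / 2 * np.sum(w * f(x) * Pl))
    return np.array(coeffs)

def zonal_eval(coeffs, x):
    """Reconstruct the band-limited function at cos_theta = x."""
    return sum(c * legendre.Legendre.basis(l)(x) for l, c in enumerate(coeffs))
```

Truncating `bands` trades accuracy for speed, which mirrors the band-limit control over rendering time and image quality described in the abstract.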
Our approach is particularly well suited to rendering medium‐frequency one‐bounce global illumination with arbitrary BRDFs at interactive frame rates.

Item Image Representation on Curved Optimal Triangulation (© 2022 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022) Xiao, Yanyang; Cao, Juan; Chen, Zhonggui; Hauser, Helwig and Alliez, Pierre
Image triangulation aims to generate an optimal partition with triangular elements to represent a given image. One bottleneck in ensuring approximation quality between the original image and a piecewise approximation over the triangulation is the inaccurate alignment of straight edges to curved features. In this paper, we propose a novel variational method called curved optimal triangulation, in which not all edges are straight segments; some may also be quadratic Bézier curves. The energy function is defined as the total approximation error determined by vertex locations, connectivity and the bending of edges. The gradient formulas of this function are derived explicitly in closed form, so that the energy can be optimized efficiently. We test our method on several models to demonstrate its efficacy and its ability to preserve features. We also explore its applications in the automatic generation of stylized and low‐poly images.
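The curved edges mentioned above are ordinary quadratic Bézier curves, B(t) = (1-t)²p₀ + 2t(1-t)c + t²p₁: the two endpoints are triangulation vertices and the control point c determines the bending. A minimal evaluation sketch (the specific points below are arbitrary examples, not from the paper):

```python
import numpy as np

def quad_bezier(p0, c, p1, t):
    """Evaluate B(t) = (1-t)^2 p0 + 2 t (1-t) c + t^2 p1 at parameters t."""
    t = np.asarray(t)[:, None]
    return (1 - t) ** 2 * p0 + 2 * t * (1 - t) * c + t ** 2 * p1

# A curved triangle edge from (0, 0) to (1, 0), bent by its control point
p0, p1 = np.array([0.0, 0.0]), np.array([1.0, 0.0])
c = np.array([0.5, 0.4])              # moving c bends the edge toward a feature
t = np.linspace(0.0, 1.0, 5)
pts = quad_bezier(p0, c, p1, t)       # midpoint lands at (0.5, 0.2)
```

Because B(t) is a polynomial in the three control points, the approximation error over a curved triangle is differentiable in them, which is what makes the closed-form gradients of the energy possible.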
With the same number of vertices, our curved optimal triangulation method generates more accurate and visually pleasing results than previous methods that only use straight segments.

Item Issue Information (© 2022 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022) Hauser, Helwig and Alliez, Pierre

Item JOKR: Joint Keypoint Representation for Unsupervised Video Retargeting (© 2022 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022) Mokady, R.; Tzaban, R.; Benaim, S.; Bermano, A.H.; Cohen‐Or, D.; Hauser, Helwig and Alliez, Pierre
In unsupervised video retargeting, content is transferred from one video to another while preserving the original appearance and style, without any additional annotations. While this challenge has seen substantial advancements through the use of deep neural networks, current methods struggle when the source and target videos differ in limb lengths or other body proportions. In this work, we consider this task for objects of different shapes and appearances that share similar skeleton connectivity and depict similar motion. We introduce JOKR, a JOint Keypoint Representation that captures the geometry common to both videos while being disentangled from their unique styles. Our model first extracts unsupervised keypoints from the given videos. From this representation, two decoders reconstruct geometry and appearance, one for each of the input sequences. By employing an affine‐invariant domain confusion term over the keypoint bottleneck, we enforce the unsupervised keypoint representations of both videos to be indistinguishable. This encourages the aforementioned disentanglement between motion and appearance, mapping similar poses from both domains to the same representation. This makes it possible to generate a sequence with the appearance and style of one video, but the content of the other.
We demonstrate the applicability of our method on challenging video pairs, comparing against state‐of‐the‐art methods. Furthermore, we demonstrate that this geometry‐driven representation enables intuitive control, such as temporal coherence and manual pose editing. Videos can be viewed in the supplemental HTML.

Item Learning Human Viewpoint Preferences from Sparsely Annotated Models (© 2022 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022) Hartwig, S.; Schelling, M.; Onzenoodt, C. v.; Vázquez, P.‐P.; Hermosilla, P.; Ropinski, T.; Hauser, Helwig and Alliez, Pierre
View quality measures compute scores for given views and are used to determine an optimal view in viewpoint selection tasks. Unfortunately, despite the wide adoption of these measures, they are based on computational quantities, such as entropy, rather than on human preferences. To instead tailor viewpoint measures towards humans, view quality measures need to be able to capture human viewpoint preferences. We therefore introduce a large‐scale crowdsourced data set, which contains 58 annotated viewpoints for 3220 ModelNet40 models. Based on this data, we derive a neural view quality measure abiding by human preferences. We further demonstrate that this view quality measure generalizes not only to models unseen during training, but also to unseen model categories. We are thus able to predict view qualities for single images, and to directly predict human‐preferred viewpoints for 3D models by exploiting point‐based learning technology, without requiring the generation of intermediate images or sampling of the view sphere. We detail our data collection procedure, describe the data analysis and model training, and evaluate the predictive quality of our trained viewpoint measure on unseen models and categories.
To our knowledge, this is the first deep learning approach to predicting a view quality measure based solely on human preferences.

Item Narrow‐Band Screen‐Space Fluid Rendering (© 2022 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022) Oliveira, Felipe; Paiva, Afonso; Hauser, Helwig and Alliez, Pierre
This paper presents a novel and practical screen‐space liquid rendering method for particle‐based fluids in real‐time applications. Our rendering pipeline performs particle filtering only in a narrow band around the boundary particles to provide a smooth liquid surface with volumetric rendering effects. We also introduce a novel boundary detection method that allows the user to select particle layers from the liquid interface. The proposed approach is simple, fast, memory‐efficient and easy to code, and it can be adapted straightforwardly into standard screen‐space rendering methods, even on GPU architectures. We show through a set of experiments how prior screen‐space techniques are improved by our approach.

Item NeRF‐Tex: Neural Reflectance Field Textures (© 2022 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022) Baatz, H.; Granskog, J.; Papas, M.; Rousselle, F.; Novák, J.; Hauser, Helwig and Alliez, Pierre
We investigate the use of neural fields for modelling diverse mesoscale structures, such as fur, fabric and grass. Instead of using classical graphics primitives to model the structure, we propose to employ a versatile volumetric primitive represented by a neural field (NeRF‐Tex), which jointly models the geometry of the material and its response to lighting. The NeRF‐Tex primitive can be instantiated over a base mesh to ‘texture’ it with the desired meso‐ and microscale appearance. We condition the reflectance field on user‐defined parameters that control the appearance.
A single NeRF texture thus captures an entire space of reflectance fields rather than one specific structure. This increases the gamut of appearances that can be modelled and provides a solution for combating repetitive texturing artefacts. We also demonstrate that NeRF textures naturally facilitate continuous level‐of‐detail rendering. Our approach unites the versatility and modelling power of neural networks with the artistic control needed for the precise modelling of virtual scenes. While all our training data are currently synthetic, our work provides a recipe that can be further extended to extract complex, hard‐to‐model appearances from real images.

Item Non‐Isometric Shape Matching via Functional Maps on Landmark‐Adapted Bases (© 2022 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022) Panine, Mikhail; Kirgo, Maxime; Ovsjanikov, Maks; Hauser, Helwig and Alliez, Pierre
We propose a principled approach for non‐isometric, landmark‐preserving, non‐rigid shape matching. Our method is based on the functional map framework, but rather than promoting isometries we focus on near‐conformal maps that preserve landmarks exactly. We achieve this, first, by introducing a novel landmark‐adapted basis using an intrinsic Dirichlet‐Steklov eigenproblem. Second, we establish the functional decomposition of conformal maps expressed in this basis. Finally, we formulate a conformally invariant energy that promotes high‐quality landmark‐preserving maps, and show how it can be optimized via a variant of the recently proposed ZoomOut method, which we extend to our setting. Our method is descriptor‐free, efficient and robust to significant mesh variability.
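As background on the functional map framework the abstract builds on: a functional map is a small matrix C carrying coefficients of functions expressed in a basis on one shape to the corresponding coefficients on another. Given matching descriptor coefficients A and B, C is classically recovered by least squares. This is a textbook sketch of that step, not the authors' Dirichlet-Steklov construction:

```python
import numpy as np

def fit_functional_map(A, B):
    """Least-squares functional map: find C with C @ A ~= B.

    A : (k1, d) descriptor coefficients in the source basis
    B : (k2, d) the same descriptors in the target basis
    Returns the (k2, k1) matrix mapping source coefficients to target ones.
    """
    # Solve A^T C^T ~= B^T column by column in the least-squares sense
    C_T, *_ = np.linalg.lstsq(A.T, B.T, rcond=None)
    return C_T.T
```

The paper's contribution is, in effect, replacing the usual Laplacian eigenbasis behind A and B with a landmark-adapted basis in which conformal, landmark-preserving maps have a simple structure, so the method needs no descriptors at all.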
We evaluate our approach on a range of benchmark datasets, demonstrating state‐of‐the‐art performance on non‐isometric benchmarks and near state‐of‐the‐art performance on isometric ones.

Item Quad‐fisheye Image Stitching for Monoscopic Panorama Reconstruction (© 2022 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022) Cheng, Haojie; Xu, Chunxiao; Wang, Jiajun; Zhao, Lingxiao; Hauser, Helwig and Alliez, Pierre
A monoscopic panorama displays omnidirectional content surrounding the viewer. An increasingly popular way to reconstruct a panorama is to stitch a collection of fisheye images. However, such non‐planar views may suffer from problems such as distortion and boundary irregularities. In most cases, the computational expense of stitching non‐planar images is also too high for real‐time applications. In this paper, we propose a novel monoscopic panorama reconstruction pipeline that produces better quad‐fisheye image stitching results for omnidirectional environment viewing. The main idea is to apply mesh deformation for image alignment. To optimize inter‐lens parallaxes, unwarped images are first cropped and reshuffled to facilitate circular environment‐scene composition via the seamless ring connection of the panorama borders. Several mesh constraints are then adopted to ensure high alignment accuracy. After alignment, the boundary of the result is rectified to be rectangular to prevent gap artefacts. We further extend our approach to video stitching, adding a temporal smoothness model to prevent unexpected artefacts in panoramic videos. To support interactive applications, our stitching algorithm is implemented in CUDA. The camera motion and average gradient per video frame are further calculated to accelerate synchronous real‐life panoramic scene reconstruction and visualization.
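The unwarping step in pipelines like this rests on a fisheye lens model that turns a pixel into a viewing ray. A common generic choice is the equidistant model r = f·θ; the sketch below uses it purely for illustration (the actual lens model, centre and focal length depend on the camera, and the function name is ours):

```python
import numpy as np

def fisheye_ray(u, v, cx, cy, f):
    """Back-project a pixel of an equidistant fisheye image (r = f * theta)
    to a unit viewing ray in camera coordinates (z = optical axis)."""
    dx, dy = u - cx, v - cy
    r = np.hypot(dx, dy)
    theta = r / f                         # angle from the optical axis
    phi = np.arctan2(dy, dx)              # azimuth around the axis
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])
```

Mapping each ray to panorama coordinates gives the unwarped image; the mesh deformation described above then nudges these mappings so that overlapping lens regions align despite inter-lens parallax.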
Experimental results demonstrate that our method has advantages in terms of alignment accuracy, adaptability and the image quality of the stitching results.

Item Real‐Time FE Simulation for Large‐Scale Problems Using Precondition‐Based Contact Resolution and Isolated DOFs Constraints (© 2022 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022) Zeng, Z.; Cotin, S.; Courtecuisse, H.; Hauser, Helwig and Alliez, Pierre
This paper presents a fast method for computing large‐scale problems in real‐time finite element simulations in the presence of contact and friction. The approach uses a precondition‐based contact resolution that performs a Cholesky decomposition at low frequency. Exploiting the sparsity of the assembled matrices, we propose a reduced and parallel computation scheme to address the expensive computation of the Schur complement arising from detailed meshes and accurate contact responses. An efficient GPU‐based solver is developed to parallelise the computation, making it possible to provide real‐time simulations in the presence of coupled constraints for contact and friction response. In addition, the preconditioner is updated at low frequency, allowing reuse of the factorised system. To achieve a further speedup, we propose a strategy to share the resolution information between consecutive time steps. We evaluate the performance of our method in different contact applications and compare it with typical approaches on CPU and GPU.

Item Reconstructing Recognizable 3D Face Shapes based on 3D Morphable Models (© 2022 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2022) Jiang, Diqiong; Jin, Yiwei; Zhang, Fang‐Lue; Lai, Yu‐Kun; Deng, Risheng; Tong, Ruofeng; Tang, Min; Hauser, Helwig and Alliez, Pierre
Many recent works have reconstructed distinctive 3D face shapes by aggregating the shape parameters of the same identity and separating those of different people based on parametric models (e.g.
3D morphable models (3DMMs)). However, despite the high accuracy of these shape parameters in the face recognition task, the visual discrimination of face shapes reconstructed from those parameters remains unsatisfactory. Previous works have not answered the following research question: do discriminative shape parameters guarantee visual discrimination in the represented 3D face shapes? This paper analyses the relationship between shape parameters and reconstructed shape geometry, and proposes a novel shape identity‐aware regularization (SIR) loss for shape parameters, aiming at increasing discriminability in both the shape‐parameter and shape‐geometry domains. Moreover, to cope with the lack of training data containing both landmark and identity annotations, we propose a network structure and an associated training strategy to leverage mixed data containing either identity or landmark labels. In addition, since face recognition accuracy does not imply the recognizability of face shapes reconstructed from the shape parameters, we propose the SIR metric to measure the discriminability of face shapes. We compare our method with existing methods in terms of reconstruction error, visual discriminability and the face recognition accuracy of the shape parameters, as well as the SIR metric. Experimental results show that our method outperforms state‐of‐the‐art methods. The code will be released at .
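For readers unfamiliar with 3DMMs: the relationship between shape parameters and shape geometry that this paper analyses is, at its core, a linear decoder. A minimal sketch (generic 3DMM structure, with toy dimensions; not the authors' model):

```python
import numpy as np

def reconstruct_shape(mean, basis, params):
    """Linear 3DMM decoder: vertices = mean + basis @ params.

    mean   : (3n,)   mean face shape (stacked xyz coordinates)
    basis  : (3n, k) principal shape components
    params : (k,)    identity shape parameters
    """
    return mean + basis @ params
```

Because the map from `params` to geometry is linear, parameters that are well separated for recognition need not produce visually distinct meshes, which is the gap the SIR loss is designed to close.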