37-Issue 6
Item Direct Position‐Based Solver for Stiff Rods (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Deul, Crispin; Kugelstadt, Tassilo; Weiler, Marcel; Bender, Jan; Chen, Min and Benes, Bedrich
In this paper, we present a novel direct solver for the efficient simulation of stiff, inextensible elastic rods within the position‐based dynamics (PBD) framework. It is based on the XPBD algorithm, which extends PBD to simulate elastic objects with physically meaningful material parameters. XPBD approximates an implicit Euler integration and solves the system of non‐linear equations using a non‐linear Gauss–Seidel solver. However, this solver requires many iterations to converge for complex models, and if convergence is not reached, the material becomes too soft. In contrast, we use Newton iterations in combination with our direct solver to solve the non‐linear equations, which significantly improves convergence by solving all constraints of an acyclic structure (tree) simultaneously. Our solver only requires a few Newton iterations to achieve high stiffness and inextensibility. We model inextensible rods and trees using rigid segments connected by constraints. Bending and twisting constraints are derived from the well‐established Cosserat model. The high performance of our solver is demonstrated in highly realistic simulations of rods consisting of tens of thousands of segments. In summary, our method allows the efficient simulation of stiff rods in the PBD framework with a speedup of two orders of magnitude compared to the original XPBD approach.
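To make the convergence issue described in this abstract concrete, here is a deliberately simplified, pure-Python sketch of the kind of non-linear Gauss–Seidel constraint projection XPBD performs on a particle chain. The function name, chain setup and parameters are invented for illustration; the paper's contribution is precisely to replace this per-constraint loop with a direct tree solver driven by Newton iterations.

```python
import math

def xpbd_chain_solve(x, inv_mass, rest_len, compliance, dt, iterations):
    """Gauss-Seidel projection of the distance constraints of a 2D particle chain.

    x: list of [px, py] positions, inv_mass: per-particle inverse mass (0 pins a particle),
    compliance: material softness (0 = perfectly stiff), dt: time step.
    """
    lam = [0.0] * (len(x) - 1)       # one Lagrange multiplier per constraint
    alpha = compliance / (dt * dt)   # time-step-scaled compliance
    for _ in range(iterations):
        for i in range(len(x) - 1):
            dx = x[i + 1][0] - x[i][0]
            dy = x[i + 1][1] - x[i][1]
            dist = math.hypot(dx, dy)
            c = dist - rest_len                    # constraint violation
            nx, ny = dx / dist, dy / dist          # constraint gradient direction
            w = inv_mass[i] + inv_mass[i + 1]
            dlam = (-c - alpha * lam[i]) / (w + alpha)
            lam[i] += dlam
            x[i][0] -= inv_mass[i] * dlam * nx
            x[i][1] -= inv_mass[i] * dlam * ny
            x[i + 1][0] += inv_mass[i + 1] * dlam * nx
            x[i + 1][1] += inv_mass[i + 1] * dlam * ny
    return x
```

With compliance 0 the update reduces to classic PBD projection; the many sweeps needed for long chains are exactly the cost the paper's direct solver avoids.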
Item Field‐Aligned Isotropic Surface Remeshing (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Du, Xingyi; Liu, Xiaohan; Yan, Dong‐Ming; Jiang, Caigui; Ye, Juntao; Zhang, Hui; Chen, Min and Benes, Bedrich
We present a novel isotropic surface remeshing algorithm that automatically aligns the mesh edges with an underlying directional field. The alignment is achieved by minimizing an energy function that combines both centroidal Voronoi tessellation (CVT) and the penalty enforced by a six‐way rotational symmetry field. The CVT term ensures uniform distribution of the vertices and high remeshing quality, and the field constraint enforces the directional alignment of the edges. Experimental results show that the proposed approach has the advantages of isotropic and field‐aligned remeshing. Our algorithm is superior to the representative state‐of‐the‐art approaches in various aspects.
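The CVT term of the energy mentioned in this abstract can be illustrated by plain Lloyd relaxation, shown here in a hypothetical, discretized 2D form (the field-alignment penalty, which is the paper's addition, is omitted):

```python
def lloyd_cvt(seeds, samples, iterations=20):
    """Discrete Lloyd relaxation: repeatedly move every seed to the centroid
    of its Voronoi region, estimated from a dense set of sample points.
    Minimising this CVT energy spreads the seeds uniformly over the domain."""
    for _ in range(iterations):
        acc = {i: [0.0, 0.0, 0] for i in range(len(seeds))}
        for sx, sy in samples:
            # nearest-seed assignment = membership in that seed's Voronoi cell
            i = min(range(len(seeds)),
                    key=lambda k: (seeds[k][0] - sx) ** 2 + (seeds[k][1] - sy) ** 2)
            acc[i][0] += sx; acc[i][1] += sy; acc[i][2] += 1
        for i, (ax, ay, n) in acc.items():
            if n:  # empty cells keep their seed in place
                seeds[i] = (ax / n, ay / n)
    return seeds
```

Even from a badly clustered initialization, the seeds spread out toward a uniform distribution, which is the "uniform distribution of the vertices" the CVT term provides.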
Item The State of the Art in Vortex Extraction (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Günther, Tobias; Theisel, Holger; Chen, Min and Benes, Bedrich
Vortices are commonly understood as rotating motions in fluid flows. The analysis of vortices plays an important role in numerous scientific applications, such as engineering, meteorology, oceanology and medicine. The successful analysis consists of three steps: vortex definition, extraction and visualization. All three have a long history, and the early themes and topics from the 1970s survived to this day, namely the identification of vortex cores, their extent and the choice of suitable reference frames. This paper provides an overview of the advances that have been made in the last 40 years. We provide sufficient background on differential vector field calculus, extraction techniques like critical point search and the parallel vectors operator, and we introduce the notion of reference frame invariance. We explain the most important region‐based and line‐based methods, integration‐based and geometry‐based approaches, recent objective techniques, the selection of reference frames by means of flow decompositions, as well as a recent local optimization‐based technique. We point out relationships between the various approaches, classify the literature and identify open problems and challenges for future work.
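As a minimal, assumed example of the kind of differential quantity the surveyed methods build on (not a technique from the survey itself), the vorticity of a sampled 2D velocity field can be computed with central differences:

```python
def vorticity_2d(u, v, h):
    """Central-difference vorticity w = dv/dx - du/dy on a uniform grid.

    u[i][j], v[i][j] are velocity components sampled at x = i*h, y = j*h;
    only interior points are evaluated. Large |w| is the classic
    region-based vortex indicator that later criteria refine."""
    ni, nj = len(u), len(u[0])
    w = [[0.0] * nj for _ in range(ni)]
    for i in range(1, ni - 1):
        for j in range(1, nj - 1):
            dvdx = (v[i + 1][j] - v[i - 1][j]) / (2 * h)
            dudy = (u[i][j + 1] - u[i][j - 1]) / (2 * h)
            w[i][j] = dvdx - dudy
    return w
```

For a rigid rotation u = -y, v = x, this recovers the constant vorticity 2 everywhere in the interior.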
Item Reproducing Spectral Reflectances From Tristimulus Colours (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Otsu, H.; Yamamoto, M.; Hachisuka, T.; Chen, Min and Benes, Bedrich
Physically based rendering systems often support spectral rendering to simulate light transport in the real world. Material representations in such simulations need to be defined as spectral distributions. Since commonly available material data are in tristimulus colours, we ideally would like to obtain spectral distributions from tristimulus colours as input to spectral rendering systems. Reproducing a spectral distribution from a tristimulus colour, however, has been considered an ill‐posed problem, since a single tristimulus colour corresponds to a set of different spectra due to metamerism. We show how to resolve this problem using a data‐driven approach based on measured spectra and propose a practical algorithm that can faithfully reproduce a corresponding spectrum from the given tristimulus colour alone. The key observation in colour science is that a natural measured spectrum is usually well approximated by a weighted sum of a few basis functions. We show how to reformulate the conversion of tristimulus colours to spectra via principal component analysis. To improve the accuracy of the conversion, we propose a greedy clustering algorithm which minimizes the reconstruction error. Using pre‐computation, the runtime computation is just a single matrix multiplication with an input tristimulus colour. Numerical experiments show that our method reproduces the reference measured spectra well using only the tristimulus colours as input.
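A toy sketch of the basis-function idea in this abstract: represent a spectrum as a weighted sum of three basis spectra and solve a 3×3 system so the result reproduces the input tristimulus value. The tiny 3-sample "spectra" and function names here are invented for illustration; the paper additionally derives the basis from measured data via PCA and greedy clustering.

```python
def spectrum_from_rgb(rgb, basis, cmf):
    """Reconstruct a spectrum as a weighted sum of 3 basis spectra so that
    it reproduces the given tristimulus value exactly.

    basis: 3 lists of N samples (e.g. a PCA basis of measured spectra),
    cmf:   3 lists of N samples (colour-matching / sensor response curves).
    Solves the 3x3 system (cmf . basis) w = rgb, returns sum_k w_k basis_k.
    """
    # A[r][k] = response of channel r to basis spectrum k
    A = [[sum(c * b for c, b in zip(cmf[r], basis[k])) for k in range(3)]
         for r in range(3)]
    w = solve3(A, list(rgb))
    n = len(basis[0])
    return [sum(w[k] * basis[k][i] for k in range(3)) for i in range(n)]

def solve3(A, b):
    """Gauss-Jordan elimination with partial pivoting for a 3x3 system."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        p = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        for r in range(3):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [a - f * c for a, c in zip(M[r], M[col])]
    return [M[r][3] / M[r][r] for r in range(3)]
```

Once the basis is fixed, the whole conversion collapses to one small matrix multiply at runtime, matching the pre-computation claim in the abstract.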
Item Laplace–Beltrami Operator on Point Clouds Based on Anisotropic Voronoi Diagram (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Qin, Hongxing; Chen, Yi; Wang, Yunhai; Hong, Xiaoyang; Yin, Kangkang; Huang, Hui; Chen, Min and Benes, Bedrich
The symmetrizable and converged Laplace–Beltrami operator is an indispensable tool for spectral geometrical analysis of point clouds. The operator introduced by Liu et al. [LPG12] is guaranteed to be symmetrizable, but its convergence degrades when it is applied to models with sharp features. In this paper, we propose a novel operator which is not only symmetrizable but can also handle point‐sampled surfaces containing significant sharp features. By constructing the anisotropic Voronoi diagram in the local tangential space, the operator can be well constructed for any given point. To compute the area of an anisotropic Voronoi cell, we introduce an efficient approximation by projecting the cell to the local tangent plane, and we prove its convergence. We present numerical experiments that clearly demonstrate the robustness and efficiency of the proposed operator for point clouds that may contain noise, outliers, and non‐uniformities in thickness and spacing.
Moreover, we can show that its spectrum is more accurate than those of existing operators for scan points or surfaces with sharp features.
Item Sketching in Gestalt Space: Interactive Shape Abstraction through Perceptual Reasoning (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Kratt, J.; Niese, T.; Hu, R.; Huang, H.; Pirk, S.; Sharf, A.; Cohen‐Or, D.; Deussen, O.; Chen, Min and Benes, Bedrich
We present an interactive method that allows users to easily abstract complex 3D models with only a few strokes. The key idea is to employ well‐known Gestalt principles to help generalize user inputs into a full model abstraction while accounting for form, perceptual patterns and semantics of the model. Using these principles, we alleviate the user's need to explicitly define shape abstractions. We utilize structural characteristics such as repetitions, regularity and similarity to transform user strokes into full 3D abstractions.
As the user sketches over shape elements, we identify Gestalt groups and later abstract them to maintain their structural meaning. Unlike previous approaches, we operate directly on the geometric elements, in a sense applying Gestalt principles in 3D. We demonstrate the effectiveness of our approach with a series of experiments, including a variety of complex models and two extensive user studies to evaluate our framework.
Item On‐The‐Fly Tracking of Flame Surfaces for the Visual Analysis of Combustion Processes (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Oster, T.; Abdelsamie, A.; Motejat, M.; Gerrits, T.; Rössl, C.; Thévenin, D.; Theisel, H.; Chen, Min and Benes, Bedrich
The visual analysis of combustion processes is one of the challenges of modern flow visualization. In turbulent combustion research, the behaviour of the flame surface contains important information about the interactions between turbulence and chemistry.
The extraction and tracking of this surface is crucial for understanding combustion processes. This is impossible to realize as a post‐process because of the size of the involved datasets, which are too large to be stored on disk. We present an on‐the‐fly method for tracking the flame surface directly during simulation and computing the local tangential surface deformation for arbitrary time intervals. In a massively parallel simulation, the data are distributed over many processes and only a single time step is in memory at any time. To satisfy the demands on parallelism and accuracy posed by this situation, we track the surface with independent micro‐patches and adapt their distribution as needed to maintain numerical stability. With our method, we enable combustion researchers to observe the detailed movement and deformation of the flame surface over extended periods of time and thus gain novel insights into the mechanisms of turbulence–chemistry interactions. We validate our method on analytic ground truth data and show its applicability on two real‐world simulations.
Item Temporally Consistent Motion Segmentation From RGB‐D Video (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Bertholet, P.; Ichim, A.E.; Zwicker, M.; Chen, Min and Benes, Bedrich
Temporally consistent motion segmentation from RGB‐D videos is challenging because of the limitations of current RGB‐D sensors. We formulate segmentation as a motion assignment problem, where a motion is a sequence of rigid transformations through all frames of the input. We capture the quality of each potential assignment by defining an appropriate energy function that accounts for occlusions and a sensor‐specific noise model. To make energy minimization tractable, we work with a discrete set instead of the continuous, high‐dimensional space of motions, where the discrete motion set provides an upper bound for the original energy. We repeatedly minimize our energy, and in each step extend and refine the motion set to further lower the bound. A quantitative comparison to the current state of the art demonstrates the benefits of our approach in difficult scenarios.
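The "motion assignment over a discrete motion set" formulation in this abstract can be sketched in a heavily reduced form: translation-only candidate motions and a pure data term (no occlusion handling or noise model, which the paper adds). All names and the setup are hypothetical.

```python
def assign_motions(tracks, motions):
    """Assign every point track to the candidate motion with the lowest
    residual energy: a tiny, translation-only version of casting motion
    segmentation as labelling over a discrete motion set.

    tracks:  list of point trajectories [(x0, y0), (x1, y1), ...]
    motions: list of candidate per-frame translations [(dx1, dy1), ...],
             where motion m predicts p[t+1] = p[t] + motions[m][t].
    Returns one motion label per track.
    """
    labels = []
    for tr in tracks:
        best, best_cost = 0, float("inf")
        for m, deltas in enumerate(motions):
            cost = 0.0
            for t in range(len(tr) - 1):
                px = tr[t][0] + deltas[t][0]   # predicted next position
                py = tr[t][1] + deltas[t][1]
                cost += (tr[t + 1][0] - px) ** 2 + (tr[t + 1][1] - py) ** 2
            if cost < best_cost:
                best, best_cost = m, cost
        labels.append(best)
    return labels
```

The paper alternates such a minimization with extending and refining the motion set, which progressively lowers the upper bound on the original continuous energy.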
Item A New Class of Guided C2 Subdivision Surfaces Combining Good Shape with Nested Refinement (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Karčiauskas, Kęstutis; Peters, Jörg; Chen, Min and Benes, Bedrich
Converting quadrilateral meshes to smooth manifolds, guided subdivision offers a way to combine the good highlight line distribution of recent G‐spline constructions with the refinability of subdivision surfaces. This avoids the complex refinement of G‐spline constructions and the poor shape of standard subdivision. Guided subdivision can then be used both to generate the surface and to hierarchically compute functions on the surface. Specifically, we present a subdivision algorithm of polynomial degree bi‐6 and a curvature‐bounded algorithm of degree bi‐5. We prove that the common eigenstructure of this class of subdivision algorithms is determined by their guide and demonstrate that their eigenspectrum (speed of contraction) can be adjusted without harming the shape. For practical implementation, a finite number of subdivision steps can be completed by a high‐quality cap. Near irregular points, this allows leveraging standard polynomial tools both for rendering the surface and for approximately integrating functions on it.
Item Re‐Weighting Firefly Samples for Improved Finite‐Sample Monte Carlo Estimates (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Zirr, Tobias; Hanika, Johannes; Dachsbacher, Carsten; Chen, Min and Benes, Bedrich
Samples with high contribution but low probability density, often called fireflies, occur in all practical Monte Carlo estimators and are part of computing unbiased estimates. For finite‐sample estimates, however, they can lead to excessive variance. Rejecting all samples classified as outliers, as suggested in previous work, leads to estimates that are too low and can cause undesirable artefacts. In this paper, we show how samples can be re‐weighted depending on their contribution and sampling frequency such that the finite‐sample estimate gets closer to the correct expected value and the variance can be controlled. To this end, we first derive a theory for how samples should ideally be re‐weighted, which would require the probability density function of the optimal sampling strategy. As this probability density function is generally unknown, we show how the discrepancy between the optimal and the actual sampling strategy can be estimated and used for re‐weighting in practice. We describe an efficient algorithm that allows for the necessary analysis of per‐pixel sample distributions in the context of Monte Carlo rendering without storing any individual samples, with only minimal changes to the rendering algorithm. It causes negligible runtime overhead, works in constant memory and is well suited for parallel and progressive rendering.
The re‐weighting runs as a fast post‐process, can be controlled interactively, and our approach is non‐destructive in that the unbiased result can be reconstructed at any time.
Item Localized Manifold Harmonics for Spectral Shape Analysis (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Melzi, S.; Rodolà, E.; Castellani, U.; Bronstein, M. M.; Chen, Min and Benes, Bedrich
The use of Laplacian eigenfunctions is ubiquitous in a wide range of computer graphics and geometry processing applications. In particular, Laplacian eigenbases allow generalizing the classical Fourier analysis to manifolds.
A key drawback of such bases is their inherently global nature, as the Laplacian eigenfunctions carry geometric and topological structure of the entire manifold. In this paper, we introduce a new framework for local spectral shape analysis. We show how to efficiently construct localized orthogonal bases by solving an optimization problem that in turn can be posed as the eigendecomposition of a new operator obtained by a modification of the standard Laplacian. We study the theoretical and computational aspects of the proposed framework and showcase our new construction on the classical problems of shape approximation and correspondence. We obtain significant improvement compared to classical Laplacian eigenbases as well as other alternatives for constructing localized bases.
Item Issue Information (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Chen, Min and Benes, Bedrich
Item Quantitative and Qualitative Analysis of the Perception of Semi‐Transparent Structures in Direct Volume Rendering (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Englund, R.; Ropinski, T.; Chen, Min and Benes, Bedrich
Direct Volume Rendering (DVR) provides the possibility to visualize volumetric data sets as they occur in many scientific disciplines.
In DVR, semi‐transparency is used to convey the complexity of the data. Unfortunately, the ambiguities inherent to semi‐transparent representations make the data harder to comprehend spatially. Accordingly, many techniques have been introduced to enhance the spatial comprehension of DVR images. In this paper, we present our findings obtained from two evaluations investigating the perception of semi‐transparent structures in volume‐rendered images. We have conducted a user evaluation in which we have compared standard DVR with five techniques previously proposed to enhance the spatial comprehension of DVR images. In this study, we investigated the perceptual performance of these techniques and compared them against each other in a large‐scale quantitative user study with 300 participants. Each participant completed micro‐tasks designed such that the aggregated feedback gives insight into how well these techniques aid the user in perceiving the depth and shape of objects. To further clarify the findings, we conducted a qualitative evaluation in which we interviewed three experienced visualization researchers, in order to identify the benefits and shortcomings of the individual techniques.
Item Feature of Interest‐Based Direct Volume Rendering Using Contextual Saliency‐Driven Ray Profile Analysis (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Jung, Y.; Kim, J.; Kumar, A.; Feng, D.D.; Fulham, M.; Chen, Min and Benes, Bedrich
Direct volume rendering (DVR) visualization helps interpretation because it allows users to focus attention on the subset of volumetric data that is of most interest to them. The ideal visualization of the features of interest (FOIs) in a volume, however, is still a major challenge.
The clear depiction of FOIs depends on accurate identification of the FOIs and on appropriate specification of the optical parameters via transfer function (TF) design, which is typically a repetitive trial‐and‐error process. We address this challenge by introducing a new method that uses contextual saliency information to group the voxels along a viewing ray into distinct FOIs, where ‘contextual saliency’ is a biologically inspired attribute that aids the identification of features that the human visual system considers important. The saliency information is also used to automatically define the optical parameters that emphasize the visual depiction of the FOIs in DVR. We demonstrate the capabilities of our method by applying it to a variety of volumetric data sets and highlight its advantages by comparison to current state‐of‐the‐art ray profile analysis methods.
Item Part‐Based Mesh Segmentation: A Survey (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Rodrigues, Rui S. V.; Morgado, José F. M.; Gomes, Abel J. P.; Chen, Min and Benes, Bedrich
This paper surveys mesh segmentation techniques and algorithms, with a focus on part‐based segmentation, that is, segmentation that divides a mesh (representing a 3D object) into meaningful parts. Part‐based segmentation applies to a single object and also to a family of objects (i.e. co‐segmentation). We do not address chart‐based segmentation here, though some mesh co‐segmentation methods employ it in the initial step of their pipeline. Finally, the taxonomy proposed in this paper is new in the sense that it classifies each segmentation algorithm by the dimension (i.e. 1D, 2D or 3D) of the representation of object parts.
The leading idea behind this survey is to identify the properties and limitations of the state‐of‐the‐art algorithms to shed light on the challenges for future work.
Item An Implicit SPH Formulation for Incompressible Linearly Elastic Solids (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Peer, Andreas; Gissler, Christoph; Band, Stefan; Teschner, Matthias; Chen, Min and Benes, Bedrich
We propose a novel smoothed particle hydrodynamics (SPH) formulation for deformable solids. Key aspects of our method are implicit elastic forces and an adapted SPH formulation for the deformation gradient that—in contrast to previous work—allows a rotation extraction directly from the SPH deformation gradient. The proposed implicit concept is entirely based on linear formulations. As a linear strain tensor is used, a rotation‐aware computation of the deformation gradient is required. In contrast to existing work, the respective rotation estimation is entirely realized within the SPH concept using a novel formulation with incorporated kernel gradient correction for first‐order consistency.
The proposed implicit formulation and the adapted rotation estimation allow for significantly larger time steps and higher stiffness compared to explicit forms; performance gain factors of up to one hundred are presented. Incompressibility of deformable solids is accounted for with an ISPH pressure solver. This further allows for a pressure‐based boundary handling and a unified processing of deformables interacting with SPH fluids and rigid bodies. Self‐collisions are implicitly handled by the pressure solver.

Item Visually Supporting Multiple Needle Placement in Irreversible Electroporation Interventions (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Kreiser, J.; Freedman, J.; Ropinski, T.; Chen, Min and Benes, Bedrich
Irreversible electroporation (IRE) is a minimally invasive technique for small tumour ablation. Multiple needles are inserted around the planned treatment zone and, depending on its size, inside it as well. An applied electric field triggers instant cell death around this zone. To ensure the correct application of IRE, certain criteria need to be fulfilled.
The needles have to be placed parallel to each other in the tissue, at the same depth, and in a pattern that allows the electric field to effectively destroy the targeted lesions. As multiple needles need to fulfill these criteria simultaneously, it is challenging for the surgeon to perform a successful IRE. Therefore, we propose a visualization which exploits intuitive visual coding to support the surgeon when conducting IREs. We consider two scenarios: first, monitoring IRE parameters while inserting needles during laparoscopic surgery; second, validating IRE parameters in post‐placement scenarios using computed tomography. With the help of a lightweight, easy‐to‐comprehend visualization, surgeons can quickly detect what needs to be adjusted. We have evaluated our visualization together with surgeons to investigate its practical use for IRE liver ablations. A quantitative study shows its effectiveness compared to a single 3D view placement method.
Item Bidirectional Rendering of Vector Light Transport (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Jarabo, Adrian; Arellano, Victor; Chen, Min and Benes, Bedrich
At the foundation of many rendering algorithms lies the symmetry between the path traversed by light and its adjoint path starting from the camera. However, several effects, including polarization and fluorescence, break that symmetry and are defined only along the direction of light propagation. This reduces the applicability of bidirectional methods that exploit this symmetry to simulate light transport effectively. In this work, we focus on how to include these non‐symmetric effects within a bidirectional rendering algorithm. We generalize the path integral to support the constraints imposed by non‐symmetric light transport. Based on this theoretical framework, we propose modifications to two bidirectional methods, namely bidirectional path tracing and photon mapping, extending them to support polarization and fluorescence, in both steady and transient state.
Item A Study of the Effect of Doughnut Chart Parameters on Proportion Estimation Accuracy (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Cai, X.; Efstathiou, K.; Xie, X.; Wu, Y.; Shi, Y.; Yu, L.; Chen, Min and Benes, Bedrich
Pie and doughnut charts nicely convey the part–whole relationship, and they have become the most recognizable chart types for representing proportions in business and data statistics. Many experiments have been carried out to study human perception of the pie chart, while the corresponding aspects of the doughnut chart have seldom been tested, even though the two chart types share several similarities. In this paper, we report on a series of experiments in which we explored the effect of a few fundamental design parameters of doughnut charts, and of additional visual cues, on the accuracy of proportion estimates. Since mobile devices are becoming the primary devices for casual reading, we performed all our experiments on such devices. Moreover, the screen size of mobile devices is limited, and it is therefore important to know how such a size constraint affects proportion accuracy. For this reason, in our first experiment we tested the chart size and found that it has no significant effect on proportion accuracy. In our second experiment, we focused on the effect of the doughnut chart's inner radius and found that proportion accuracy is insensitive to the inner radius, except in the case of the thinnest doughnut chart. In the third experiment, we studied the effect of visual cues and found that marking the centre of the doughnut chart or adding tick marks at 25% intervals improves proportion accuracy.
Based on the results of the three experiments, we discuss the design of doughnut charts and offer suggestions for improving the accuracy of proportion estimates.

Item A Survey of Surface‐Based Illustrative Rendering for Visualization (© 2018 The Eurographics Association and John Wiley & Sons Ltd., 2018) Lawonn, Kai; Viola, Ivan; Preim, Bernhard; Isenberg, Tobias; Chen, Min and Benes, Bedrich
In this paper, we survey illustrative rendering techniques for 3D surface models. We first discuss the field of illustrative visualization in general and provide a new definition for this sub‐area of visualization. For the remainder of the survey, we then focus on surface‐based models. We start by briefly summarizing the differential geometry fundamental to many approaches and discuss additional general requirements for the underlying models and the methods' implementations.
We then provide an overview of low‐level illustrative rendering techniques including sparse lines, stippling and hatching, and illustrative shading, connecting each of them to practical examples of visualization applications. We also mention evaluation approaches and list various application fields, before we close with a discussion of the state of the art and future work.