36-Issue 6
Browsing 36-Issue 6 by Issue Date
Now showing 1 - 20 of 26
Item Stress‐Constrained Thickness Optimization for Shell Object Fabrication (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Zhao, Haiming; Xu, Weiwei; Zhou, Kun; Yang, Yin; Jin, Xiaogang; Wu, Hongzhi; Chen, Min and Zhang, Hao (Richard)
We present an approach to fabricate shell objects with thickness parameters, which are computed to maintain the user‐specified structural stability. Given a boundary surface and user‐specified external forces, we optimize the thickness parameters according to stress constraints to extrude the surface. Our approach mainly consists of two technical components: First, we develop a patch‐based shell simulation technique to efficiently support the static simulation of extruded shell objects using finite element methods. Second, we analytically compute the derivative of stress required in the sensitivity analysis technique to turn the optimization into a sequential linear programming problem. Experimental results demonstrate that our approach can optimize the thickness parameters for arbitrary surfaces in a few minutes and well predict the physical properties, such as the deformation and stress of the fabricated object.
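The sensitivity-driven optimization described in the abstract can be illustrated with a deliberately tiny sketch: a single scalar thickness, a closed-form stress model and its analytic derivative, iterated in the manner of sequential linear programming. The stress model sigma = F / t and all numbers here are hypothetical stand-ins for the paper's FEM-based formulation.

```python
# Toy sequential linear programming (SLP) iteration in the spirit of
# stress-constrained thickness optimization: minimize material (thickness t)
# subject to sigma(t) = F / t <= sigma_max, repeatedly linearizing the
# constraint via its analytic derivative.
# (One scalar thickness and a made-up stress model, not the paper's code.)

F, sigma_max = 100.0, 25.0          # load and allowable stress (hypothetical)
t = 10.0                            # initial thickness (feasible)
for _ in range(50):
    sigma = F / t
    dsigma_dt = -F / t**2           # analytic stress sensitivity
    # Linearized constraint sigma + dsigma_dt * (t_new - t) <= sigma_max,
    # with the objective "minimize t_new", is solved in closed form here:
    t_new = t + (sigma_max - sigma) / dsigma_dt
    t_new = max(t_new, 0.5 * t)     # trust region keeps the step safe
    if abs(t_new - t) < 1e-9:
        break
    t = t_new
print(round(t, 4))                  # converges to F / sigma_max = 4.0
```

In the paper the subproblem is a full linear program over per-patch thicknesses; the scalar version collapses to a Newton-like update, but the structure (analytic sensitivity, linearize, solve, repeat) is the same.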
Item Muscle‐Based Control for Character Animation (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Cruz Ruiz, A.L.; Pontonnier, C.; Pronost, N.; Dumont, G.; Chen, Min and Zhang, Hao (Richard)
Muscle‐based control is transforming the field of physics‐based character animation through the integration of knowledge from neuroscience, biomechanics and robotics, which enhances motion realism. Since any physics‐based animation system can be extended to a muscle‐actuated system, the possibilities for growth are tremendous. However, modelling muscles and their control remains a difficult challenge. We present an organized review of over a decade of research in muscle‐based control for character animation, its fundamental concepts and future directions for development. The core of this review contains a classification of control methods, tables summarizing their key aspects and popular neuromuscular functions used within these controllers, all with the purpose of providing the reader with an overview of the field.
Item Constrained Modelling of 3‐Valent Meshes Using a Hyperbolic Deformation Metric (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Richter, Ronald; Kyprianidis, Jan Eric; Springborn, Boris; Alexa, Marc; Chen, Min and Zhang, Hao (Richard)
Polygon meshes with 3‐valent vertices often occur as the frame of free‐form surfaces in architecture, in which rigid beams are connected in rigid joints. For modelling such meshes, it is desirable to measure the deformation of the joints' shapes. We show that it is natural to represent joint shapes as points in hyperbolic 3‐space. This endows the space of joint shapes with a geometric structure that facilitates computation. We use this structure to optimize meshes towards different constraints, and we believe that it will be useful for other applications as well.

Item Interactive Lenses for Visualization: An Extended Survey (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Tominski, C.; Gladisch, S.; Kister, U.; Dachselt, R.; Schumann, H.; Chen, Min and Zhang, Hao (Richard)
The elegance of using virtual interactive lenses to provide alternative visual representations for selected regions of interest is highly valued, especially in the realm of visualization. Today, more than 50 lens techniques are known in the closer context of visualization, far more in related fields. In this paper, we extend our previous survey on interactive lenses for visualization. We propose a definition and a conceptual model of lenses as extensions of the classic visualization pipeline.
An extensive review of the literature covers lens techniques for different types of data and different user tasks, and also includes the technologies employed to display lenses and to interact with them. We introduce a taxonomy of lenses for visualization and illustrate its utility by dissecting in detail a multi‐touch lens for exploring large graph layouts. As a conclusion of our review, we identify challenges and unsolved problems to be addressed in future research.

Item 4D Reconstruction of Blooming Flowers (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Zheng, Qian; Fan, Xiaochen; Gong, Minglun; Sharf, Andrei; Deussen, Oliver; Huang, Hui; Chen, Min and Zhang, Hao (Richard)
Flower blooming is a beautiful phenomenon in nature: flowers open in an intricate and complex manner as petals bend, stretch and twist under various deformations. Flower petals are typically thin structures arranged in tight configurations with heavy self‐occlusions. Thus, capturing and reconstructing spatially and temporally coherent sequences of blooming flowers is highly challenging. Early in the process only exterior petals are visible, and thus interior parts will be completely missing in the captured data.
Utilizing commercially available 3D scanners, we capture the visible parts of blooming flowers into a sequence of 3D point clouds. We reconstruct the flower geometry and deformation over time using a template‐based dynamic tracking algorithm. To track and model interior petals hidden in early stages of the blooming process, we employ an adaptively constrained optimization. Flower characteristics are exploited to track petals both forward and backward in time. Our methods allow us to faithfully reconstruct the flower blooming process of different species. In addition, we provide comparisons with state‐of‐the‐art physical simulation‐based approaches and evaluate our approach by using photos of captured real flowers.
Item Inverse Modelling of Incompressible Gas Flow in Subspace (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Zhai, Xiao; Hou, Fei; Qin, Hong; Hao, Aimin; Chen, Min and Zhang, Hao (Richard)
This paper advocates a novel method for modelling physically realistic flow from a captured incompressible gas sequence via modal analysis in a frequency‐constrained subspace. Our analytical tool is uniquely founded upon empirical mode decomposition (EMD) and modal reduction for fluids, which are seamlessly integrated towards a powerful, style‐controllable flow modelling approach. We first extend EMD, which is capable of processing 1D time series but has previously shown inadequacies for 3D graphics, to fit gas flows in 3D. Next, frequency components from EMD are adopted as candidate vectors for the bases of modal reduction. The prerequisite parameters of the Navier–Stokes equations are then optimized to inversely model the physically realistic flow in the frequency‐constrained subspace. The estimated parameters can be utilized for re‐simulation, or be altered towards fluid editing. Our novel inverse‐modelling technique produces real‐time gas sequences after precomputation, and is convenient to couple with other methods for visual enhancement and/or special visual effects. We integrate our new modelling tool with a state‐of‐the‐art fluid capturing approach, forming a complete pipeline from real‐world fluid to flow re‐simulation and editing for various graphics applications.
Item A Survey of Cardiac 4D PC‐MRI Data Processing (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Köhler, Benjamin; Born, Silvia; van Pelt, Roy F. P.; Hennemuth, Anja; Preim, Uta; Preim, Bernhard; Chen, Min and Zhang, Hao (Richard)
Cardiac four‐dimensional phase‐contrast magnetic resonance imaging (4D PC‐MRI) acquisitions have gained increasing clinical interest in recent years. They make it possible to non‐invasively obtain extensive information about patient‐specific hemodynamics, and thus have great potential to improve the diagnosis, prognosis and therapy planning of cardiovascular diseases. A dataset contains time‐resolved, three‐dimensional blood flow directions and strengths, making comprehensive qualitative and quantitative data analysis possible. Quantitative measures, such as stroke volumes, help to assess cardiac function and to monitor disease progression. Qualitative analysis allows investigation of abnormal flow characteristics, such as vortices, which are correlated with different pathologies. Processing the data comprises complex image processing methods, as well as flow analysis and visualization. In this work, we mainly focus on the aorta. We provide an overview of data measurement and pre‐processing, as well as current visualization and quantification methods.
This allows other researchers to quickly catch up with the topic and take on new challenges to further investigate the potential of 4D PC‐MRI data.

Item Issue Information (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Chen, Min and Zhang, Hao (Richard)

Item Scalable Feature‐Preserving Irregular Mesh Coding (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) El Sayeh Khalil, J.; Munteanu, A.; Denis, L.; Lambert, P.; Walle, R.; Chen, Min and Zhang, Hao (Richard)
This paper presents a novel wavelet‐based transform and coding scheme for irregular meshes. The transform preserves geometric features at lower resolutions by adaptive vertex sampling and retriangulation, resulting in more accurate subsampling and better avoidance of smoothing and aliasing artefacts. By employing octree‐based coding techniques, the encoding of both connectivity and geometry information is decoupled from any mesh traversal order, and allows for exploiting the intra‐band statistical dependencies between wavelet coefficients. Improvements over the state of the art obtained by our approach are three‐fold: (1) improved rate–distortion performance over Wavemesh and IPR for both the Hausdorff and root mean square distances at low‐to‐mid‐range bitrates, most obvious when clear geometric features are present, while remaining competitive for smooth, feature‐poor models; (2) improved rendering performance at any triangle budget, translating to a better quality for the same runtime memory footprint; (3) improved visual quality when applying similar limits to the bitrate or triangle budget, showing more pronounced improvements than rate–distortion curves.
Item Intrinsic Image Decomposition Using Multi‐Scale Measurements and Sparsity (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Ding, Shouhong; Sheng, Bin; Hou, Xiaonan; Xie, Zhifeng; Ma, Lizhuang; Chen, Min and Zhang, Hao (Richard)
Automatic decomposition of intrinsic images, especially for complex real‐world images, is a challenging under‐constrained problem. Thus, we propose a new algorithm that generates and combines multi‐scale properties of chromaticity differences and intensity contrast. The key observation is that the estimation of image reflectance, which is neither a pixel‐based nor a region‐based property, can be improved by using multi‐scale measurements of image content. The new algorithm iteratively coarsens a graph reflecting the reflectance similarity between neighbouring pixels. Then multi‐scale reflectance properties are aggregated so that the graph reflects the reflectance property at different scales. This is followed by a sparse regularization on the whole reflectance image, which enforces the variation in reflectance images to be high‐frequency and sparse. We formulate this problem through energy minimization, which can be solved efficiently within a few iterations. The effectiveness of the new algorithm is tested with the Massachusetts Institute of Technology (MIT) dataset, the Intrinsic Images in the Wild (IIW) dataset, and various natural images.
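The sparsity prior behind the reflectance regularization described above can be illustrated with the standard soft-thresholding operator (the proximal operator of the L1 norm), which keeps large variations and zeroes out small ones. This is a generic operator, not the paper's full energy minimization, and the input values are made up.

```python
import numpy as np

# Soft-thresholding: the building block of L1 (sparse) regularization.
# Entries smaller in magnitude than the threshold become exactly zero,
# which is what makes the regularized reflectance variation sparse.
def soft_threshold(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

# Hypothetical reflectance-gradient values: mostly tiny, a few large.
gradients = np.array([0.02, -0.9, 0.05, 1.4, -0.03])
sparse = soft_threshold(gradients, 0.1)
print(sparse)   # small entries are suppressed to exactly zero
```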
Item A Phase‐Based Approach for Animating Images Using Video Examples (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Prashnani, Ekta; Noorkami, Maneli; Vaquero, Daniel; Sen, Pradeep; Chen, Min and Zhang, Hao (Richard)
We present a novel approach for animating static images that contain objects that move in a subtle, stochastic fashion (e.g. rippling water, swaying trees, or flickering candles). To do this, our algorithm leverages example videos of similar objects, supplied by the user. Unlike previous approaches which estimate motion fields in the example video to transfer motion into the image, a process which is brittle and produces artefacts, we propose an Eulerian approach which uses the phase information from the sample video to animate the static image. As is well known, phase variations in a signal relate naturally to the displacement of the signal via the Fourier Shift Theorem. To enable local and spatially varying motion analysis, we analyse phase changes in a complex steerable pyramid of the example video. These phase changes are then transferred to the corresponding spatial sub‐bands of the input image to animate it. We demonstrate that this simple, phase‐based approach for transferring small motion is more effective at animating still images than methods which rely on optical flow.
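The Fourier Shift Theorem the abstract invokes is easy to demonstrate in one dimension: modifying only the phases of a signal's spectrum displaces the signal in space. This toy example uses a plain FFT, whereas the paper operates on a complex steerable pyramid of video frames to get localized, spatially varying phase changes.

```python
import numpy as np

# 1D illustration of the Fourier Shift Theorem: multiplying the spectrum
# by a linear phase ramp exp(-2*pi*i*k*shift) shifts the signal by
# `shift` samples, without touching the magnitudes.
n = 64
x = np.arange(n)
signal = np.exp(-0.5 * ((x - 20) / 3.0) ** 2)    # a bump centred at sample 20

shift = 5                                        # displace by 5 samples
freqs = np.fft.fftfreq(n)                        # frequencies in cycles/sample
spectrum = np.fft.fft(signal)
shifted = np.fft.ifft(spectrum * np.exp(-2j * np.pi * freqs * shift)).real

# Only phases changed; the bump is now centred at sample 25.
print(np.argmax(signal), np.argmax(shifted))     # 20 25
```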
Item A Descriptive Framework for Temporal Data Visualizations Based on Generalized Space‐Time Cubes (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Bach, B.; Dragicevic, P.; Archambault, D.; Hurter, C.; Carpendale, S.; Chen, Min and Zhang, Hao (Richard)
We present the generalized space‐time cube, a descriptive model for visualizations of temporal data. Visualizations are described as operations on the cube, which transform the cube's 3D shape into readable 2D visualizations. Operations include extracting subparts of the cube, flattening it across space or time, or transforming the cube's geometry and content. We introduce a taxonomy of elementary space‐time cube operations and explain how these operations can be combined and parameterized. The generalized space‐time cube has two properties: (1) it is purely conceptual without the need to be implemented, and (2) it applies to all datasets that can be represented in two dimensions plus time (e.g. geo‐spatial, videos, networks, multivariate data). The proper choice of space‐time cube operations depends on many factors, for example, density or sparsity of a cube. Hence, we propose a characterization of structures within space‐time cubes, which allows us to discuss strengths and limitations of operations.
We finally review interactive systems that support multiple operations, allowing a user to customize his view on the data. With this framework, we hope to facilitate the description, criticism and comparison of temporal data visualizations, as well as encourage the exploration of new techniques and systems. This paper is an extension of Bach et al.'s (2014) work.

Item Adaptive Physically Based Models in Computer Graphics (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Manteaux, P.‐L.; Wojtan, C.; Narain, R.; Redon, S.; Faure, F.; Cani, M.‐P.; Chen, Min and Zhang, Hao (Richard)
One of the major challenges in physically based modelling is making simulations efficient. Adaptive models provide an essential solution to these efficiency goals. These models are able to self‐adapt in space and time, attempting to provide the best possible compromise between accuracy and speed. This survey reviews the adaptive solutions proposed so far in computer graphics.
Models are classified according to the strategy they use for adaptation, from time‐stepping and freezing techniques to geometric adaptivity in the form of structured grids, meshes and particles. Applications range from fluids, through deformable bodies, to articulated solids.

Item Visual Text Analysis in Digital Humanities (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Jänicke, S.; Franzini, G.; Cheema, M. F.; Scheuermann, G.; Chen, Min and Zhang, Hao (Richard)
In 2005, Franco Moretti introduced Distant Reading to analyse entire literary text collections. This was a rather revolutionary idea compared to the traditional Close Reading, which focuses on the thorough interpretation of an individual work. Both reading techniques are the prior means of Visual Text Analysis. We present an overview of the research conducted since 2005 on supporting text analysis tasks with close and distant reading visualizations in the digital humanities. Therefore, we classify the observed papers according to a taxonomy of text analysis tasks, categorize applied close and distant reading techniques to support the investigation of these tasks and illustrate approaches that combine both reading techniques in order to provide a multi‐faceted view of the textual data. In addition, we take a look at the used text sources and at the typical data transformation steps required for the proposed visualizations.
Finally, we summarize collaboration experiences when developing visualizations for close and distant reading, and we give an outlook on future challenges in that research area.

Item Spectral Processing of Tangential Vector Fields (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Brandt, Christopher; Scandolo, Leonardo; Eisemann, Elmar; Hildebrandt, Klaus; Chen, Min and Zhang, Hao (Richard)
We propose a framework for the spectral processing of tangential vector fields on surfaces. The basis is a Fourier‐type representation of tangential vector fields that associates frequencies with tangential vector fields. To implement the representation for piecewise constant tangential vector fields on triangle meshes, we introduce a discrete Hodge–Laplace operator that fits conceptually to the prominent discretization of the Laplace–Beltrami operator. Based on the Fourier representation, we introduce schemes for spectral analysis, filtering and compression of tangential vector fields. Moreover, we introduce a spline‐type editor for modelling of tangential vector fields with interpolation constraints for the field itself and its divergence and curl. Using the spectral representation, we propose a numerical scheme that allows for real‐time modelling of tangential vector fields.
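The Fourier-type filtering scheme described above has a minimal scalar analogue: project a signal onto the eigenvectors of a discrete Laplacian, then damp the high-frequency coefficients. This 1D stand-in uses a path-graph Laplacian; the paper works with a discrete Hodge–Laplace operator on tangential vector fields over triangle meshes.

```python
import numpy as np

# Spectral low-pass filtering in a Laplacian eigenbasis (1D analogue).
n = 32
L = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)   # path-graph Laplacian
evals, evecs = np.linalg.eigh(L)                        # ascending "frequencies"

rng = np.random.default_rng(1)
signal = np.sin(np.linspace(0, 2 * np.pi, n)) + 0.3 * rng.standard_normal(n)

coeffs = evecs.T @ signal      # Fourier-type forward transform
coeffs[8:] = 0.0               # keep only the 8 lowest-frequency modes
smoothed = evecs @ coeffs      # inverse transform: a denoised signal
```

Compression in this picture is simply storing the few retained coefficients instead of the full signal; filtering is reweighting them.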
Item Time‐Continuous Quasi‐Monte Carlo Ray Tracing (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Gribel, C.J.; Akenine‐Möller, T.; Chen, Min and Zhang, Hao (Richard)
Domain‐continuous visibility determination algorithms have proved to be very efficient at reducing noise otherwise prevalent in stochastic sampling. Even though they come with an increased overhead in terms of geometrical tests and visibility information management, their analytical nature provides such a rich integral that the pay‐off is often worth it. This paper presents a time‐continuous, primary visibility algorithm for motion blur aimed at ray tracing. Two novel intersection tests are derived and implemented. The first is for ray versus moving triangle and the second for ray versus moving AABB intersection. A novel take on shading is presented as well, where the time continuum of visible geometry is adaptively point‐sampled. Static geometry is handled using supplemental stochastic rays in order to reduce spatial aliasing. Finally, a prototype ray tracer with a full time‐continuous traversal kernel is presented in detail. The results are based on a variety of test scenarios and show that even though our time‐continuous algorithm has limitations, it outperforms multi‐jittered quasi‐Monte Carlo ray tracing in terms of image quality at equal rendering time, within wide sampling rate ranges.
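For context on the ray versus moving AABB test mentioned above, here is the classic static slab test it generalizes; the paper's time-continuous version additionally lets the box bounds move over the exposure interval. This is the textbook static building block only, not the paper's derivation.

```python
# Classic slab test: intersect a ray (origin + t * direction) with a
# static axis-aligned bounding box given by corner points lo and hi.
# Returns the entry parameter t, or None on a miss.
def ray_aabb(origin, direction, lo, hi):
    t_near, t_far = 0.0, float("inf")
    for o, d, a, b in zip(origin, direction, lo, hi):
        if abs(d) < 1e-12:
            if o < a or o > b:        # ray parallel to slab and outside it
                return None
            continue
        t0, t1 = (a - o) / d, (b - o) / d
        if t0 > t1:
            t0, t1 = t1, t0           # order the slab intersections
        t_near, t_far = max(t_near, t0), min(t_far, t1)
        if t_near > t_far:            # slab intervals no longer overlap
            return None
    return t_near

print(ray_aabb((0, 0, -5), (0, 0, 1), (-1, -1, -1), (1, 1, 1)))  # 4.0
```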
Item Visualizing Group Structures in Graphs: A Survey (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Vehlow, Corinna; Beck, Fabian; Weiskopf, Daniel; Chen, Min and Zhang, Hao (Richard)
Graph visualizations encode relationships between objects. Abstracting the objects into group structures provides an overview of the data. Groups can be disjoint or overlapping, and might be organized hierarchically. However, the underlying graph still needs to be represented for analyzing the data in more depth. This work surveys research in visualizing group structures as part of graph diagrams. A particular focus is the explicit visual encoding of groups, rather than only using graph layout to indicate groups implicitly. We introduce a taxonomy of visualization techniques structuring the field into four main categories: visual node attributes vary properties of the node representation to encode the grouping, juxtaposed approaches use two separate visualizations, superimposed techniques work with two aligned visual layers, and embedded visualizations tightly integrate group and graph representation. The derived taxonomies for group structure and visualization types are also applied to group visualizations of edges. We survey group‐only, group–node, group–edge and group–network tasks that are described in the literature as use cases of group visualizations. We discuss results from evaluations of existing visualization techniques as well as main areas of application.
Finally, we report future challenges based on interviews we conducted with leading researchers of the field.

Item Reevaluating Reconstruction Filters for Path‐Searching Tasks in 3D (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Roberts, D. A. T.; Ivrissimtzis, I.; Chen, Min and Zhang, Hao (Richard)
In this paper, we present an experiment on stereoscopic direct volume rendering, aiming at understanding the relationship between the choice of reconstruction filter and participant performance on tasks requiring spatial understanding, such as 3D path‐searching. The focus of our study is on the impact on task performance of the post‐aliasing and smoothing produced by the reconstruction filters. We evaluated five reconstruction filters, each under two different transfer functions and two different displays with a wide range of behaviours in terms of post‐aliasing and smoothing.
We found that path‐searching tasks commonly found in the literature, such as the one we employed here, elicit bias in the responses, which should be taken into account when analysing the results. Our analysis, which employed both standard statistical tests and techniques from signal detection theory, indicates that the choice of reconstruction filter affects some aspects of the spatial understanding of the scene.

Item Visual Quantification of the Circle of Willis: An Automated Identification and Standardized Representation (© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Miao, H.; Mistelbauer, G.; Našel, C.; Gröller, M. E.; Chen, Min and Zhang, Hao (Richard)
This paper presents a method for the visual quantification of the cerebral arteries known as the Circle of Willis (CoW). It is an arterial structure responsible for supplying the brain with blood; however, dysfunctions can lead to strokes. The diagnosis of such a time‐critical event depends on the expertise of radiologists and the applied software tools. They use basic display methods of the volumetric data without any support of advanced image processing and visualization techniques. The goal of this paper is to present an automated method for the standardized description of cerebral arteries in stroke patients in order to provide an overview of the CoW's configuration.
This novel representation provides visual indications of problematic areas as well as straightforward comparisons between multiple patients. Additionally, we offer a pipeline for extracting the CoW from Time‐of‐Flight Magnetic Resonance Angiography (TOF‐MRA) data sets together with an enumeration technique for labelling the arterial segments by detecting the main supplying arteries of the CoW. We evaluated the feasibility of our visual quantification approach in a study of 63 TOF‐MRA data sets and compared our findings to those of three radiologists. The obtained results demonstrate that our proposed techniques are effective in detecting the arteries and visually capturing the overall configuration of the CoW.Item Dynamically Enriched MPM for Invertible Elasticity(© 2017 The Eurographics Association and John Wiley & Sons Ltd., 2017) Zhu, Fei; Zhao, Jing; Li, Sheng; Tang, Yong; Wang, Guoping; Chen, Min and Zhang, Hao (Richard)We extend the material point method (MPM) for robust simulation of extremely large elastic deformation.
This facilitates the application of MPM towards a unified solver, since its versatility has lately been demonstrated in the simulation of varied materials. Extending MPM for invertible elasticity requires accounting for several of its inherent limitations. MPM, as a meshless method, exhibits numerical fracture under large tensile deformations; we eliminate it by augmenting the particles with connected material domains. In addition, the constant redefinition of the interpolating functions between particles and grid introduces accumulated error, which behaves like artificial plasticity. We address this problem by utilizing the Lagrangian particle domains as enriched degrees of freedom for the simulation. The enrichment is applied dynamically during simulation via an error metric based on the local deformation of particles. Lastly, we reformulate the computation in the reference configuration and investigate inversion handling techniques to ensure the robustness of our method in the regime of degenerate configurations. The power and robustness of our method are demonstrated with various simulations that involve extreme deformations.