33-Issue 1
Browsing 33-Issue 1 by Issue Date
Now showing 1–20 of 24
Item: Time Line Cell Tracking for the Approximation of Lagrangian Coherent Structures with Subgrid Accuracy (The Eurographics Association and John Wiley and Sons Ltd., 2014). Kuhn, A.; Engelke, W.; Rössl, C.; Hadwiger, M.; Theisel, H. Editors: Holly Rushmeier and Oliver Deussen.
Lagrangian coherent structures (LCSs) have become a widespread and powerful method to describe dynamic motion patterns in time-dependent flow fields. The standard way to extract LCSs is to compute height ridges in the finite-time Lyapunov exponent (FTLE) field. In this work, we present an alternative method to approximate Lagrangian features for 2D unsteady flow fields that achieves subgrid accuracy without additional particle sampling. We obtain this by a geometric reconstruction of the flow map using additional material constraints for the available samples. In comparison to the standard method, this allows for a more accurate global approximation of LCSs on sparse grids and for long integration intervals. The proposed algorithm works directly on a set of given particle trajectories and without additional flow map derivatives. We demonstrate its application to a set of computational fluid dynamics examples, as well as to trajectories acquired by Lagrangian methods, and discuss its benefits and limitations.
The illustration shows four approximations of LCSs at different time steps, in subgrid accuracy, computed from a triangular grid containing 60 × 120 sample points for a heated cylinder simulation.

Item: On Perception of Semi‐Transparent Streamlines for Three‐Dimensional Flow Visualization (The Eurographics Association and John Wiley and Sons Ltd., 2014). Mishchenko, O.; Crawfis, R. Editors: Holly Rushmeier and Oliver Deussen.
One of the standard techniques to visualize three‐dimensional flow is to use geometry primitives. This solution, when opaque primitives are used, results in high levels of occlusion, especially with dense streamline seeding. Using semi‐transparent geometry primitives can alleviate the problem of occlusion. However, with semi‐transparency some parts of the data set become too vague and blurry, while others are still heavily occluded. We conducted a user study that provided us with results on the perceptual limits of using semi‐transparent geometry primitives for flow visualization. Texture models for semi‐transparent streamlines were introduced. Test subjects were shown multiple overlaid layers of streamlines and reported how many different flow directions they were able to perceive. The user study allowed us to identify a set of top-scoring textures. We discuss the results of the user study, provide guidelines on using semi‐transparency for three‐dimensional flow visualization and show how varying the textures of different streamlines can further enhance the perception of dense streamlines. We also discuss strategies for dealing with very high levels of occlusion: per‐pixel filtering of flow directions, where only some of the streamlines are rendered at a particular pixel, and opacity normalization, a way of altering the opacity of overlapping streamlines with the same direction. We illustrate our results with a variety of visualizations.

Item: Visualization of the Centre of Projection Geometrical Locus in a Single Image (The Eurographics Association and John Wiley and Sons Ltd., 2014). Stojaković, V.; Popov, S.; Tepavčević, B. Editors: Holly Rushmeier and Oliver Deussen.
Single view reconstruction (SVR) is an important approach for 3D shape recovery, since many non‐existing buildings and scenes are captured only in a single image. Historical photographs are often the most precise source for the virtual reconstruction of damaged cultural heritage.
In semi‐automated techniques, which are mainly used in practical situations, the user is the one who recognizes and selects the constraints to be used. Hence, the veridicality and the accuracy of the final model partially rely on human decisions. We noticed that users, especially non‐expert users such as cultural heritage professionals, usually do not fully understand the SVR process, which is why they have trouble making decisions while modelling. That often fundamentally affects the quality of the final 3D models. Considering the importance of human performance in SVR approaches, in this paper we offer a solution that can be used to reduce the number of user errors. Specifically, we address the problem of locating the centre of projection (CP). We introduce a tool set for 3D visualization of the CP's geometrical loci that provides the user with a clear idea of how the CP's location is determined. Thanks to this type of visualization, the user becomes aware of (1) which constraints are relevant for the CP location, (2) whether the image is suitable for SVR, (3) whether more constraints for the CP location are required, (4) which constraints should be used for the best match and (5) whether additional constraints will create a useful redundancy. In order to test our approach and the assumptions it relies on, we compared the number of user-made errors in the standard approaches with that in the approach in which the additional visualization is provided. The evaluation shows that the tool set provides an effective improvement.

Item: On Near Optimal Lattice Quantization of Multi‐Dimensional Data Points (The Eurographics Association and John Wiley and Sons Ltd., 2014). Finckh, M.; Dammertz, H.; Lensch, H. P. A. Editors: Holly Rushmeier and Oliver Deussen.
One of the most elementary applications of a lattice is the quantization of real‐valued s‐dimensional vectors into finite bit precision to make them representable by a digital computer. Most often, the simple s‐dimensional regular grid is used for this task, where each component of the vector is quantized individually. However, it is known that other lattices perform better with regard to the average quantization error. A rank‐1 lattice is a special type of lattice, where the lattice points can be described by a single s‐dimensional generator vector. Further, the number of points inside the unit cube [0, 1)^s is arbitrary and can be directly enumerated by a single one‐dimensional integer value. By choosing a suitable generator vector, the minimum distance between the lattice points can be maximized, which, as we show, leads to a nearly optimal mean quantization error. We present methods for finding parameters for s‐dimensional maximized minimum distance rank‐1 lattices and further show their practical use in computer graphics applications.
Item: Interactive Simulation of Rigid Body Dynamics in Computer Graphics (The Eurographics Association and John Wiley and Sons Ltd., 2014). Bender, Jan; Erleben, Kenny; Trinkle, Jeff. Editors: Holly Rushmeier and Oliver Deussen.
Interactive rigid body simulation is an important part of many modern computer tools, which no authoring tool nor game engine can do without. Such high-performance computer tools open up new possibilities for changing how designers, engineers, modelers and animators work with their design problems. This paper is a self-contained state-of-the-art report on the physics, the models, the numerical methods and the algorithms used in interactive rigid body simulation, all of which have evolved and matured over the past 20 years. Furthermore, the paper communicates the mathematical and theoretical details in a pedagogical manner. This paper is not only a stake in the sand on what has been done; it also seeks to give the reader deeper insights to help guide their future research.

Item: Low‐Cost Subpixel Rendering for Diverse Displays (The Eurographics Association and John Wiley and Sons Ltd., 2014). Engelhardt, Thomas; Schmidt, Thorsten‐Walther; Kautz, Jan; Dachsbacher, Carsten. Editors: Holly Rushmeier and Oliver Deussen.
Subpixel rendering increases the apparent display resolution by taking into account the subpixel structure of a given display. In essence, each subpixel is addressed individually, allowing the underlying signal to be sampled more densely. Unfortunately, naïve subpixel sampling introduces colour aliasing, as each subpixel only displays a specific colour (usually R, G and B subpixels are used). As previous work has shown, chromatic aliasing can be reduced significantly by taking the sensitivity of the human visual system into account. In this work, we find optimal filters for subpixel rendering for a diverse set of 1D and 2D subpixel layout patterns. We demonstrate that these optimal filters can be approximated well with analytical functions. We incorporate our filters into GPU‐based multi‐sample anti‐aliasing to yield subpixel rendering at a very low cost (1–2 ms filtering time at HD resolution). We also show that texture filtering can be adapted to perform efficient subpixel rendering. Finally, we analyse the findings of a user study we performed, which underpins the increased visual fidelity that can be achieved for diverse display layouts by using our optimal filters.

Item: Image Space Rendering of Point Clouds Using the HPR Operator (The Eurographics Association and John Wiley and Sons Ltd., 2014). Silva, R. Machado e; Esperança, C.; Marroquim, R.; Oliveira, A. A. F. Editors: Holly Rushmeier and Oliver Deussen.
The hidden point removal (HPR) operator introduced by Katz et al. [KTB07] provides an elegant solution to the problem of estimating the visibility of points in point samplings of surfaces. Since the method requires computing the three‐dimensional convex hull of a set with the same cardinality as the original cloud, it has been largely viewed as impractical for real‐time rendering of medium to large clouds.
In this paper, we examine how the HPR operator can be used more efficiently by combining several image space techniques, including an approximate convex hull algorithm, cloud sampling and GPU programming. Experiments show that this combination permits faster renderings without overly compromising the accuracy.

Item: Implicit Decals: Interactive Editing of Repetitive Patterns on Surfaces (The Eurographics Association and John Wiley and Sons Ltd., 2014). Groot, Erwin; Wyvill, Brian; Barthe, Loïc; Nasri, Ahmad; Lalonde, Paul. Editors: Holly Rushmeier and Oliver Deussen.
Texture mapping is an essential component for creating 3D models and is widely used in both the game and the movie industries. Creating texture maps has always been a complex task, and existing methods carefully balance flexibility with ease of use. One difficulty in using texturing is the repeated placement of individual textures over larger areas. In this paper, we propose a method which uses decals to place images onto a model. Our method allows the decals to compete for space and to deform as they are being pushed by other decals. A spherical field function is used to determine the position and the size of each decal and the deformation applied to fit the decals. The decals may span multiple objects with heterogeneous representations. Our method does not require an explicit parametrization of the model. As such, a variety of patterns, including repeated patterns like rocks, tiles and scales, can be mapped. We have implemented the method on the GPU, where the placement, size and orientation of thousands of decals are manipulated in real time.

Item: Multi‐Scale Kernels Using Random Walks (The Eurographics Association and John Wiley and Sons Ltd., 2014). Sinha, A.; Ramani, K. Editors: Holly Rushmeier and Oliver Deussen.
We introduce novel multi‐scale kernels using the random walk framework and derive corresponding embeddings and pairwise distances. The fractional moments of the rate of a continuous-time random walk (equivalently, the diffusion rate) are used to discover higher order kernels (or similarities) between pairs of points. The formulated kernels are isometry, scale and tessellation invariant, can be made globally or locally shape aware and are insensitive to partial objects and noise, based on the moment and influence parameters. In addition, the corresponding kernel distances and embeddings are convergent and efficiently computable. We introduce dual Green's mean signatures based on the kernels and discuss the applicability of the multi‐scale distance and embedding.
Collectively, we present a unified view of popular embeddings and distance metrics while recovering intuitive probabilistic interpretations on discrete surface meshes.

Item: A Survey of Volumetric Illumination Techniques for Interactive Volume Rendering (The Eurographics Association and John Wiley and Sons Ltd., 2014). Jönsson, Daniel; Sundén, Erik; Ynnerman, Anders; Ropinski, Timo. Editors: Holly Rushmeier and Oliver Deussen.
Interactive volume rendering in its standard formulation has become an increasingly important tool in many application domains. In recent years, several advanced volumetric illumination techniques to be used in interactive scenarios have been proposed. These techniques claim to have perceptual benefits as well as being capable of producing more realistic volume rendered images. Naturally, they cover a wide spectrum of illumination effects, including varying shading and scattering effects. In this survey, we review and classify the existing techniques for advanced volumetric illumination. The classification is conducted based on their technical realization, their performance behaviour and their perceptual capabilities. Based on the limitations revealed in this review, we define future challenges in the area of interactive advanced volumetric illumination.

Item: Visibility Silhouettes for Semi‐Analytic Spherical Integration (The Eurographics Association and John Wiley and Sons Ltd., 2014). Nowrouzezahrai, Derek; Baran, Ilya; Mitchell, Kenny; Jarosz, Wojciech. Editors: Holly Rushmeier and Oliver Deussen.
At each shade point, the spherical visibility function encodes occlusion from surrounding geometry in all directions. Computing this function is difficult, and point‐sampling approaches, such as ray tracing or hardware shadow mapping, are traditionally used to approximate it efficiently. We propose a semi‐analytic solution to the problem, where the spherical silhouette of the visibility is computed using a search over a 4D dual mesh of the scene. Once computed, we are able to semi‐analytically integrate visibility‐masked spherical functions along the visibility silhouette, instead of over the entire hemisphere. In this way, we avoid the artefacts that arise from using point‐sampling strategies to integrate visibility, a function with unbounded frequency content.
We demonstrate our approach on several applications, including direct illumination from realistic lighting and computation of pre‐computed radiance transfer data. Additionally, we present a new frequency‐space method for exactly computing all‐frequency shadows on diffuse surfaces. Our results match ground truth computed using importance‐sampled stratified Monte Carlo ray tracing, with comparable performance on scenes with low‐to‐moderate geometric complexity.

Item: Stackless Multi‐BVH Traversal for CPU, MIC and GPU Ray Tracing (The Eurographics Association and John Wiley and Sons Ltd., 2014). Áfra, Attila T.; Szirmay‐Kalos, László. Editors: Holly Rushmeier and Oliver Deussen.
Stackless traversal algorithms for ray tracing acceleration structures require significantly less storage per ray than ordinary stack‐based ones. This advantage is important for massively parallel rendering methods, where there are many rays in flight. On SIMD architectures, a commonly used acceleration structure is the multi bounding volume hierarchy (MBVH), which has multiple bounding boxes per node for improved parallelism. It scales to branching factors higher than two, for which, however, only stack‐based traversal methods have been proposed so far. In this paper, we introduce a novel stackless traversal algorithm for MBVHs with up to four‐way branching. Our approach replaces the stack with a small bitmask, supports dynamic ordered traversal and has a low computation overhead. We also present efficient implementation techniques for recent CPU, MIC (Intel Xeon Phi) and GPU (NVIDIA Kepler) architectures.

Item: Photons: Evolution of a Course in Data Structures (The Eurographics Association and John Wiley and Sons Ltd., 2014). Duchowski, A. T. Editors: Holly Rushmeier and Oliver Deussen.
This paper presents the evolution of a data structures and algorithms course based on a specific computer graphics problem, namely photon mapping, as the teaching medium. The paper reports the development of the course through several iterations and evaluations, dating back five years. The course originated as a problem-based graphics course requiring sophomore students to implement Hoppe et al.'s algorithm for surface reconstruction from unorganized points, found in their SIGGRAPH '92 paper of the same title. Although the solution to this problem lends itself well to an exploration of data structures and code modularization, both of which are traditionally taught in early computer science courses, the algorithm's complexity was reflected in students' overwhelmingly negative evaluations. Subsequently, because implementation of the kd-tree was seen as the linchpin data structure, it was again featured in the problem of ray tracing trees consisting of more than 250 000 000 triangles. Eventually, because the tree rendering was thought too specific a problem, the photon mapper was chosen as the semester-long problem considered to be a suitable replacement. This paper details the resultant course description and outline, from its now three semesters of teaching.
Item: Controlled Metamorphosis Between Skeleton‐Driven Animated Polyhedral Meshes of Arbitrary Topologies (The Eurographics Association and John Wiley and Sons Ltd., 2014). Kravtsov, Denis; Fryazinov, Oleg; Adzhiev, Valery; Pasko, Alexander; Comninos, Peter. Editors: Holly Rushmeier and Oliver Deussen.
Enabling animators to smoothly transform between animated meshes of differing topologies is a long‐standing problem in geometric modelling and computer animation. In this paper, we propose a new hybrid approach built upon the advantages of scalar field‐based models (often called implicit surfaces), which can easily change their topology by changing their defining scalar field. Given two meshes, animated by their rigging skeletons, we associate each mesh with its own approximating implicit surface. This implicit surface moves synchronously with the mesh. The shape‐metamorphosis process is performed in several steps: first, we collapse the two meshes to their corresponding approximating implicit surfaces; then we transform between the two implicit surfaces; and finally, we inversely transition from the resulting metamorphosed implicit surface to the target mesh. The examples presented in this paper demonstrating the results of the proposed technique were implemented using an in‐house plug‐in for Maya™.

Item: Projection Mapping on Arbitrary Cubic Cell Complexes (The Eurographics Association and John Wiley and Sons Ltd., 2014). Apaza‐Agüero, K.; Silva, L.; Bellon, O. R. P. Editors: Holly Rushmeier and Oliver Deussen.
This work presents a new representation used as a rendering primitive for surfaces. Our representation is defined by an arbitrary cubic cell complex: a projection‐based parameterization domain for surfaces where geometry and appearance information are stored as tile textures. This representation is used by our ray casting rendering algorithm, called projection mapping, which can be used for rendering geometry and appearance details of surfaces from arbitrary viewpoints. The projection mapping algorithm uses a fragment shader based on the linear and binary searches of the relief mapping algorithm. Instead of traditionally rendering the surface, only the front faces of our rendering primitive (our arbitrary cubic cell complex) are drawn, and the geometry and appearance details of the surface are rendered back by using projection mapping. Alternatively, another method is proposed for mapping appearance information on complex surfaces using our arbitrary cubic cell complexes. In this case, instead of reconstructing the geometry as in projection mapping, the original mesh of a surface is passed directly to the rendering algorithm. This algorithm is applied in the texture mapping of cultural heritage sculptures.
Item: Occluder Simplification Using Planar Sections (The Eurographics Association and John Wiley and Sons Ltd., 2014). Silvennoinen, Ari; Saransaari, Hannu; Laine, Samuli; Lehtinen, Jaakko. Editors: Holly Rushmeier and Oliver Deussen.
We present a method for extreme occluder simplification. We take a triangle soup as input and produce a small set of polygons with closely matching occlusion properties. In contrast to methods that optimize the original geometry, our algorithm has very few requirements for the input; specifically, the input does not need to be a watertight, two‐manifold mesh. This robustness is achieved by working on a well‐behaved, discretized representation of the input instead of the original, potentially badly structured geometry. We first formulate the algorithm for individual occluders, and further introduce a hierarchy for handling large, complex scenes.

Item: Boosting Techniques for Physics‐Based Vortex Detection (The Eurographics Association and John Wiley and Sons Ltd., 2014). Zhang, L.; Deng, Q.; Machiraju, R.; Rangarajan, A.; Thompson, D.; Walters, D. K.; Shen, H.‐W. Editors: Holly Rushmeier and Oliver Deussen.
Robust automated vortex detection algorithms are needed to facilitate the exploration of large‐scale turbulent fluid flow simulations. Unfortunately, robust non‐local vortex detection algorithms are computationally intractable for large data sets, and local algorithms, while computationally tractable, lack robustness. We argue that the deficiencies inherent to the local definitions occur because of two fundamental issues: the lack of a rigorous definition of a vortex and the fact that a vortex is an intrinsically non‐local phenomenon. As a first step towards addressing this problem, we demonstrate the use of machine learning techniques to enhance the robustness of local vortex detection algorithms. We motivate the presence of an expert‐in‐the‐loop using empirical results based on machine learning techniques. We employ adaptive boosting to combine a suite of widely used, local vortex detection algorithms, which we term weak classifiers, into a robust compound classifier. Fundamentally, the training phase of the algorithm, in which an expert manually labels small, spatially contiguous regions of the data, incorporates non‐local information into the resulting compound classifier. We demonstrate the efficacy of our approach by applying the compound classifier to two data sets obtained from computational fluid dynamics simulations. Our results demonstrate that the compound classifier has a reduced misclassification rate relative to the component classifiers.
Item: Appearance Stylization of Manhattan World Buildings (The Eurographics Association and John Wiley and Sons Ltd., 2014). Li, C.; Willis, P. J.; Brown, M. Editors: Holly Rushmeier and Oliver Deussen.
We propose a method that generates stylized building models from examples (Figure ). Our method only requires minimal user input to capture the appearance of a Manhattan world (MW) building, and can automatically retarget the captured 'look and feel' to new models. The key contribution is a novel representation, namely the 'style sheet', that is captured independently from a building's structure. It summarizes characteristic shape and texture patterns on the building. In the retargeting stage, a style sheet is used to decorate new buildings of potentially different structures. Consistent face groups are proposed to capture complex texture patterns from the example model and to preserve the patterns in the retargeted models. We demonstrate how to learn such style sheets from different MW buildings and show the results of using them to generate novel models.

Item: Editorial (The Eurographics Association and Blackwell Publishing Ltd., 2014). Holly Rushmeier and Oliver Deussen.

Item: Subdivision Surfaces with Creases and Truncated Multiple Knot Lines (The Eurographics Association and John Wiley and Sons Ltd., 2014). Kosinka, J.; Sabin, M. A.; Dodgson, N. A. Editors: Holly Rushmeier and Oliver Deussen.
We deal with subdivision schemes based on arbitrary-degree B‐splines. We focus on extraordinary knots, which exhibit various levels of complexity in terms of both the valency and the multiplicity of the knot lines emanating from such knots. The purpose of truncated multiple knot lines is to model creases which fair out. Our construction supports any degree and any knot line multiplicity, and provides a modelling framework familiar to users accustomed to B‐splines and NURBS systems.