35-Issue 7
Browsing 35-Issue 7 by Issue Date
Now showing 1 - 20 of 52
Item: Direct Shape Optimization for Strengthening 3D Printable Objects (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Authors: Zhou, Yahan; Kalogerakis, Evangelos; Wang, Rui; Grosse, Ian R.
Editors: Eitan Grinspun and Bernd Bickel and Yoshinori Dobashi
Recently there has been an increasing demand for software that can help designers create functional 3D objects with required physical strength. We introduce a generic and extensible method that directly optimizes a shape subject to physical and geometric constraints. Given an input shape, our method directly optimizes its input mesh representation until it can withstand specified external forces, while remaining similar to the original shape. Our method performs physics simulation and shape optimization together in a unified framework, where the physics simulator is an integral part of the optimizer. We employ geometric constraints to preserve surface details and shape symmetry, and adapt a second-order method with analytic gradients to improve convergence and computation time. Our method provides several advantages over previous work, including the ability to handle general shape deformations, preservation of surface details, and incorporation of user-defined constraints. We demonstrate the effectiveness of our method on a variety of printable 3D objects through detailed simulations as well as physical validations.

Item: Minimal Sampling for Effective Acquisition of Anisotropic BRDFs (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Authors: Vávra, Radomir; Filip, Jiri
Editors: Eitan Grinspun and Bernd Bickel and Yoshinori Dobashi
BRDFs are commonly used for material appearance representation in applications ranging from gaming and the movie industry to product design and specification. Most applications rely on isotropic BRDFs due to their better availability, a result of their easier acquisition process.
Anisotropic BRDFs, on the other hand, are more challenging to measure and process due to their structure-dependent anisotropic highlights. This paper therefore simplifies the measurement of anisotropic BRDFs by representing such a BRDF as a collection of isotropic BRDFs. Our method relies on a decomposition of an anisotropic BRDF database into training isotropic slices forming a linear basis, in which appropriate sparse samples are identified using numerical optimization. When an unknown anisotropic BRDF is measured, these samples are repeatedly captured in a small set of azimuthal directions. All collected samples are then used to reconstruct the entire measured BRDF from the linear isotropic basis. Typically, fewer than 100 samples are sufficient to capture the main visual features of complex anisotropic materials, and we provide a minimal set of directional samples to be regularly measured at each sample rotation. We conclude that even simple setups relying on five bidirectional samples (a maximum of five stationary sensors/lights) in combination with eight rotations (a rotation stage for the specimen) can yield a promising reconstruction of anisotropic behavior. Next, we outline an extension of the proposed approach to adaptive sampling of anisotropic BRDFs for even better performance. Finally, we show that our method allows using standard geometries, including industrial multi-angle reflectometers, for the fast measurement of anisotropic BRDFs.

Item: Spatial Matching of Animated Meshes (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Authors: Seo, Hyewon; Cordier, Frederic
Editors: Eitan Grinspun and Bernd Bickel and Yoshinori Dobashi
This paper presents a new technique that uses deformation and motion properties of animated meshes to find spatial correspondences between them.
Given a pair of animated meshes exhibiting a semantically similar motion, we compute a sparse set of feature points on each mesh and establish spatial correspondences among them so that points with similar motion behavior are matched. At the core of our technique is our new dynamic feature descriptor, named AnimHOG, which encodes local deformation characteristics. AnimHOG is obtained by computing the gradient of a scalar field inside the spatiotemporal neighborhood of a point of interest, where the scalar values are derived from the deformation characteristic associated with each vertex at each frame. The final matching is formulated as a discrete optimization problem that finds, for each feature point on the source mesh, the match that maximizes the descriptor similarity between the corresponding feature pairs, as well as the compatibility and consistency measured across pairs of correspondences. Consequently, reliable correspondences can be found even between meshes of very different shapes, as long as their motions are similar. We demonstrate the performance of our technique through the quality of the matching results obtained on a number of animated mesh pairs.

Item: Flow Curves: an Intuitive Interface for Coherent Scene Deformation (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Authors: Ciccone, Loïc; Guay, Martin; Sumner, Robert W.
Editors: Eitan Grinspun and Bernd Bickel and Yoshinori Dobashi
Effective composition in the visual arts relies on the principle of movement, where the viewer's eye is directed along subjective curves to a center of interest. We call these curves subjective because they may span the edges and/or center-lines of multiple objects, and may contain missing portions that are automatically filled in by our visual system. By carefully coordinating the shape of objects in a scene, skilled artists direct the viewer's attention via strong subjective curves.
While traditional 2D sketching is a natural fit for this task, current 3D tools are object-centric and do not accommodate coherent deformation of multiple shapes into smooth flows. We address this shortcoming with a new sketch-based interface called Flow Curves, which allows coordinating deformation across multiple objects. Core components of our method include an understanding of the principle of flow, algorithms to automatically identify subjective curve elements that may span multiple disconnected objects, and a deformation representation tailored to the view-dependent nature of scene movement. As demonstrated in our video, sketching flow curves requires significantly less time than traditional 3D editing workflows.

Item: Merged Multiresolution Hierarchies for Shadow Map Compression (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Authors: Scandolo, Leonardo; Bauszat, Pablo; Eisemann, Elmar
Editors: Eitan Grinspun and Bernd Bickel and Yoshinori Dobashi
Multiresolution Hierarchies (MH) and Directed Acyclic Graphs (DAG) are two recent approaches for the compression of high-resolution shadow information. In this paper, we introduce Merged Multiresolution Hierarchies (MMH), a novel data structure that unifies both concepts. An MMH leverages both the hierarchical homogeneity exploited by MHs and the topological similarities exploited by DAG representations. We propose an efficient hash-based technique to quickly identify and remove redundant subtree instances in a modified relative MH representation.
Our solution remains lossless and significantly improves the compression rate compared to both preceding shadow map compression algorithms, while retaining the full run-time performance of traditional MH representations.

Item: PG 2016: Frontmatter (Eurographics Association, 2016)
Editors: Eitan Grinspun; Bernd Bickel; Yoshinori Dobashi

Item: Anisotropic Superpixel Generation Based on Mahalanobis Distance (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Authors: Cai, Yiqi; Guo, Xiaohu
Editors: Eitan Grinspun and Bernd Bickel and Yoshinori Dobashi
Superpixels have been widely used as a preprocessing step in various computer vision tasks. Spatial compactness and color homogeneity are the two key factors determining the quality of a superpixel representation. In this paper, these two objectives are considered separately, and anisotropic superpixels are generated to better adapt to local image content. We develop a unimodular Gaussian generative model to guide the color homogeneity within a superpixel by learning local pixel color variations. It turns out that maximizing the log-likelihood of our generative model is equivalent to solving a Centroidal Voronoi Tessellation (CVT) problem. Moreover, we provide a theoretical guarantee that the CVT result is invariant to affine illumination change, which makes our anisotropic superpixel generation algorithm well suited for image/video analysis in varying illumination environments. The effectiveness of our method in image/video superpixel generation is demonstrated through comparison with other state-of-the-art methods.

Item: Re-Compositable Panoramic Selfie with Robust Multi-Frame Segmentation and Stitching (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Authors: Li, Kai; Wang, Jue; Liu, Yebin; Xu, Li; Dai, Qionghai
Editors: Eitan Grinspun and Bernd Bickel and Yoshinori Dobashi
It is a challenging task for ordinary users to capture selfies with a good scene composition, given the limited freedom to position the camera.
Creative hardware (e.g., selfie sticks) and software (e.g., panoramic selfie apps) solutions have been proposed to extend the background coverage of a selfie, but achieving a perfect composition on the spot, when the selfie is captured, remains difficult. In this paper, we propose a system that allows the user to shoot a selfie video by rotating the body first, then produce a final panoramic selfie image with user-guided scene composition as post-processing. Our key technical contribution is a fully automatic, robust multi-frame segmentation and stitching framework that is tailored to the special characteristics of selfie images. We analyze the sparse feature points and employ a spatial-temporal optimization for bilayer feature segmentation, which leads to more reliable background alignment than previous image stitching techniques. The sparse classification is then propagated to all pixels to create dense foreground masks for person-background composition. Finally, based on a user-selected foreground position, our system uses content-preserving warping to produce a panoramic selfie with minimal distortion to the face region. Experimental results show that our approach can reliably generate high-quality panoramic selfies, while a simple combination of previous image stitching and segmentation approaches often fails.

Item: An Interactive Design System of Free-Formed Bamboo-Copters (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Authors: Nakamura, Morihiro; Koyama, Yuki; Sakamoto, Daisuke; Igarashi, Takeo
Editors: Eitan Grinspun and Bernd Bickel and Yoshinori Dobashi
We present an interactive design system for free-formed bamboo-copters, with which novices can easily design free-formed, even asymmetric, bamboo-copters that successfully fly. The designed bamboo-copters can be fabricated using digital fabrication equipment, such as a laser cutter. Our system provides two useful functions for facilitating this design activity.
First, it visualizes a simulated flight trajectory of the current bamboo-copter design, updated in real time during the user's editing. Second, it provides an optimization function that automatically tweaks the current bamboo-copter design such that the spin quality (how stably it spins) and the flight quality (how high and long it flies) are enhanced. To enable these functions, we present non-trivial extensions of existing techniques for designing free-formed model airplanes [UKSI14], including a wing discretization method tailored to free-formed bamboo-copters and an optimization scheme that achieves stable bamboo-copters by considering both spin and flight qualities.

Item: An Error Estimation Framework for Many-Light Rendering (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Authors: Nabata, Kosuke; Iwasaki, Kei; Dobashi, Yoshinori; Nishita, Tomoyuki
Editors: Eitan Grinspun and Bernd Bickel and Yoshinori Dobashi
Many-light rendering, which converts complex global illumination computations into a simple sum of the illumination from virtual point lights (VPLs), has become increasingly popular for predictive rendering. A huge number of VPLs are usually required for predictive rendering, at the cost of extensive computation time. While previous methods achieve significant speedups by clustering VPLs, none of them can estimate the total error due to clustering. This drawback imposes on users a tedious trial-and-error process to obtain rendered images with reliable accuracy. In this paper, we propose an error estimation framework for many-light rendering. Our method transforms VPL clustering into stratified sampling combined with confidence intervals, which enables the user to estimate the error due to clustering without the costly computation required to sum the illumination from all the VPLs.
Our estimation framework is capable of handling arbitrary BRDFs and is accelerated by visibility caching, both of which make our method more practical. The experimental results demonstrate that our method estimates the error much more accurately than the previous clustering method.

Item: Efficient Multi-image Correspondences for On-line Light Field Video Processing (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Authors: Dąbała, Łukasz; Ziegler, Matthias; Didyk, Piotr; Zilly, Frederik; Keinert, Joachim; Myszkowski, Karol; Seidel, Hans-Peter; Rokita, Przemysław; Ritschel, Tobias
Editors: Eitan Grinspun and Bernd Bickel and Yoshinori Dobashi
Light field videos express the entire visual information of an animated scene, but their sheer size typically makes capture, processing and display an off-line process, i.e., the time between initial capture and final display is far from real-time. In this paper we propose a solution for one of the key bottlenecks in such a processing pipeline: reliable depth reconstruction, possibly for many views. This is enabled by a novel correspondence algorithm that converts the video streams from a sparse array of off-the-shelf cameras into an array of animated depth maps. The algorithm is based on a generalization of the classic multi-resolution Lucas-Kanade correspondence algorithm from a pair of images to an entire array. A special inter-image confidence consolidation allows recovery from unreliable matching in some locations and some views. It can be implemented efficiently in massively parallel hardware, allowing for interactive computation. The resulting depth quality as well as the computational performance compares favorably to other state-of-the-art light-field-to-depth approaches, as well as to stereo matching techniques.
Another outcome of this work is a data set of light field videos captured with multiple variants of sparse camera arrays.

Item: Visual Contrast Sensitivity and Discrimination for 3D Meshes and their Applications (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Authors: Nader, Georges; Wang, Kai; Hétroy-Wheeler, Franck; Dupont, Florent
Editors: Eitan Grinspun and Bernd Bickel and Yoshinori Dobashi
In this paper, we first introduce an algorithm for estimating the visual contrast on a 3D mesh. We then perform a series of psychophysical experiments to study the effects of contrast sensitivity and contrast discrimination of the human visual system for the task of differentiating between two contrasts on a 3D mesh. The results of these experiments allow us to propose a perceptual model that can predict whether a change in local contrast on a 3D mesh, induced by a local geometric distortion, is visible or not. Finally, we illustrate the utility of the proposed perceptual model in a number of applications: we compute the Just Noticeable Distortion (JND) profile for smooth-shaded 3D meshes and use the model to guide mesh processing algorithms.

Item: Piecewise-planar Reconstruction of Multi-room Interiors with Arbitrary Wall Arrangements (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Authors: Mura, Claudio; Mattausch, Oliver; Pajarola, Renato
Editors: Eitan Grinspun and Bernd Bickel and Yoshinori Dobashi
Reconstructing the as-built architectural shape of building interiors has emerged in recent years as an important and challenging research problem. An effective approach must be able to faithfully capture the architectural structures and separate permanent components from clutter (e.g. furniture), while at the same time dealing with defects in the input data. For many applications, higher-level information on the environment is also required, in particular the shape of individual rooms.
To solve this ill-posed problem, state-of-the-art methods assume constrained input environments with a 2.5D or, more restrictively, a Manhattan-world structure, which significantly restricts their applicability in real-world settings. We present a novel pipeline that reconstructs general 3D interior architectures, significantly increasing the range of real-world architectures that can be reconstructed and labeled by any interior reconstruction method to date. Our method finds candidate permanent components by reasoning on a graph-based scene representation, then uses them to build a 3D linear cell complex that is partitioned into separate rooms through a multi-label energy minimization formulation. We demonstrate the effectiveness of our method by applying it to a variety of real-world and synthetic datasets and by comparing it to more specialized state-of-the-art approaches.

Item: Real-time Texture Synthesis and Concurrent Random-access Rendering for Low-cost GPU Chip Design (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Authors: Zhang, Linling; Fenney, Simon; Escribano, Fernando
Editors: Eitan Grinspun and Bernd Bickel and Yoshinori Dobashi
Numerous algorithms have been researched in the area of texture synthesis. However, it remains difficult to design a low-cost synthesis scheme capable of generating high-quality results while simultaneously achieving real-time performance. Additional challenges include making a scheme parallel and being able to partially render/synthesize high-resolution textures. Furthermore, it would be beneficial for a synthesis scheme to incorporate texture compression and minimize bandwidth usage, especially on mobile devices. In this paper, we propose a practical method which has low computational complexity and produces textures with small storage requirements.
Through the use of an index table, random access to the texture is another essential advantage, which makes parallel rendering feasible, including the generation of mip-map sequences. By integrating the index table with existing compression algorithms, for example ETC or PVRTC, the bandwidth is further reduced and a separate, computationally expensive pass to compress the synthesized output is avoided. Notably, our texture synthesis achieves real-time performance and low power consumption even on mobile devices, for which texture synthesis has traditionally been considered too expensive.

Item: Trip Synopsis: 60km in 60sec (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Authors: Huang, Hui; Lischinski, Dani; Hao, Zhuming; Gong, Minglun; Christie, Marc; Cohen-Or, Daniel
Editors: Eitan Grinspun and Bernd Bickel and Yoshinori Dobashi
Computerized route planning tools are widely used today by travelers all around the globe, while 3D terrain and urban models are becoming increasingly elaborate and abundant. This makes it feasible to generate a virtual 3D flyby along a planned route. Such a flyby may be useful either as a preview of the trip or as an after-the-fact visual summary. However, a naively generated preview is likely to contain many boring portions, while skipping too quickly over areas worthy of attention. In this paper, we introduce the 3D trip synopsis: a continuous visual summary of a trip that attempts to maximize the total amount of visual interest seen by the camera. The main challenge is to generate a synopsis of a prescribed short duration, while ensuring a visually smooth camera motion. Using an application-specific visual interest metric, we measure the visual interest at a set of viewpoints along an initial camera path, and maximize the amount of visual interest seen in the synopsis by varying the speed along the route.
A new camera path is then computed using optimization to simultaneously satisfy requirements such as smoothness, focus and distance to the route. The process is repeated until convergence. The main technical contribution of this work is a new camera control method, which iteratively adjusts the camera trajectory and determines all of the camera trajectory parameters, including the camera position, altitude, heading, and tilt. Our results demonstrate the effectiveness of our trip synopses compared to a number of alternatives.

Item: Foveated Real-Time Ray Tracing for Head-Mounted Displays (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Authors: Weier, Martin; Roth, Thorsten; Kruijff, Ernst; Hinkenjann, André; Pérard-Gayot, Arsène; Slusallek, Philipp; Li, Yongmin
Editors: Eitan Grinspun and Bernd Bickel and Yoshinori Dobashi
Head-mounted displays with dense pixel arrays used for virtual reality applications require high frame rates and low-latency rendering. This forms a challenging use case for any rendering approach. In addition to its ability to generate realistic images, ray tracing offers a number of distinct advantages, but has been held back mainly by its performance. In this paper, we present an approach that significantly improves the image generation performance of ray tracing. This is done by combining foveated rendering based on eye tracking with reprojection rendering using previous frames, in order to drastically reduce the number of new image samples per frame. To reproject samples, a coarse geometry is reconstructed from a G-Buffer. Possible errors introduced by this reprojection, as well as parts that are critical to perception, are scheduled for resampling. Additionally, a coarse color buffer is used to provide an initial image, refined smoothly by more samples where needed. Evaluations and user tests show that our method achieves real-time frame rates, while visual differences compared to fully rendered images are hardly perceivable.
As a result, we can ray trace non-trivial static scenes for the Oculus DK2 HMD at 1182x1464 per eye within the VSync limits without perceived visual differences.

Item: Non-Local Sparse and Low-Rank Regularization for Structure-Preserving Image Smoothing (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Authors: Zhu, Lei; Fu, Chi-Wing; Jin, Yueming; Wei, Mingqiang; Qin, Jing; Heng, Pheng-Ann
Editors: Eitan Grinspun and Bernd Bickel and Yoshinori Dobashi
This paper presents a new image smoothing method that better preserves prominent structures. Our method is inspired by recent non-local image processing techniques based on patch grouping and filtering. Overall, it makes three major contributions over previous works. First, we employ the diffusion map as the guidance image to improve the accuracy of patch similarity estimation using the region covariance descriptor. Second, we model structure-preserving image smoothing as a low-rank matrix recovery problem, aiming to effectively filter the texture information in similar patches. Lastly, we devise an objective function, namely the weighted robust principal component analysis (WRPCA), by regularizing the low rank with the weighted nuclear norm and sparsity pursuit with the L1 norm, and solve this non-convex WRPCA optimization problem by adopting the alternating direction method of multipliers (ADMM) technique. We evaluate our method on a wide variety of images and compare it against several state-of-the-art methods. The results show that our method achieves better structure preservation and texture suppression than other methods.
We also show the applicability of our method to several image processing tasks, such as edge detection, texture enhancement and seam carving.

Item: Skeleton-driven Adaptive Hexahedral Meshing of Tubular Shapes (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Authors: Livesu, Marco; Muntoni, Alessandro; Puppo, Enrico; Scateni, Riccardo
Editors: Eitan Grinspun and Bernd Bickel and Yoshinori Dobashi
We propose a novel method for the automatic generation of structured hexahedral meshes of articulated 3D shapes. We recast the complex problem of generating the connectivity of a hexahedral mesh of a general shape into the simpler problem of generating the connectivity of a tubular structure derived from its curve-skeleton. We also provide volumetric subdivision schemes to nicely adapt the topology of the mesh to the local thickness of tubes, while regularizing per-element size. Our method is fast, one-click, and easy to reproduce, and it generates structured meshes that better align with the branching structure of the input shape compared to previous methods for hexahedral mesh generation.

Item: Retargeting 3D Objects and Scenes with a General Framework (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Authors: Huang, Chun-Kai; Chen, Yi-Ling; Shen, I-Chao; Chen, Bing-Yu
Editors: Eitan Grinspun and Bernd Bickel and Yoshinori Dobashi
In this paper, we introduce an interactive method suitable for retargeting both 3D objects and scenes. Initially, the input object or scene is decomposed into a collection of constituent components enclosed by corresponding control bounding volumes, which capture the intra-structures of the object or the semantic grouping of objects in the 3D scene. The overall retargeting is accomplished through a constrained optimization by manipulating the control bounding volumes.
Without inferring the intricate dependencies between the components, we define a minimal set of constraints that maintain the spatial arrangement and connectivity between the components to regularize the valid retargeting results. The default retargeting behavior can then be easily altered by additional semantic constraints imposed by users. This strategy makes the proposed method flexible enough to process a wide variety of 3D objects and scenes under a unified framework. In addition, the proposed method achieves more general structure-preserving pattern synthesis at both the object and scene levels. We demonstrate the effectiveness of our method by applying it to several complicated 3D objects and scenes.

Item: An Efficient Structure-Aware Bilateral Texture Filtering for Image Smoothing (The Eurographics Association and John Wiley & Sons Ltd., 2016)
Authors: Lin, Ting-Hao; Way, Der-Lor; Shih, Zen-Chung; Tai, Wen-Kai; Chang, Chin-Chen
Editors: Eitan Grinspun and Bernd Bickel and Yoshinori Dobashi
Photos contain well-structured and plentiful visual information. Edges are active and expressive stimuli for human visual perception. However, it is hard to separate structure from details, because edge strength and object scale are entirely different concepts. This paper proposes a structure-aware bilateral texture algorithm that removes texture patterns while preserving structures. Our proposed method is simple, fast, and effective in removing textures. Instead of patch shift, smaller patches represent pixels located at structure edges, while original patches represent texture regions. This paper also improves the joint bilateral filter to preserve small structures. Moreover, a windowed inherent variation is adapted to distinguish textures from structures when detecting structure edges. The proposed method produces excellent experimental results, which compare favorably with those of previous studies.
Structure-preserving filtering is a critical operation in many image processing applications, and our proposed filter is also demonstrated in several attractive applications, such as seam carving, detail enhancement and artistic rendering.
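The minimal-sampling BRDF paper above reconstructs a full anisotropic measurement from a handful of samples expressed in a linear basis of training isotropic slices. As a rough, hypothetical numpy sketch of that reconstruction step only (the function name and the toy sine/cosine basis are illustrative, not from the paper):

```python
import numpy as np

def reconstruct_from_sparse(basis, sample_idx, sample_vals):
    """Fit coefficients of a linear basis to a few measured samples,
    then evaluate the full signal everywhere.

    basis: (n, b) matrix whose columns play the role of training slices.
    sample_idx: indices at which the unknown signal was measured.
    sample_vals: measured values at those indices.
    """
    A = basis[sample_idx, :]                       # reduced (k, b) system
    coeffs, *_ = np.linalg.lstsq(A, sample_vals, rcond=None)
    return basis @ coeffs                          # full reconstruction

# Toy example: a signal lying in the span of a 2-column basis is
# recovered exactly from 5 sparse samples.
x = np.linspace(0.0, 2.0 * np.pi, 64)
basis = np.stack([np.sin(x), np.cos(x)], axis=1)
target = 2.0 * np.sin(x) + 3.0 * np.cos(x)
idx = np.array([0, 10, 20, 30, 40])
recon = reconstruct_from_sparse(basis, idx, target[idx])
```

When the unknown signal lies close to the span of the training basis, a small, well-chosen sample set suffices, which is the intuition behind measuring under 100 directional samples.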
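The error estimation framework for many-light rendering above treats VPL clusters as strata and attaches confidence intervals to a stratified-sampling estimate of the total illumination. A generic, hypothetical sketch of that statistical idea (not the paper's renderer; each stratum is just a 1-D array of per-VPL contributions):

```python
import numpy as np

def stratified_estimate(strata, samples_per_stratum=4, z=1.96, rng=None):
    """Estimate the sum over all values without summing them all.

    Draws a few samples per stratum, scales each stratum mean by the
    stratum size, and aggregates per-stratum variances into a ~95%
    confidence half-width (z = 1.96) for the total.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    total, var = 0.0, 0.0
    for values in strata:
        n = len(values)
        k = min(samples_per_stratum, n)
        picks = rng.choice(values, size=k, replace=False)
        total += n * picks.mean()
        if 1 < k < n:
            # variance of the stratum-total estimator,
            # with finite-population correction
            var += (n ** 2) * picks.var(ddof=1) / k * (1.0 - k / n)
    return total, z * np.sqrt(var)
```

Homogeneous strata (good clusters) yield a tight interval, so the caller can stop refining clusters as soon as the half-width falls below a tolerance, which mirrors how an error bound removes trial-and-error from choosing the cluster count.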
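Several of the smoothing papers above build on the joint bilateral filter, where range weights come from a guidance image so that strong guide edges survive smoothing. A minimal illustrative sketch for grayscale numpy images, assuming Gaussian spatial and range kernels (not any paper's actual implementation):

```python
import numpy as np

def joint_bilateral_filter(image, guide, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Smooth `image` with range weights taken from `guide`.

    Each pixel is averaged with neighbours that are both spatially close
    and similar in the guide image, so guide edges are preserved.
    """
    h, w = image.shape
    pad = np.pad(image, radius, mode='edge')
    gpad = np.pad(guide, radius, mode='edge')
    out = np.zeros((h, w), dtype=float)
    weight_sum = np.zeros((h, w), dtype=float)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = pad[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
            gshift = gpad[radius + dy:radius + dy + h, radius + dx:radius + dx + w]
            w_spatial = np.exp(-(dx * dx + dy * dy) / (2.0 * sigma_s ** 2))
            w_range = np.exp(-((gshift - guide) ** 2) / (2.0 * sigma_r ** 2))
            wgt = w_spatial * w_range
            out += wgt * shifted
            weight_sum += wgt
    return out / weight_sum
```

Setting `guide = image` recovers the plain bilateral filter; texture-filtering variants instead build the guide so that texture variation is suppressed while structure edges remain, which is what drives the structure/texture separation discussed above.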