SCA 17: Eurographics/SIGGRAPH Symposium on Computer Animation
Browsing SCA 17: Eurographics/SIGGRAPH Symposium on Computer Animation by Subject "Animation"
3 items
Item: Designing Cable-Driven Actuation Networks for Kinematic Chains and Trees (ACM, 2017)
Authors: Megaro, Vittorio; Knoop, Espen; Spielberg, Andrew; Levin, David I.W.; Matusik, Wojciech; Gross, Markus; Thomaszewski, Bernhard; Bächer, Moritz
Editors: Bernhard Thomaszewski, KangKang Yin, and Rahul Narain
Abstract: In this paper we present an optimization-based approach for the design of cable-driven kinematic chains and trees. Our system takes as input a hierarchical assembly consisting of rigid links jointed together with hinges. The user also specifies a set of target poses or keyframes using inverse kinematics. Our approach places torsional springs at the joints and computes a cable network that allows us to reproduce the specified target poses. We start with a large set of cables with randomly chosen routing points and gradually remove the redundancy. We then refine the routing points, taking into account the paths between poses or keyframes, to further reduce the number of cables and minimize the required control forces. We propose a reduced-coordinate formulation that links control forces to joint angles and routing points, enabling the co-optimization of a cable network together with the required actuation forces. We demonstrate the efficacy of our technique by designing and fabricating a cable-driven animated character, an animatronic hand, and a specialized gripper. (A schematic sketch of the reduced-coordinate force mapping appears after this listing.)

Item: Inequality Cloth (ACM, 2017)
Authors: Jin, Ning; Lu, Wenlong; Geng, Zhenglin; Fedkiw, Ronald P.
Editors: Bernhard Thomaszewski, KangKang Yin, and Rahul Narain
Abstract: As has been noted and discussed by various authors, numerical simulations of deformable bodies often suffer from so-called "locking" artifacts. We illustrate that the "locking" of out-of-plane bending motion that results from even an edge-spring-only cloth simulation can be quite severe, noting that the typical remedy of softening the elastic model leads to an unwanted rubbery look. We demonstrate that this "locking" is due to the well-accepted notion that edge springs in the cloth mesh should preserve their lengths, and instead propose an inequality constraint that stops edges from stretching while allowing for edge compression as a surrogate for bending. Notably, this also allows for the capturing of bending modes at scales smaller than those which could typically be represented by the mesh. Various authors have recently begun to explore optimization frameworks for deformable body simulation, which is particularly germane to our inequality cloth framework. After exploring such approaches, we choose a particular approach and illustrate its feasibility in a number of scenarios including contact, collision, and self-collision. Our results demonstrate the efficacy of the inequality approach when it comes to folding, bending, and wrinkling, especially on coarser meshes, thus opening up a plethora of interesting possibilities. (A schematic sketch of the edge-inequality projection appears after this listing.)

Item: Production-Level Facial Performance Capture Using Deep Convolutional Neural Networks (ACM, 2017)
Authors: Laine, Samuli; Karras, Tero; Aila, Timo; Herva, Antti; Saito, Shunsuke; Yu, Ronald; Li, Hao; Lehtinen, Jaakko
Editors: Bernhard Thomaszewski, KangKang Yin, and Rahul Narain
Abstract: We present a real-time deep learning framework for video-based facial performance capture: the dense 3D tracking of an actor's face given a monocular video. Our pipeline begins with accurately capturing a subject using a high-end production facial capture pipeline based on multi-view stereo tracking and artist-enhanced animations. With 5-10 minutes of captured footage, we train a convolutional neural network to produce high-quality output, including self-occluded regions, from a monocular video sequence of that subject. Since this 3D facial performance capture is fully automated, our system can drastically reduce the amount of labor involved in the development of modern narrative-driven video games or films involving realistic digital doubles of actors and potentially hours of animated dialogue per character. We compare our results with several state-of-the-art monocular real-time facial capture techniques and demonstrate compelling animation inference in challenging areas such as eyes and lips. (A schematic sketch of such a regressor appears after this listing.)
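The cable-network paper's reduced-coordinate formulation couples cable tensions to joint torques through the derivative of cable length with respect to joint angles (tau = -(dL/dtheta)^T f for a tensioned cable). The sketch below is a minimal, hypothetical illustration of that mapping on a planar two-link chain; the link lengths, routing points, and single cable are invented placeholders, not the paper's setup or code.

```python
# Schematic sketch (not the paper's implementation): mapping one
# cable's tension to joint torques on a hypothetical planar two-link
# chain via the gradient of cable length w.r.t. joint angles.
import numpy as np

LINK_LENGTHS = [1.0, 1.0]  # hypothetical link lengths

def routing_points_world(thetas):
    """World positions of hypothetical routing points, one per link,
    placed mid-link with a small sideways offset, via forward kinematics."""
    pts, origin, angle = [], np.zeros(2), 0.0
    for L, th in zip(LINK_LENGTHS, thetas):
        angle += th
        direction = np.array([np.cos(angle), np.sin(angle)])
        normal = np.array([-np.sin(angle), np.cos(angle)])
        pts.append(origin + 0.5 * L * direction + 0.1 * normal)
        origin = origin + L * direction
    return pts

def cable_length(thetas):
    """Total length of a cable threaded from a base anchor through
    the routing points."""
    pts = [np.array([0.0, 0.1])] + routing_points_world(thetas)
    return sum(np.linalg.norm(b - a) for a, b in zip(pts, pts[1:]))

def joint_torques(thetas, tension, eps=1e-6):
    """tau = -(dL/dtheta)^T * f, with the Jacobian of cable length
    approximated by central finite differences."""
    grad = np.zeros(len(thetas))
    for i in range(len(thetas)):
        tp, tm = np.array(thetas, float), np.array(thetas, float)
        tp[i] += eps
        tm[i] -= eps
        grad[i] = (cable_length(tp) - cable_length(tm)) / (2 * eps)
    return -grad * tension

print(joint_torques([0.3, -0.2], tension=5.0))
```

Because the torque is the tension-weighted negative gradient of cable length, moving the routing points reshapes those gradients; roughly speaking, that is the lever a co-optimization of routing points and actuation forces can exploit.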
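The inequality-cloth abstract replaces the usual equality edge-length constraint with a one-sided one: edges may compress freely (standing in for unresolved bending) but may not stretch. The paper itself works in an optimization framework; the sketch below instead illustrates the same one-sided constraint in a simple position-based projection loop, with all names and parameters invented for illustration.

```python
# Schematic sketch (not the paper's solver): a position-based
# projection that enforces edge length as an *inequality*.
# Stretched edges are pulled back to rest length; compressed
# edges are left alone, acting as a surrogate for bending.
import numpy as np

def project_inequality_edges(x, edges, rest_lengths, iterations=10):
    """x: (n, 3) float vertex positions; edges: list of (i, j) index
    pairs; rest_lengths: rest length per edge. Mutates and returns x."""
    for _ in range(iterations):
        for (i, j), L0 in zip(edges, rest_lengths):
            d = x[j] - x[i]
            L = np.linalg.norm(d)
            if L <= L0 or L == 0.0:   # inequality: compression is allowed
                continue
            corr = 0.5 * (L - L0) / L * d  # split correction between endpoints
            x[i] += corr
            x[j] -= corr
    return x

# Example: edge (0,1) is stretched and gets corrected; edge (1,2)
# is compressed and is deliberately left untouched.
x = np.array([[0.0, 0, 0], [1.5, 0, 0], [1.8, 0, 0]])
print(project_inequality_edges(x, [(0, 1), (1, 2)], [1.0, 0.5]))
```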
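The facial-capture paper trains a convolutional network on a few minutes of footage tracked by a multi-view stereo pipeline, then regresses dense 3D face geometry from single monocular frames. The PyTorch sketch below shows one plausible shape for such a regressor; the layer sizes, input resolution, vertex count, and loss are hypothetical stand-ins, not the published architecture.

```python
# Schematic sketch (not the paper's network): a convolutional
# regressor mapping a grayscale monocular face crop to dense
# per-vertex 3D positions, supervised by tracked meshes.
import torch
import torch.nn as nn

class FaceRegressor(nn.Module):
    def __init__(self, n_vertices=5000):  # hypothetical mesh size
        super().__init__()
        self.n_vertices = n_vertices
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),   # 128 -> 64
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # 64 -> 32
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(), # 32 -> 16
            nn.AdaptiveAvgPool2d(4),                                # -> 4x4
        )
        self.head = nn.Linear(128 * 4 * 4, 3 * n_vertices)  # x, y, z per vertex

    def forward(self, frame):  # frame: (B, 1, 128, 128) grayscale crop
        z = self.features(frame).flatten(1)
        return self.head(z).view(-1, self.n_vertices, 3)

model = FaceRegressor()
pred = model(torch.randn(2, 1, 128, 128))
print(pred.shape)  # torch.Size([2, 5000, 3])
# Training would minimize, e.g., a per-vertex L2 loss against the
# multi-view-stereo tracked meshes for the same frames:
# loss = ((pred - tracked_vertices) ** 2).mean()
```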