Expressive + WICED 2025
Browsing Expressive + WICED 2025 by Subject "Animation"
Now showing 1 - 2 of 2
Item
Evaluating Temporal Coherence using a Watercolor Renderer (The Eurographics Association, 2025)
Morgan, Ingrid Ellen Carr; Billeter, Markus; Anjyo, Ken; Anjos, Rafael Kuffner dos; Berio, Daniel; Bruckert, Alexandre
Temporal coherence is a long-standing issue within Non-Photorealistic Rendering (NPR). The problem has been defined as a trade-off between three main factors: flatness, temporal continuity, and motion coherence. Approaches that improve temporal coherence are applied across different styles within diverse animation contexts. We have implemented a watercolour renderer that supports multiple temporal coherence approaches in a unified system to investigate this trade-off. The approaches are then evaluated against existing work, with consideration of how external factors, including animations and textures, may influence perceived incoherence.

Item
MVAE: Motion-conditioned Variational Auto-Encoder for tailoring character animations (The Eurographics Association, 2025)
Bordier, Jean-Baptiste; Christie, Marc; Catalano, Chiara Eva; Parakkat, Amal Dev
The design of character animations with enough diversity is a time-consuming task in many productions, such as video games or animated films, and drives the need for simpler and more effective authoring systems. This paper introduces a novel approach: a motion-conditioned variational auto-encoder (VAE) that uses virtual reality as a motion capture device. Our model generates diverse humanoid character animations based only on a gesture captured from two virtual reality controllers, allowing for precise control of motion characteristics such as rhythm, speed, and amplitude, and providing variability through noise sampling. From a dataset comprising paired controller-character motions, we design and train our VAE to (i) identify global motion characteristics from the movement, in order to discern the type of animation desired by the user, and (ii) identify local motion characteristics, including rhythm, velocity, and amplitude, to adapt the animation to these characteristics. Unlike many text-to-motion approaches, our method faces the challenge of interpreting high-dimensional, non-discrete user inputs. Our model maps these inputs into the higher-dimensional space of character animation while leveraging motion characteristics (such as height, speed, walking step frequency, and amplitude) to fine-tune the generated motion. We demonstrate the relevance of the approach on a number of examples and illustrate how changes in rhythm and amplitude of the input motions are transferred to coherent changes in the animated character, while offering a diversity of results using different noise samples.
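To make the conditioning scheme described in the MVAE abstract more concrete, below is a minimal, illustrative sketch of a motion-conditioned VAE in PyTorch. It is not the authors' implementation; the dimensions, module names, flattened input representation, and loss weighting are assumptions made purely for illustration. The encoder sees both the character motion and the controller gesture, the decoder regenerates motion conditioned on the gesture, and drawing different noise samples yields diverse animations for the same input.

```python
# Minimal, illustrative sketch of a motion-conditioned VAE (not the paper's code).
# Hypothetical assumptions: the controller gesture is a flattened window of
# two-controller poses (cond_dim), the character motion a flattened pose
# sequence (motion_dim); both are plain feature vectors per sample.
import torch
import torch.nn as nn

class MotionConditionedVAE(nn.Module):
    def __init__(self, motion_dim=512, cond_dim=128, latent_dim=32, hidden=256):
        super().__init__()
        # Encoder: sees the character motion together with the controller condition.
        self.encoder = nn.Sequential(
            nn.Linear(motion_dim + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.to_mu = nn.Linear(hidden, latent_dim)
        self.to_logvar = nn.Linear(hidden, latent_dim)
        # Decoder: reconstructs character motion from latent + condition, so the
        # controller gesture steers rhythm, speed, and amplitude at generation time.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, motion_dim),
        )

    def forward(self, motion, cond):
        h = self.encoder(torch.cat([motion, cond], dim=-1))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterisation trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        recon = self.decoder(torch.cat([z, cond], dim=-1))
        return recon, mu, logvar

    @torch.no_grad()
    def sample(self, cond, n=4):
        # Different noise samples give diverse animations for the same gesture.
        # cond is expected with shape (1, cond_dim).
        z = torch.randn(n, self.to_mu.out_features)
        return self.decoder(torch.cat([z, cond.expand(n, -1)], dim=-1))

# Standard VAE objective: reconstruction term plus a (weighted) KL divergence
# toward the unit Gaussian prior.
def vae_loss(recon, motion, mu, logvar, beta=1e-3):
    rec = nn.functional.mse_loss(recon, motion)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + beta * kl
```

In this sketch the "global" versus "local" motion characteristics the abstract mentions are not modelled separately; a faithful implementation would extract them as explicit conditioning features rather than relying on a single flattened gesture vector.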