Browsing by Author "Holden, Daniel"
Now showing 1 - 2 of 2
Item
Subspace Neural Physics: Fast Data-Driven Interactive Simulation (ACM, 2019)
Holden, Daniel; Duong, Bang Chi; Datta, Sayantan; Nowrouzezahrai, Derek; Batty, Christopher and Huang, Jin
Data-driven methods for physical simulation are an attractive option for interactive applications due to their ability to trade precomputation and memory footprint in exchange for improved runtime performance. Yet, existing data-driven methods fall short of the extreme memory and performance constraints imposed by modern interactive applications like AAA games and virtual reality. Here, performance budgets for physics simulation range from tens to hundreds of microseconds per frame, per object. We present a data-driven physical simulation method that meets these constraints. Our method combines subspace simulation techniques with machine learning which, when coupled, enables a very efficient subspace-only physics simulation that supports interactions with external objects - a longstanding challenge for existing subspace techniques. We also present an interpretation of our method as a special case of subspace Verlet integration, where we apply machine learning to efficiently approximate the physical forces of the system directly in the subspace. We propose several practical solutions required to make effective use of such a model, including a novel training methodology required for prediction stability, and a GPU-friendly subspace decompression algorithm to accelerate rendering.

Item
ZeroEGGS: Zero‐shot Example‐based Gesture Generation from Speech (Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2023)
Ghorbani, Saeed; Ferstl, Ylva; Holden, Daniel; Troje, Nikolaus F.; Carbonneau, Marc‐André; Hauser, Helwig and Alliez, Pierre
We present ZeroEGGS, a neural network framework for speech‐driven gesture generation with zero‐shot style control by example. This means style can be controlled via only a short example motion clip, even for motion styles unseen during training. Our model uses a variational framework to learn a style embedding, making it easy to modify style through latent space manipulation or blending and scaling of style embeddings. The probabilistic nature of our framework further enables the generation of a variety of outputs given the same input, addressing the stochastic nature of gesture motion. In a series of experiments, we first demonstrate the flexibility and generalizability of our model to new speakers and styles. In a user study, we then show that our model outperforms previous state‐of‐the‐art techniques in naturalness of motion, appropriateness for speech, and style portrayal. Finally, we release a high‐quality dataset of full‐body gesture motion including fingers, with speech, spanning 19 different styles. Our code and data are publicly available.
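
As a rough illustration of the subspace Verlet view described in the first item above (Subspace Neural Physics), the following Python sketch shows a reduced-coordinate Verlet step driven by a learned force approximator. This is an assumption-based sketch, not the authors' implementation; names such as force_net and subspace_dim are illustrative placeholders.

import numpy as np

def subspace_verlet_step(z_curr, z_prev, force_net, dt=1.0 / 60.0):
    """Advance reduced (subspace) coordinates one step with Verlet integration.

    z_curr, z_prev : subspace coordinates at times t and t - dt
    force_net      : callable mapping z -> approximate subspace forces per unit mass
                     (in the paper's setting, a model trained offline)
    """
    accel = force_net(z_curr)                       # learned force approximation
    z_next = 2.0 * z_curr - z_prev + dt * dt * accel
    return z_next

# Example usage with a stand-in "network" (a linear spring acting in the subspace).
subspace_dim = 32                                   # hypothetical reduced dimension
force_net = lambda z: -10.0 * z                     # placeholder for a trained model
z_prev = np.zeros(subspace_dim)
z_curr = 0.01 * np.random.randn(subspace_dim)
for _ in range(10):
    z_curr, z_prev = subspace_verlet_step(z_curr, z_prev, force_net), z_curr

Because the state never leaves the reduced space, each step costs only one evaluation of the learned force model, which is the property the abstract highlights for hitting microsecond-scale budgets.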
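
Similarly, for the second item (ZeroEGGS), the abstract mentions modifying style by blending and scaling style embeddings learned by a variational encoder. The minimal Python sketch below shows what such latent-space manipulation could look like; it assumes hypothetical embedding vectors and is not the released ZeroEGGS code.

import numpy as np

def blend_styles(embeddings, weights):
    """Convex blend of style embedding vectors (weights are normalised to sum to 1)."""
    weights = np.asarray(weights, dtype=np.float64)
    weights = weights / weights.sum()
    return np.einsum("i,id->d", weights, np.stack(embeddings))

def scale_style(embedding, alpha):
    """Scale a style embedding to exaggerate (alpha > 1) or soften (alpha < 1) it."""
    return alpha * np.asarray(embedding)

# Example: blend two example-clip embeddings 70/30, then exaggerate the result
# slightly before conditioning the gesture decoder. The 64-dimensional vectors
# stand in for outputs of a style encoder applied to short motion clips.
emb_happy = np.random.randn(64)
emb_tired = np.random.randn(64)
style = scale_style(blend_styles([emb_happy, emb_tired], [0.7, 0.3]), 1.2)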