Volume 43 (2024)
Browsing Volume 43 (2024) by Subject "animation"
Now showing 1 - 2 of 2
Item
Infinite 3D Landmarks: Improving Continuous 2D Facial Landmark Detection
(© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024)
Chandran, P.; Zoss, G.; Gotardo, P.; Bradley, D.; Alliez, Pierre; Wimmer, Michael
In this paper, we examine three important issues in the practical use of state‐of‐the‐art facial landmark detectors and show how a combination of specific architectural modifications can directly improve their accuracy and temporal stability. First, many facial landmark detectors require a face normalization step as a pre‐process, often accomplished by a separately trained neural network that crops and resizes the face in the input image. There is no guarantee that this pre‐trained network performs optimal face normalization for the task of landmark detection. Thus, we instead analyse the use of a spatial transformer network that is trained alongside the landmark detector in an unsupervised manner, so that a single neural network jointly learns optimal face normalization and landmark detection. Second, we show that modifying the output head of the landmark predictor to infer landmarks in a canonical 3D space rather than directly in 2D can further improve accuracy. To convert the predicted 3D landmarks into screen space, we additionally predict the camera intrinsics and head pose from the input image. As a side benefit, this allows the 3D face shape to be predicted from a given image using only 2D landmarks as supervision, which is useful, among other things, for determining landmark visibility. Third, when training a landmark detector on multiple datasets at the same time, annotation inconsistencies across datasets force the network to produce a sub‐optimal average. We propose to add a semantic correction network to address this issue. This additional lightweight neural network is trained alongside the landmark detector without requiring any additional supervision. While the insights of this paper can be applied to most common landmark detectors, we specifically target a recently proposed continuous 2D landmark detector to demonstrate how each of our additions leads to meaningful improvements over the state of the art on standard benchmarks.
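A minimal sketch of the second idea above (predicting landmarks in a canonical 3D space along with camera intrinsics and head pose, then projecting them to 2D screen space) is given below. The module layout, tensor shapes and pinhole projection are assumptions for illustration only and are not the authors' implementation.

```python
# Minimal sketch (assumption, not the paper's code): a landmark head that predicts
# canonical 3D landmarks plus head pose and a camera focal length from an image
# feature vector, then projects the landmarks into 2D screen space.
import torch
import torch.nn as nn


def axis_angle_to_matrix(r):
    """Rodrigues' formula: (B, 3) axis-angle vectors -> (B, 3, 3) rotation matrices."""
    theta = r.norm(dim=-1, keepdim=True).clamp(min=1e-8)
    k = r / theta
    K = torch.zeros(r.shape[0], 3, 3, device=r.device)
    K[:, 0, 1], K[:, 0, 2] = -k[:, 2], k[:, 1]
    K[:, 1, 0], K[:, 1, 2] = k[:, 2], -k[:, 0]
    K[:, 2, 0], K[:, 2, 1] = -k[:, 1], k[:, 0]
    theta = theta.unsqueeze(-1)
    eye = torch.eye(3, device=r.device).expand_as(K)
    return eye + torch.sin(theta) * K + (1 - torch.cos(theta)) * (K @ K)


class Canonical3DLandmarkHead(nn.Module):
    def __init__(self, feat_dim=512, num_landmarks=68):
        super().__init__()
        self.num_landmarks = num_landmarks
        # One linear head per quantity; a real detector would share a deeper trunk.
        self.landmarks3d = nn.Linear(feat_dim, num_landmarks * 3)  # canonical 3D space
        self.pose = nn.Linear(feat_dim, 6)                         # axis-angle + translation
        self.focal = nn.Linear(feat_dim, 1)                        # log focal length

    def forward(self, feat, image_size=256):
        B = feat.shape[0]
        X = self.landmarks3d(feat).view(B, self.num_landmarks, 3)  # (B, L, 3)
        pose = self.pose(feat)
        R = axis_angle_to_matrix(pose[:, :3])                      # head rotation
        t = pose[:, 3:].unsqueeze(1)                               # head translation
        t = t + torch.tensor([0.0, 0.0, 5.0], device=feat.device)  # keep points in front of camera
        f = self.focal(feat).exp()                                 # positive focal length
        # Rigidly pose the canonical landmarks, then apply a pinhole projection.
        Xc = X @ R.transpose(1, 2) + t                             # camera-space points
        xy = Xc[..., :2] / Xc[..., 2:3].clamp(min=1e-4)
        uv = f.unsqueeze(1) * xy * image_size / 2 + image_size / 2 # screen-space landmarks
        return uv, X, R, t


# Example: project landmarks for a batch of two random feature vectors.
head = Canonical3DLandmarkHead()
uv, X3d, R, t = head(torch.randn(2, 512))
print(uv.shape)  # torch.Size([2, 68, 2])
```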
Item
Simplified Physical Model‐based Balance‐preserving Motion Re‐targeting for Physical Simulation
(© 2024 Eurographics ‐ The European Association for Computer Graphics and John Wiley & Sons Ltd., 2024)
Hwang, Jaepyung; Ishii, Shin; Alliez, Pierre; Wimmer, Michael
In this study, we propose a novel motion re‐targeting framework that produces natural motions for a target robot character model resembling given source motions from a character with a different skeletal structure. A natural target motion must satisfy kinematic constraints so that it resembles the source motion, even though the kinematic structures of the source and target character models differ. At the same time, the target motion should maintain physically plausible features such as keeping the target character model balanced. To address this, we utilize a simple physics model (an inverted‐pendulum‐on‐a‐cart model) during the motion re‐targeting process. By interpreting the source motion's balancing property via the pendulum model, the target motion inherits the balancing property of the source motion. This inheritance is obtained through motion analysis, which extracts the parameters needed to re‐target the pendulum model's motion pattern, and through parameter learning, which estimates suitable parameters for the target character model. Based on this simple‐physics inheritance, the proposed framework provides balance‐preserving target motions that are applicable even to full‐body physics simulation and real robot control. We validate the framework by re‐targeting Muaythai punching, kicking and walking motions from animal‐character and human‐character source models to quadruped‐ and humanoid‐type target models. We also compare against existing methods to demonstrate the improvements.
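A minimal sketch of the inverted‐pendulum‐on‐a‐cart abstraction referenced above is given below; the cart‐pole equations of motion, the parameter values and the proportional‐derivative balancing force are standard illustrative choices, not the paper's re‐targeting procedure.

```python
# Minimal sketch (assumption, not the paper's code): an inverted-pendulum-on-a-cart
# model as a simplified physics abstraction of balance.  theta is the pole angle from
# the vertical, x is the cart position; the classic cart-pole equations of motion are
# integrated with a simple explicit Euler step.
import numpy as np


def cart_pole_step(x, x_dot, theta, theta_dot, force,
                   cart_mass=1.0, pole_mass=0.1, pole_half_len=0.5,
                   g=9.81, dt=0.01):
    """Advance the cart-pole state by one Euler step under a horizontal cart force."""
    total_mass = cart_mass + pole_mass
    temp = (force + pole_mass * pole_half_len * theta_dot**2 * np.sin(theta)) / total_mass
    theta_acc = (g * np.sin(theta) - np.cos(theta) * temp) / (
        pole_half_len * (4.0 / 3.0 - pole_mass * np.cos(theta)**2 / total_mass))
    x_acc = temp - pole_mass * pole_half_len * theta_acc * np.cos(theta) / total_mass
    return (x + dt * x_dot, x_dot + dt * x_acc,
            theta + dt * theta_dot, theta_dot + dt * theta_acc)


# Example: a proportional-derivative force on the pole angle keeps the pendulum
# (and hence the abstracted character) balanced over a short rollout.
state = (0.0, 0.0, 0.05, 0.0)                   # small initial lean
for _ in range(500):
    force = 40.0 * state[2] + 5.0 * state[3]    # illustrative PD gains
    state = cart_pole_step(*state, force)
print(f"final pole angle: {state[2]:.4f} rad")
```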