Browsing by Author "Huang, Haibin"
Now showing 1 - 4 of 4
Item: 3D Keypoint Estimation Using Implicit Representation Learning (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Zhu, Xiangyu; Du, Dong; Huang, Haibin; Ma, Chongyang; Han, Xiaoguang; Memari, Pooran; Solomon, Justin
In this paper, we tackle the challenging problem of 3D keypoint estimation of general objects using a novel implicit representation. Previous works have demonstrated promising results for keypoint prediction through direct coordinate regression or heatmap-based inference. However, these methods are commonly studied for specific subjects, such as human bodies and faces, which possess fixed keypoint structures. They also struggle in several practical scenarios where explicit or complete geometry is not given, including images and partial point clouds. Inspired by the recent success of advanced implicit representations in reconstruction tasks, we explore the idea of using an implicit field to represent keypoints. Specifically, our key idea is to employ spheres to represent 3D keypoints, thereby making the corresponding signed distance field learnable. Explicit keypoints can subsequently be extracted by our algorithm based on the Hough transform. Quantitative and qualitative evaluations show the superiority of our representation in terms of prediction accuracy.

Item: Implicit Neural Deformation for Sparse-View Face Reconstruction (The Eurographics Association and John Wiley & Sons Ltd., 2022)
Li, Moran; Huang, Haibin; Zheng, Yi; Li, Mengtian; Sang, Nong; Ma, Chongyang; Umetani, Nobuyuki; Wojtan, Chris; Vouga, Etienne
In this work, we present a new method for 3D face reconstruction from sparse-view RGB images. Unlike previous methods, which are built upon 3D morphable models (3DMMs) with limited details, we leverage an implicit representation to encode rich geometric features. Our overall pipeline consists of two major components: a geometry network, which learns a deformable neural signed distance function (SDF) as the 3D face representation, and a rendering network, which learns to render on-surface points of the neural SDF to match the input images via self-supervised optimization. To handle in-the-wild sparse-view input of the same target with different expressions at test time, we propose a residual latent code to effectively expand the shape space of the learned implicit face representation, as well as a novel view-switch loss to enforce consistency among different views. Our experimental results on several benchmark datasets demonstrate that our approach outperforms alternative baselines and achieves superior face reconstruction results compared to state-of-the-art methods.
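The sphere-based keypoint representation in "3D Keypoint Estimation Using Implicit Representation Learning" above can be illustrated with a toy NumPy sketch: each keypoint is a sphere, so a query point's ground-truth signed distance is its distance to the nearest keypoint centre minus the sphere radius, and an explicit keypoint can be recovered by Hough-style voting over candidate centres. The radius, grid resolution, and random-direction voting below are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def keypoint_sdf(queries, centers, radius=0.05):
    """Ground-truth signed distance to the union of keypoint spheres."""
    d = np.linalg.norm(queries[:, None, :] - centers[None, :, :], axis=-1)
    return d.min(axis=1) - radius

def hough_extract(queries, sdf_vals, radius=0.05, res=32, n_dirs=64):
    """Hough-style voting: every query votes for candidate centres lying at
    distance (sdf + radius) along random directions; the accumulator peak
    is returned as the recovered keypoint (single-keypoint illustration)."""
    rng = np.random.default_rng(0)
    dirs = rng.normal(size=(n_dirs, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    acc = np.zeros((res, res, res))
    for q, s in zip(queries, sdf_vals):
        cand = q[None, :] + (s + radius) * dirs       # centres consistent with s
        inside = np.all(np.abs(cand) < 1.0, axis=1)   # keep votes within the grid
        idx = ((cand[inside] + 1) / 2 * res).astype(int)
        np.add.at(acc, (idx[:, 0], idx[:, 1], idx[:, 2]), 1.0)
    peak = np.unravel_index(acc.argmax(), acc.shape)
    return (np.array(peak) + 0.5) / res * 2 - 1       # voxel centre in [-1, 1]

centers = np.array([[0.2, -0.1, 0.3]])
queries = np.random.default_rng(1).uniform(-1.0, 1.0, size=(2048, 3))
est = hough_extract(queries, keypoint_sdf(queries, centers))
print("recovered keypoint ~", est)   # close to (0.2, -0.1, 0.3) up to voxel size
```

The true centre lies on every query's candidate sphere, so its voxel accumulates votes from all queries while other voxels receive votes from only a few, which is why the accumulator peak recovers it.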
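Similarly, the geometry network of "Implicit Neural Deformation for Sparse-View Face Reconstruction" can be sketched in PyTorch as a deformation MLP that warps query points into a canonical space where a template SDF is evaluated, with a residual latent code offsetting the identity code to expand the shape space. The layer sizes, latent dimension, and the additive form of the residual code are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

def mlp(d_in, d_out, hidden=128, depth=3):
    layers, d = [], d_in
    for _ in range(depth):
        layers += [nn.Linear(d, hidden), nn.ReLU()]
        d = hidden
    return nn.Sequential(*layers, nn.Linear(d, d_out))

class DeformableSDF(nn.Module):
    """Deformation MLP warps queries into canonical space; a template MLP
    evaluates the canonical face SDF there."""
    def __init__(self, latent_dim=64):
        super().__init__()
        self.deform = mlp(3 + latent_dim, 3)   # per-point offset into canonical space
        self.template = mlp(3, 1)              # canonical face SDF

    def forward(self, x, z, z_res=None):
        code = z if z_res is None else z + z_res   # residual code widens the shape space
        code = code.expand(x.shape[0], -1)
        offset = self.deform(torch.cat([x, code], dim=-1))
        return self.template(x + offset)

model = DeformableSDF()
z = torch.zeros(1, 64)                # shared identity latent code
z_res = 0.01 * torch.randn(1, 64)     # per-expression residual code
x = torch.rand(1024, 3) * 2 - 1       # query points
print(model(x, z, z_res).shape)       # torch.Size([1024, 1]) signed distances
```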
Item: Multi-Modal Face Stylization with a Generative Prior (The Eurographics Association and John Wiley & Sons Ltd., 2023)
Li, Mengtian; Dong, Yi; Lin, Minxuan; Huang, Haibin; Wan, Pengfei; Ma, Chongyang; Chaine, Raphaëlle; Deng, Zhigang; Kim, Min H.
In this work, we introduce a new approach for face stylization. Although existing methods achieve impressive results on this task, there is still room for improvement in generating high-quality artistic faces with diverse styles and accurate facial reconstruction. Our proposed framework, MMFS, supports multi-modal face stylization by leveraging the strengths of StyleGAN, integrating it into an encoder-decoder architecture. Specifically, we use the mid-resolution and high-resolution layers of StyleGAN as the decoder to generate high-quality faces, while aligning its low-resolution layer with the encoder to extract and preserve input facial details. We also introduce a two-stage training strategy: in the first stage, we train the encoder to align its feature maps with StyleGAN and enable faithful reconstruction of input faces; in the second stage, the entire network is fine-tuned on artistic data for stylized face generation. To enable the fine-tuned model to be applied to zero-shot and one-shot stylization tasks, we train an additional mapping network from the large-scale Contrastive Language-Image Pre-training (CLIP) space to the latent w+ space of the fine-tuned StyleGAN. Qualitative and quantitative experiments show that our framework achieves superior performance in both one-shot and zero-shot face stylization tasks, outperforming state-of-the-art methods by a large margin.

Item: UprightRL: Upright Orientation Estimation of 3D Shapes via Reinforcement Learning (The Eurographics Association and John Wiley & Sons Ltd., 2021)
Chen, Luanmin; Xu, Juzhan; Wang, Chuan; Huang, Haibin; Huang, Hui; Hu, Ruizhen; Zhang, Fang-Lue; Eisemann, Elmar; Singh, Karan
In this paper, we study the problem of 3D shape upright orientation estimation from the perspective of reinforcement learning, i.e., we teach a machine (agent) to orient a 3D shape step by step to the upright pose given its current observation. Unlike previous methods, we treat this problem as a sequential decision-making process rather than a strongly supervised learning problem. To this end, we propose UprightRL, a deep network architecture designed for upright orientation estimation. UprightRL consists of two submodules, an Actor module and a Critic module, which can be learned in a reinforcement learning manner. Specifically, the Actor module selects an action from the action space to transform the point cloud and obtain the new point cloud for the next environment state, while the Critic module evaluates the strategy and guides the Actor to choose the action for the next stage. Moreover, we design a reward function that encourages the agent to select actions that move the model towards the upright orientation, assigning a positive reward for such actions and a negative reward otherwise. We conducted extensive experiments to demonstrate the effectiveness of the proposed model; the results show that our network outperforms the state of the art. We also apply our method to a robot grasping-and-placing experiment to demonstrate the practicality of our approach.
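The zero-/one-shot stylization component of "Multi-Modal Face Stylization with a Generative Prior" hinges on a mapping network from CLIP space to the w+ latent space of the fine-tuned StyleGAN. A minimal sketch follows, assuming a 512-D CLIP embedding and an 18-layer, 512-D w+ space (common in public StyleGAN2 face models); the mapper architecture itself is an illustrative guess, not the paper's design.

```python
import torch
import torch.nn as nn

class ClipToWPlus(nn.Module):
    """Maps a CLIP embedding to a stack of per-layer latent codes in w+ space."""
    def __init__(self, clip_dim=512, w_dim=512, n_layers=18):
        super().__init__()
        self.n_layers, self.w_dim = n_layers, w_dim
        self.net = nn.Sequential(
            nn.Linear(clip_dim, 1024), nn.LeakyReLU(0.2),
            nn.Linear(1024, 1024), nn.LeakyReLU(0.2),
            nn.Linear(1024, n_layers * w_dim),
        )

    def forward(self, clip_emb):                       # (B, clip_dim)
        w = self.net(clip_emb)
        return w.view(-1, self.n_layers, self.w_dim)   # (B, n_layers, w_dim)

mapper = ClipToWPlus()
emb = torch.randn(4, 512)   # stand-in for CLIP embeddings of style images or text
w_plus = mapper(emb)        # would be fed to the fine-tuned StyleGAN decoder
print(w_plus.shape)         # torch.Size([4, 18, 512])
```

Because the mapper targets w+ rather than a single w vector, each StyleGAN layer receives its own code, which is what lets a single CLIP embedding steer both coarse and fine style attributes.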
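The Actor-Critic setup in "UprightRL" can likewise be sketched as a shared point-cloud encoder with an actor head over a discrete set of rotation actions and a critic head producing a state value, plus a reward that is positive when an action improves alignment with world-up and negative otherwise. The encoder, the ±5° action set, and the exact reward form below are simplified assumptions for illustration.

```python
import math
import torch
import torch.nn as nn

def rot(axis, deg):
    """Rotation matrix for a small step about a world axis."""
    a = math.radians(deg)
    c, s = math.cos(a), math.sin(a)
    m = {"x": [[1, 0, 0], [0, c, -s], [0, s, c]],
         "y": [[c, 0, s], [0, 1, 0], [-s, 0, c]],
         "z": [[c, -s, 0], [s, c, 0], [0, 0, 1]]}[axis]
    return torch.tensor(m)

# Assumed discrete action space: +/-5 degree rotations about each world axis.
ACTIONS = [rot(ax, d) for ax in "xyz" for d in (5.0, -5.0)]

class ActorCritic(nn.Module):
    """Shared point-cloud encoder with an actor head (action logits) and a
    critic head (state value)."""
    def __init__(self, n_actions):
        super().__init__()
        self.encode = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 128))
        self.actor = nn.Linear(128, n_actions)
        self.critic = nn.Linear(128, 1)

    def forward(self, pts):                          # pts: (N, 3)
        feat = self.encode(pts).max(dim=0).values    # permutation-invariant pooling
        return self.actor(feat), self.critic(feat)

def reward(up_before, up_after, world_up=torch.tensor([0.0, 0.0, 1.0])):
    """+1 if the chosen action improved alignment with world-up, -1 otherwise."""
    return 1.0 if torch.dot(up_after, world_up) > torch.dot(up_before, world_up) else -1.0

agent = ActorCritic(len(ACTIONS))
pts = torch.randn(1024, 3)                           # current point-cloud observation
logits, value = agent(pts)
a = torch.distributions.Categorical(logits=logits).sample().item()
pts_next = pts @ ACTIONS[a].T                        # apply the selected rotation
```

Repeating the sample-rotate-reward step yields the sequential decision process the abstract describes, with the critic's value estimate guiding policy updates.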