Capturing the Motion of Every Joint: 3D Human Pose and Shape Estimation with Independent Tokens


Sen Yang1
Wen Heng2
Gang Liu2
Guozhong Luo2
Wankou Yang1
Gang Yu2

Southeast University1
Tencent PCG2

[Paper]
[Video]
[Code]



In this paper, we present a novel method to estimate 3D human pose and shape from monocular videos. This task requires directly recovering pixel-aligned 3D human pose and body shape from monocular images or videos, which is challenging due to its inherent ambiguity. To improve precision, existing methods rely heavily on an initialized mean pose and shape as prior estimates, and on parameter regression in an iterative error-feedback manner. In addition, video-based approaches model the overall change in image-level features to temporally enhance single-frame features, but they fail to capture rotational motion at the joint level and cannot guarantee local temporal consistency. To address these issues, we propose a novel Transformer-based model built on a design of independent tokens. First, we introduce three types of tokens that are independent of the image features: joint rotation tokens, a shape token, and a camera token. By progressively interacting with the image features through Transformer layers, these tokens learn to encode prior knowledge of human 3D joint rotations, body shape, and position information from large-scale data, and are updated to estimate SMPL parameters conditioned on a given image. Second, benefiting from the proposed token-based representation, we further use a temporal model that focuses on capturing the rotational temporal information of each joint, which empirically helps prevent large jitter in local body parts. Despite being conceptually simple, the proposed method attains superior performance on the 3DPW and Human3.6M datasets. Using ResNet-50 and Transformer architectures, it obtains a PA-MPJPE of 42.0 mm on the challenging 3DPW dataset, outperforming state-of-the-art counterparts by a large margin.
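To make the token design concrete, below is a minimal PyTorch sketch of the two ideas described above: learnable tokens that progressively attend to image features through Transformer layers and are decoded into SMPL parameters, and a temporal stage that attends over each joint's tokens across frames. All module names, dimensions, head layouts, and the per-joint temporal factorization are illustrative assumptions, not the authors' released implementation.

```python
# A minimal sketch of the independent-token design (assumed details, not the
# official code). Tokens are learnable parameters, independent of the image
# features, and interact with them via Transformer decoder layers.
import torch
import torch.nn as nn

NUM_JOINTS = 24  # SMPL body joints (assumed)
DIM = 256        # token/feature dimension (assumed)

class IndependentTokenHead(nn.Module):
    def __init__(self, num_joints=NUM_JOINTS, dim=DIM, depth=4):
        super().__init__()
        # Learnable tokens: one rotation token per joint, plus one shape
        # token and one camera token.
        self.joint_tokens = nn.Parameter(torch.randn(num_joints, dim))
        self.shape_token = nn.Parameter(torch.randn(1, dim))
        self.camera_token = nn.Parameter(torch.randn(1, dim))
        # Transformer decoder layers: tokens (queries) attend to the
        # flattened image features (memory).
        layer = nn.TransformerDecoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=depth)
        # Per-token regression heads (output sizes are assumptions).
        self.rot_head = nn.Linear(dim, 6)     # 6D rotation per joint
        self.shape_head = nn.Linear(dim, 10)  # SMPL shape betas
        self.cam_head = nn.Linear(dim, 3)     # weak-perspective camera (s, tx, ty)

    def forward(self, img_feats):
        # img_feats: (B, HW, DIM) flattened backbone features, e.g. ResNet-50.
        B = img_feats.size(0)
        tokens = torch.cat(
            [self.joint_tokens, self.shape_token, self.camera_token], dim=0
        ).unsqueeze(0).repeat(B, 1, 1)
        updated = self.decoder(tokens, img_feats)          # (B, J + 2, DIM)
        rot = self.rot_head(updated[:, :NUM_JOINTS])       # (B, J, 6)
        shape = self.shape_head(updated[:, NUM_JOINTS])    # (B, 10)
        cam = self.cam_head(updated[:, NUM_JOINTS + 1])    # (B, 3)
        return rot, shape, cam

class JointTemporalEncoder(nn.Module):
    """Sketch of the temporal stage: self-attention over each joint's
    rotation tokens across frames, one joint at a time (an assumed
    factorization that captures per-joint rotational motion)."""
    def __init__(self, dim=DIM, depth=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, joint_tokens_seq):
        # joint_tokens_seq: (B, T, J, DIM) per-frame joint tokens.
        B, T, J, D = joint_tokens_seq.shape
        x = joint_tokens_seq.permute(0, 2, 1, 3).reshape(B * J, T, D)
        x = self.encoder(x)  # temporal self-attention, independently per joint
        return x.reshape(B, J, T, D).permute(0, 2, 1, 3)

if __name__ == "__main__":
    feats = torch.randn(2, 49, DIM)  # e.g. a 7x7 feature map projected to DIM
    rot, shape, cam = IndependentTokenHead()(feats)
    print(rot.shape, shape.shape, cam.shape)  # (2, 24, 6) (2, 10) (2, 3)
```

In this sketch, reading out each parameter group from its own token is what makes the temporal stage possible: because every joint has a dedicated token, the temporal encoder can model each joint's rotational trajectory separately instead of smoothing a single pooled image feature.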




*Based on the same detection and tracking framework provided by the VIBE demo.



Single-person Human Mesh Reconstruction



Multi-person Human Mesh Reconstruction





Acknowledgements

This work was supported by the National Natural Science Foundation of China under Grants 62276061, 61773117, and 62006041. This webpage template was borrowed from colorful folks.