MocapNET

FORTH-ModelBasedTracker/MocapNET

We present MocapNET, a real-time method that estimates the 3D human pose directly in the popular Bio Vision Hierarchy (BVH) format, given estimations of the 2D body joints originating from monocular color images. Our contributions include: (a) A novel and compact 2D pose NSRM representation. (b) A human body orientation classifier and an ensemble of orientation-tuned neural networks that regress the 3D human pose, while also allowing the decomposition of the body into an upper and a lower kinematic hierarchy. This permits the recovery of the human pose even in the case of significant occlusions. (c) An efficient Inverse Kinematics solver that refines the neural-network-based solution, providing 3D human pose estimations that are consistent with the limb sizes of a target person (if known). All the above yield a 33% accuracy improvement on the Human 3.6 Million (H3.6M) dataset compared to the baseline method (MocapNET) while maintaining real-time performance.

927 Stars · 143 Forks · 927 Watchers
Language: C++ · License: Other
SrcLog Score: 100 · Cost to Build: $3.25M · Market Value: $16.32M

Growth over time: 19 data points (2021-05-01 → 2026-04-01), tracking stars, forks, and watchers.


How to clone MocapNET

Clone via HTTPS

git clone https://github.com/FORTH-ModelBasedTracker/MocapNET.git

Clone via SSH

git clone git@github.com:FORTH-ModelBasedTracker/MocapNET.git

Download ZIP

Download master.zip
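All three download options follow GitHub's standard URL conventions. The sketch below simply assembles those URLs from the owner and repository names; the patterns are GitHub-wide conventions, not anything MocapNET-specific:

```shell
# Assemble the standard GitHub download URLs for the MocapNET repository.
OWNER=FORTH-ModelBasedTracker
REPO=MocapNET

echo "https://github.com/${OWNER}/${REPO}.git"                            # HTTPS clone URL
echo "git@github.com:${OWNER}/${REPO}.git"                                # SSH clone URL
echo "https://github.com/${OWNER}/${REPO}/archive/refs/heads/master.zip"  # ZIP of the master branch
```

SSH cloning requires an SSH key registered with your GitHub account; HTTPS and the ZIP download work anonymously for public repositories.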

Found an issue?

Report bugs or request features on the MocapNET issue tracker:

Open GitHub Issues