Human motion data for training humanoids.
Custom demonstration data programs for teams training humanoids to cook, clean, fold laundry, and work warehouse floors. Tell us the target tasks, robot body, and export format — we scope the capture pipeline around your training run.
From human demonstration to humanoid policy
Five steps we own end-to-end. You get training-ready data — not raw files your team spends a quarter cleaning.
Human demonstrates
Real performers execute the target skill in our capture studio — locomotion, dexterous tasks, coordinated movement.
Multi-modal capture
Optical mocap + IMU + force/torque + depth. Synchronized streams, studio-calibrated per program.
Retarget to your robot
Kinematic retargeting onto your embodiment (URDF/USD). Joint limits, contacts, and timing preserved.
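The retargeting step above can be sketched in miniature: map captured human joint angles onto the robot's joints and clamp each to the limits its URDF declares. The joint names, limits, and mapping below are hypothetical placeholders, not any particular robot's description.

```python
# Minimal retargeting sketch: map captured human joint angles (radians)
# onto a robot's joints and clamp to the limits declared in its URDF.
# Joint names and limits here are hypothetical placeholders.

ROBOT_LIMITS = {                  # lower/upper bounds, as in a URDF <limit> tag
    "shoulder_pitch": (-2.0, 2.0),
    "elbow": (0.0, 2.5),
}

def retarget_frame(human_frame: dict, joint_map: dict) -> dict:
    """Map one frame of human joint angles onto robot joints, respecting limits."""
    robot_frame = {}
    for human_joint, robot_joint in joint_map.items():
        lo, hi = ROBOT_LIMITS[robot_joint]
        angle = human_frame[human_joint]
        robot_frame[robot_joint] = min(max(angle, lo), hi)  # clamp to joint limits
    return robot_frame

# One captured frame; per-frame timestamps are carried through unchanged,
# which is how the original timing is preserved.
frame = {"r_shoulder": 2.4, "r_elbow": 1.1}
mapping = {"r_shoulder": "shoulder_pitch", "r_elbow": "elbow"}
print(retarget_frame(frame, mapping))  # shoulder clamped to its 2.0 upper limit
```

A production pipeline also handles link-length differences, contact constraints, and self-collision, but the clamp-to-limits idea is the core of keeping retargeted motion feasible.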
Policy training
SDK-ready exports for behavioral cloning, DAgger, inverse RL, or diffusion-policy pipelines.
Deploy to hardware
Sim-validated trajectories ready for sim-to-real transfer onto your humanoid platform.
Built for embodied AI pipelines

Lab-grade precision
Optical motion capture, infrared tracking, and depth sensors — not estimated from monocular video. Hardware and protocol tuned per program to the precision your pipeline actually needs.

Multi-modal coverage
Egocentric video, mocap, hand pose, IMU, force/torque, depth maps, and task annotations — whatever your pipeline needs.

Scene-aware capture
Every object tracked, environment scanned, and reconstructed in 3D. Context-rich data for world models and spatial reasoning.

Scoped for production
We design each capture program against the references your team already uses (EgoDex, OpenEgo, EPIC-KITCHENS) and deliver SDK-ready exports for your training loop.
From research lab to factory floor
Household tasks
Cooking, cleaning, laundry, tidying, dishwashing — the everyday home chores your humanoid will actually ship with.
Factory & warehouse work
Assembly, packaging, picking, inspection — human operators demonstrating industrial tasks in real environments.
Dexterous manipulation
Grasping, tool use, fine motor tasks — the high-precision hand + finger tracking your manipulation policy needs.
Whole-body control
Locomotion, balance, and full-body coordination captured as policy-ready trajectories.
Imitation learning
Demonstration data formatted for behavioral cloning, DAgger, diffusion policy, and inverse RL pipelines.
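As an illustration of what "formatted for behavioral cloning" can mean, here is a hypothetical export record: one synchronized (observation, action) step serialized as a JSON-lines row. The field names are illustrative, not a published schema.

```python
import json

# Hypothetical export record (field names illustrative): one synchronized
# demonstration step, i.e. the (observation, action) pair a behavioral
# cloning loop consumes directly.
record = {
    "t": 0.0333,                                           # seconds since episode start
    "obs": {"q": [0.10, -0.40], "rgb": "frame_0001.png"},  # joint positions + image ref
    "act": {"q_target": [0.12, -0.38]},                    # next commanded joint positions
    "task": "fold_towel",
}

line = json.dumps(record)   # one JSON-lines row per timestep
step = json.loads(line)     # what a dataloader would read back
```

DAgger, inverse RL, and diffusion-policy pipelines consume the same per-step pairs; what changes is how the training loop samples and weights them, not the record layout.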
Sim-to-real transfer
Ground-truth trajectories for validating simulator fidelity and closing the sim-to-real gap.
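One common way to score simulator fidelity against ground truth is a per-trajectory RMSE between the captured joint trajectory and the simulator's rollout of the same commands. A minimal sketch, with made-up numbers:

```python
import math

def trajectory_rmse(ground_truth, simulated):
    """RMSE between captured and simulated joint positions, frame by frame."""
    assert len(ground_truth) == len(simulated)
    squared = [(g - s) ** 2 for g, s in zip(ground_truth, simulated)]
    return math.sqrt(sum(squared) / len(squared))

gt  = [0.00, 0.10, 0.20, 0.30]   # captured joint angle over four frames (radians)
sim = [0.00, 0.12, 0.18, 0.33]   # the simulator's rollout of the same commands
print(round(trajectory_rmse(gt, sim), 4))  # prints 0.0206
```

A rising RMSE across a task set is a signal to recalibrate the simulator before attempting transfer; thresholds are per-program choices, not fixed constants.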
Whatever your pipeline needs
Scoped per program. Mix modalities: tell us your training target and we'll ship the combination.
Modalities: Motion · Hand & Face · Physics & Exports
Dialed to your task
General · Training-ready
Locomotion, navigation, general manipulation, imitation learning from diverse scenes.
Studio-grade · Per program
Fine manipulation and dexterous tasks — hardware + protocol scoped to the precision your policy actually needs.
Public references we can align with
These public references guide format planning. Tbrain capture programs are scoped and produced per customer engagement for your robot, task set, and training pipeline.
Let's build the dataset your model needs.
Tell us your training target. We'll scope a program in 48 hours and ship first samples in 2 weeks.
Tbrain · Data Factory · Hanoi
