Physical AI & Robotics

Human motion data for training humanoids.

Custom demonstration data programs for teams training humanoids to cook, clean, fold laundry, and work warehouse floors. Tell us the target tasks, robot body, and export format — we scope the capture pipeline around your training run.

Optical MOCAP · Multi-modal · Scene-aware · Sim-ready exports
Motion capture · Humanoid control · Sim-to-real · Agent benchmarks · Tool-use traces · Video annotation · Instance segmentation · Semantic masks · RLHF preference · Red-team evals · Active learning · AI-native QC
The pipeline

From human demonstration to humanoid policy

Five steps we own end-to-end. You get training-ready data — not raw files your team spends a quarter cleaning.

01

Human demonstrates

Real performers execute the target skill in our capture studio — locomotion, dexterous tasks, coordinated movement.

02

Multi-modal capture

Optical mocap + IMU + force/torque + depth. Synchronized streams, studio-calibrated per program.
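
Synchronization of this kind can be sketched as resampling each channel onto a shared clock. A simplified illustration with NumPy and synthetic signals (linear interpolation is an assumption here; a production pipeline aligns against hardware timestamps and handles dropouts):

```python
import numpy as np

def resample_stream(ts, values, target_ts):
    """Linearly interpolate a 1-D sensor channel onto a shared clock."""
    return np.interp(target_ts, ts, values)

# Example: a 120 Hz mocap channel and a 200 Hz IMU channel,
# both resampled onto a common 100 Hz timeline.
mocap_ts = np.arange(0, 1, 1 / 120)
mocap = np.sin(2 * np.pi * mocap_ts)   # stand-in marker coordinate
imu_ts = np.arange(0, 1, 1 / 200)
imu = np.cos(2 * np.pi * imu_ts)       # stand-in accelerometer axis

common_ts = np.arange(0, 1, 1 / 100)
mocap_100 = resample_stream(mocap_ts, mocap, common_ts)
imu_100 = resample_stream(imu_ts, imu, common_ts)

# Every modality now shares one sample grid.
assert mocap_100.shape == imu_100.shape == common_ts.shape
```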

03

Retarget to your robot

Kinematic retargeting onto your embodiment (URDF/USD). Joint limits, contacts, and timing preserved.
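
The retargeting step can be illustrated with a minimal sketch: map each human joint onto a robot joint, scale for differing link proportions, and clamp to the limits a URDF would declare. All joint names, limits, and scales below are hypothetical, not taken from any specific robot:

```python
import numpy as np

# Hypothetical joint-limit table, as it might be read from a URDF
# (robot joint name -> (lower, upper) in radians).
LIMITS = {
    "shoulder_pitch": (-3.0, 1.5),
    "elbow": (0.0, 2.6),
}

# Hypothetical mapping from human skeleton joints to robot joints,
# with a per-joint scale for differing link proportions.
RETARGET_MAP = {
    "r_shoulder_flex": ("shoulder_pitch", 1.0),
    "r_elbow_flex": ("elbow", 0.9),
}

def retarget_frame(human_angles):
    """Map one frame of human joint angles onto the robot, clamped to limits."""
    robot = {}
    for human_joint, (robot_joint, scale) in RETARGET_MAP.items():
        lo, hi = LIMITS[robot_joint]
        robot[robot_joint] = float(np.clip(human_angles[human_joint] * scale, lo, hi))
    return robot

frame = {"r_shoulder_flex": 2.0, "r_elbow_flex": 3.0}
print(retarget_frame(frame))  # both joints clamp to their limits
```

Real retargeting also preserves contact events and timing, which a per-frame clamp alone does not capture.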

04

Policy training

SDK-ready exports for behavioral cloning, DAgger, inverse RL, or diffusion-policy pipelines.
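
The behavioral-cloning path can be shown with a toy example: fit a policy to (state, action) pairs from demonstrations. Here the policy is linear and fit by least squares purely to keep the sketch short; a real pipeline would train a neural network on the exported trajectories. The "expert" controller below is synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic demonstration data: states and the expert's actions.
# The (hypothetical) expert is a linear controller u = K @ x.
K_true = np.array([[0.5, -0.2], [0.1, 0.8]])
states = rng.normal(size=(500, 2))
actions = states @ K_true.T

# Behavioral cloning with a linear policy: least-squares fit of K.
K_hat, *_ = np.linalg.lstsq(states, actions, rcond=None)
K_hat = K_hat.T

# The cloned policy reproduces the expert on a new state.
test_state = np.array([1.0, -1.0])
print(np.allclose(K_hat @ test_state, K_true @ test_state))  # True
```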

05

Deploy to hardware

Sim-validated trajectories ready for sim-to-real transfer onto your humanoid platform.

Why Tbrain

Built for embodied AI pipelines

Lab-grade precision

Optical motion capture, infrared tracking, and depth sensors — not estimated from monocular video. Hardware and protocol tuned per program to the precision your pipeline actually needs.

Multi-modal coverage

Egocentric video, MOCAP, hand pose, IMU, force/torque, depth maps, and task annotations — whatever your pipeline needs.

Scene-aware capture

Every object tracked, environment scanned, and reconstructed in 3D. Context-rich data for world models and spatial reasoning.

Scoped for production

We design each capture program against the references your team already uses (EgoDex, OpenEgo, EPIC-KITCHENS) and deliver SDK-ready exports for your training loop.

Use cases

From research lab to factory floor

Household tasks

Cooking, cleaning, laundry, tidying, dishwashing — the everyday home chores your humanoid will actually ship with.

Factory & warehouse work

Assembly, packaging, picking, inspection — human operators demonstrating industrial tasks in real environments.

Dexterous manipulation

Grasping, tool use, fine motor tasks — the high-precision hand + finger tracking your manipulation policy needs.

Whole-body control

Locomotion, balance, and full-body coordination captured as policy-ready trajectories.

Imitation learning

Demonstration data formatted for behavioral cloning, DAgger, diffusion policy, and inverse RL pipelines.

Sim-to-real transfer

Ground-truth trajectories for validating simulator fidelity and closing the sim-to-real gap.
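
Validating simulator fidelity starts with a tracking metric. A minimal sketch, assuming time-synchronized reference and simulated joint trajectories, is per-joint RMSE (the trajectories below are synthetic):

```python
import numpy as np

def tracking_rmse(reference, simulated):
    """Per-joint RMSE between a captured reference trajectory and a sim rollout."""
    err = np.asarray(reference) - np.asarray(simulated)
    return np.sqrt((err ** 2).mean(axis=0))

# Synthetic example: a 2-joint trajectory and a sim rollout
# with a constant per-joint bias.
t = np.linspace(0, 1, 200)
ref = np.stack([np.sin(t), np.cos(t)], axis=1)
sim = ref + np.array([0.01, 0.03])

print(tracking_rmse(ref, sim))  # ~[0.01, 0.03]
```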

Data modalities

Whatever your pipeline needs

Scoped per program. Mix modalities; tell us your training target and we'll ship the combination.

Motion

Egocentric RGB video
Stereo / multi-view video
Optical motion capture (MOCAP)
Full-body skeletal tracking

Hand & Face

3D hand pose tracking
Finger articulation
Facial rig (optional)
Gaze vector

Physics & Exports

IMU / inertial measurement
Force & torque sensing
Depth (LiDAR / structured light)
Object 6DoF pose
3D scene reconstruction
Task & action annotations
Gripper state & joint angles
Sim-ready exports (URDF/USD)
Accuracy tiers

Dialed to your task

General · Training-ready

Locomotion, navigation, general manipulation, imitation learning from diverse scenes.

Studio-grade · Per program (Recommended)

Fine manipulation and dexterous tasks — hardware + protocol scoped to the precision your policy actually needs.

Reference datasets

Public references we can align with

These public references guide format planning. Tbrain capture programs are scoped and produced per customer engagement for your robot, task set, and training pipeline.

Dataset        Volume        Key features         Use case
EgoDex         829 hours     Hand + egocentric    Dexterous manipulation
OpenEgo        1,107 hours   Unified hand format  Diverse egocentric tasks
EPIC-KITCHENS  100 hours     Action labels        Household activities
UMI Community  1,400 hours   SLAM / 6DoF          Gripper manipulation

Let's build the dataset your model needs.

Tell us your training target. We'll scope a program in 48 hours and ship first samples in 2 weeks.

Tbrain · Data Factory · Hanoi