Document details
Identifier

oai:arXiv.org:2411.00965

Subject
Computer Science - Robotics
Authors
Hsu, Cheng-Chun; Wen, Bowen; Xu, Jie; Narang, Yashraj; Wang, Xiaolong; Zhu, Yuke; Biswas, Joydeep; Birchfield, Stan
Category

Computer Science

Year

2024

Date indexed

06/11/2024

Keywords
task demonstrations; object-centric; object pose

Abstract

We introduce SPOT, an object-centric imitation learning framework.

The key idea is to capture each task by an object-centric representation, specifically the SE(3) object pose trajectory relative to the target.
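As a rough illustration of this representation (not the paper's implementation), a target-relative pose is simply the inverse of the target pose composed with the object pose. The sketch below assumes 4x4 homogeneous transforms in NumPy; all function names are hypothetical:

```python
import numpy as np

def invert_se3(T: np.ndarray) -> np.ndarray:
    """Invert a 4x4 homogeneous transform analytically."""
    R, p = T[:3, :3], T[:3, 3]
    T_inv = np.eye(4)
    T_inv[:3, :3] = R.T          # inverse rotation
    T_inv[:3, 3] = -R.T @ p      # inverse translation
    return T_inv

def target_relative_trajectory(obj_poses: np.ndarray, T_target: np.ndarray) -> np.ndarray:
    """Express an (N, 4, 4) sequence of world-frame object poses in the target frame."""
    T_target_inv = invert_se3(T_target)
    return np.einsum('ij,njk->nik', T_target_inv, obj_poses)
```

Because the trajectory is expressed relative to the target, the same sequence can be recovered from demonstrations regardless of which embodiment (robot gripper or human hand) moved the object.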

This approach decouples embodiment actions from sensory inputs, facilitating learning from various demonstration types, including both action-based and action-less human hand demonstrations, as well as cross-embodiment generalization.

Additionally, object pose trajectories inherently capture planning constraints from demonstrations without the need for manually crafted rules.

To guide the robot in executing the task, the object trajectory is used to condition a diffusion policy.
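A minimal sketch of what such conditioning could look like, assuming a PyTorch MLP denoiser over fixed-length action chunks; the architecture and all names here are illustrative, not taken from the paper:

```python
import torch
import torch.nn as nn

class TrajectoryConditionedDenoiser(nn.Module):
    """Toy DDPM-style denoiser: predicts the noise added to an action chunk,
    conditioned on a flattened object-pose trajectory and the diffusion step."""

    def __init__(self, action_dim=7, horizon=16, traj_feat_dim=144, n_steps=100):
        super().__init__()
        self.traj_encoder = nn.Sequential(nn.Linear(traj_feat_dim, 128), nn.ReLU())
        self.step_embedding = nn.Embedding(n_steps, 32)
        self.net = nn.Sequential(
            nn.Linear(action_dim * horizon + 128 + 32, 256),
            nn.ReLU(),
            nn.Linear(256, action_dim * horizon),
        )

    def forward(self, noisy_actions, t, traj):
        # noisy_actions: (B, horizon * action_dim), t: (B,), traj: (B, traj_feat_dim)
        cond = torch.cat([self.traj_encoder(traj), self.step_embedding(t)], dim=-1)
        return self.net(torch.cat([noisy_actions, cond], dim=-1))
```

At inference, the object trajectory is fed in as `traj` while the action chunk is iteratively denoised from Gaussian noise.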

We show improvements over prior work on simulated RLBench tasks.

In real-world evaluation, using only eight demonstrations captured on an iPhone, our approach completed all tasks while fully complying with task constraints.

Project page: https://nvlabs.github.io/object_centric_diffusion

Hsu, Cheng-Chun; Wen, Bowen; Xu, Jie; Narang, Yashraj; Wang, Xiaolong; Zhu, Yuke; Biswas, Joydeep; Birchfield, Stan, 2024, SPOT: SE(3) Pose Trajectory Diffusion for Object-Centric Manipulation
