Document detail
IDENTIFICATION

oai:arXiv.org:2411.00965

Topic
Computer Science - Robotics
Authors
Hsu, Cheng-Chun; Wen, Bowen; Xu, Jie; Narang, Yashraj; Wang, Xiaolong; Zhu, Yuke; Biswas, Joydeep; Birchfield, Stan
Category

Computer Science

Year

2024

Date indexed

6/11/2024

Keywords
task demonstrations; object-centric; object pose

Abstract

We introduce SPOT, an object-centric imitation learning framework.

The key idea is to capture each task by an object-centric representation, specifically the SE(3) object pose trajectory relative to the target.
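
To make the representation concrete, here is a minimal sketch (not the paper's code) of computing a target-relative SE(3) pose trajectory from per-frame world poses; the 4x4 homogeneous matrices and the toy data below are illustrative assumptions.

    # Illustrative sketch: express each object pose in the target's frame.
    import numpy as np

    def relative_pose(T_world_obj, T_world_target):
        # T_target_obj = inv(T_world_target) @ T_world_obj (4x4 homogeneous).
        return np.linalg.inv(T_world_target) @ T_world_obj

    def relative_trajectory(obj_poses, target_poses):
        # A demonstration becomes a sequence of target-relative poses,
        # independent of the embodiment that produced it.
        return [relative_pose(T_o, T_t) for T_o, T_t in zip(obj_poses, target_poses)]

    # Toy example: identity target, object offset 0.1 m along x.
    T_target = np.eye(4)
    T_obj = np.eye(4)
    T_obj[0, 3] = 0.1
    print(relative_trajectory([T_obj], [T_target])[0])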

This approach decouples embodiment actions from sensory inputs, facilitating learning from various demonstration types, including both action-based and action-less human hand demonstrations, as well as cross-embodiment generalization.

Additionally, object pose trajectories inherently capture planning constraints from demonstrations without the need for manually crafted rules.

To guide the robot in executing the task, the object trajectory is used to condition a diffusion policy.
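
As a rough illustration of such conditioning (a hedged sketch under assumed dimensions, not the authors' architecture), a DDPM-style denoiser can take a noisy action sequence plus an encoding of the object pose trajectory and predict the noise to remove:

    # Hypothetical sketch: action-sequence denoiser conditioned on an
    # encoding of the desired object pose trajectory (epsilon prediction).
    # action_dim, horizon, pose_dim, and cond_dim are assumed values.
    import torch
    import torch.nn as nn

    class TrajectoryConditionedDenoiser(nn.Module):
        def __init__(self, action_dim=7, horizon=16, pose_dim=9, cond_dim=64):
            super().__init__()
            # Flatten and embed the pose trajectory into one conditioning vector.
            self.traj_enc = nn.Sequential(
                nn.Flatten(), nn.Linear(horizon * pose_dim, cond_dim), nn.ReLU()
            )
            self.net = nn.Sequential(
                nn.Linear(horizon * action_dim + cond_dim + 1, 256),
                nn.ReLU(),
                nn.Linear(256, horizon * action_dim),
            )
            self.horizon, self.action_dim = horizon, action_dim

        def forward(self, noisy_actions, t, pose_traj):
            # noisy_actions: (B, horizon, action_dim); t: (B, 1) diffusion step;
            # pose_traj: (B, horizon, pose_dim) conditioning trajectory.
            cond = self.traj_enc(pose_traj)
            x = torch.cat([noisy_actions.flatten(1), cond, t], dim=-1)
            return self.net(x).view(-1, self.horizon, self.action_dim)

At inference time, iteratively denoising from Gaussian noise with such a network would yield an action sequence consistent with the demonstrated object motion.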

We demonstrate improved performance over prior work on simulated RLBench tasks.

In real-world evaluation, using only eight demonstrations shot on an iPhone, our approach completed all tasks while fully complying with task constraints.

Project page: https://nvlabs.github.io/object_centric_diffusion

Hsu, Cheng-Chun; Wen, Bowen; Xu, Jie; Narang, Yashraj; Wang, Xiaolong; Zhu, Yuke; Biswas, Joydeep; Birchfield, Stan. 2024. SPOT: SE(3) Pose Trajectory Diffusion for Object-Centric Manipulation.
