Document detail
ID card

oai:arXiv.org:2410.18495

Subject
Computer Science - Robotics
Author
Xie, Yuqing; Yu, Chao; Zang, Hongzhi; Gao, Feng; Tang, Wenhao; Huang, Jingyi; Chen, Jiayu; Xu, Botian; Wu, Yi; Wang, Yu
Category

Computer Science

Year

2024

Listing date

05-03-2025

Keywords
maintenance, dynamic, static, obstacle, formation

Description

This paper tackles the challenging task of maintaining formation among multiple unmanned aerial vehicles (UAVs) while avoiding both static and dynamic obstacles during directed flight.

The complexity of the task arises from its multi-objective nature, the large exploration space, and the sim-to-real gap.

To address these challenges, we propose a two-stage reinforcement learning (RL) pipeline.

In the first stage, we randomly search for a reward function that balances key objectives: directed flight, obstacle avoidance, formation maintenance, and zero-shot policy deployment.
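
A minimal sketch of such a random reward search, assuming a weighted-sum reward and a placeholder train-and-score step; the weight names, ranges, and search budget are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_reward_weights():
    # One random candidate weighting of the objectives named above
    # (names and ranges are assumptions for illustration).
    return {
        "directed_flight": rng.uniform(0.1, 2.0),
        "obstacle_avoidance": rng.uniform(0.1, 2.0),
        "formation_maintenance": rng.uniform(0.1, 2.0),
    }

def shaped_reward(terms, weights):
    # Weighted sum of per-step objective terms.
    return sum(weights[k] * terms[k] for k in weights)

def train_and_score(weights):
    # Placeholder for "train a policy under this reward, then score it
    # on task success"; a real pipeline would run RL training here.
    terms = {k: rng.uniform(0.0, 1.0) for k in weights}  # dummy rollout stats
    return shaped_reward(terms, weights)

# Keep the best of 50 randomly sampled reward functions.
best = max((sample_reward_weights() for _ in range(50)), key=train_and_score)
print("best candidate weights:", best)
```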

The second stage applies this reward function to more complex scenarios and utilizes curriculum learning to accelerate policy training.
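
One common way to structure such a curriculum, sketched under assumed parameters; the difficulty levels, promotion threshold, and training stub are illustrative, not taken from the paper:

```python
import random

def train_one_iteration(env_config):
    # Placeholder for one RL training iteration in an environment built
    # from env_config; returns a success rate. A real pipeline would
    # collect rollouts and update the policy here.
    return random.random()

# Difficulty grows in obstacle count and the share of moving obstacles.
curriculum = [
    {"num_obstacles": 2,  "dynamic_fraction": 0.0},   # sparse, static
    {"num_obstacles": 6,  "dynamic_fraction": 0.25},  # mixed
    {"num_obstacles": 12, "dynamic_fraction": 0.5},   # dense, dynamic
]

level = 0
for iteration in range(100):
    success_rate = train_one_iteration(curriculum[level])
    # Promote to a harder level once the current one is mostly solved.
    if success_rate > 0.8 and level < len(curriculum) - 1:
        level += 1
```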

Additionally, we incorporate an attention-based observation encoder to improve formation maintenance and adaptability to varying obstacle densities.
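
Such an encoder is often realized as attention pooling over a variable-length set of obstacle observations; the PyTorch sketch below shows one plausible form (the dimensions and single learned query are assumptions, not the paper's architecture):

```python
import torch
import torch.nn as nn

class AttentionObsEncoder(nn.Module):
    # Pools a variable number of obstacle observations into one fixed-size
    # embedding via a learned query; all sizes here are illustrative.
    def __init__(self, obs_dim=6, embed_dim=64, num_heads=4):
        super().__init__()
        self.embed = nn.Linear(obs_dim, embed_dim)
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.query = nn.Parameter(torch.randn(1, 1, embed_dim))

    def forward(self, obstacle_obs, pad_mask=None):
        # obstacle_obs: (batch, num_obstacles, obs_dim); pad_mask flags
        # padded slots so obstacle density can vary between episodes.
        tokens = self.embed(obstacle_obs)
        query = self.query.expand(tokens.size(0), -1, -1)
        pooled, _ = self.attn(query, tokens, tokens, key_padding_mask=pad_mask)
        return pooled.squeeze(1)

encoder = AttentionObsEncoder()
obs = torch.randn(8, 10, 6)   # e.g. 8 UAVs, up to 10 observed obstacles each
print(encoder(obs).shape)     # torch.Size([8, 64])
```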

Experimental results in both simulation and real-world environments demonstrate that our method outperforms both planning-based and RL-based baselines in terms of collision-free rates and formation maintenance across static, dynamic, and mixed obstacle scenarios.

Ablation studies further confirm the effectiveness of our curriculum learning strategy and attention-based encoder.

Animated demonstrations are available at: https://sites.google.com/view/uav-formation-with-avoidance/.

Xie, Yuqing; Yu, Chao; Zang, Hongzhi; Gao, Feng; Tang, Wenhao; Huang, Jingyi; Chen, Jiayu; Xu, Botian; Wu, Yi; Wang, Yu, 2024, Multi-UAV Formation Control with Static and Dynamic Obstacle Avoidance via Reinforcement Learning
