oai:arXiv.org:2410.18495
Computer Science
2024
3/5/2025
This paper tackles the challenging task of maintaining formation among multiple unmanned aerial vehicles (UAVs) while avoiding both static and dynamic obstacles during directed flight.
The complexity of the task arises from its multi-objective nature, the large exploration space, and the sim-to-real gap.
To address these challenges, we propose a two-stage reinforcement learning (RL) pipeline.
In the first stage, we randomly search for a reward function that balances key objectives: directed flight, obstacle avoidance, formation maintenance, and zero-shot policy deployment.
The second stage applies this reward function to more complex scenarios and utilizes curriculum learning to accelerate policy training.
Additionally, we incorporate an attention-based observation encoder to improve formation maintenance and adaptability to varying obstacle densities.
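The record does not specify the encoder's architecture; as a rough illustration only (all names, shapes, and the pooling scheme are assumptions, not the authors' design), an attention-based encoder that pools a variable number of obstacle observations into a fixed-size vector — so the policy input does not change with obstacle density — could be sketched as:

```python
import numpy as np

def attention_encode(ego, obstacles, W_q, W_k, W_v):
    """Pool a variable-length set of obstacle features into a fixed vector.

    ego:       (d,)   ego-UAV feature vector (query source)
    obstacles: (n, d) per-obstacle feature vectors (keys/values)
    W_q/W_k/W_v: (d, d) projection matrices (random here; learned in practice)
    """
    q = ego @ W_q                        # (d,)  query
    k = obstacles @ W_k                  # (n, d) keys
    v = obstacles @ W_v                  # (n, d) values
    scores = k @ q / np.sqrt(len(q))     # (n,)  scaled dot-product scores
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()             # softmax over obstacles
    return weights @ v                   # (d,)  fixed-size encoding

rng = np.random.default_rng(0)
d = 8
W = [rng.standard_normal((d, d)) for _ in range(3)]
enc3 = attention_encode(rng.standard_normal(d), rng.standard_normal((3, d)), *W)
enc7 = attention_encode(rng.standard_normal(d), rng.standard_normal((7, d)), *W)
# The encoding size is independent of the obstacle count:
assert enc3.shape == enc7.shape == (d,)
```

The key property this sketch shows is permutation invariance and a fixed output dimension regardless of how many obstacles are observed, which is what makes such an encoder adaptable to varying obstacle densities.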
Experimental results in both simulation and real-world environments demonstrate that our method outperforms both planning-based and RL-based baselines in terms of collision-free rates and formation maintenance across static, dynamic, and mixed obstacle scenarios.
Ablation studies further confirm the effectiveness of our curriculum learning strategy and attention-based encoder.
Animated demonstrations are available at: https://sites.google.com/view/uav-formation-with-avoidance/.
Xie, Yuqing; Yu, Chao; Zang, Hongzhi; Gao, Feng; Tang, Wenhao; Huang, Jingyi; Chen, Jiayu; Xu, Botian; Wu, Yi; Wang, Yu. 2024. Multi-UAV Formation Control with Static and Dynamic Obstacle Avoidance via Reinforcement Learning.