oai:arXiv.org:2405.18695
Computer Science
2024
6/19/2024
There are several challenges in developing a model for multi-tasking humanoid control.
Reinforcement learning and imitation learning approaches are quite popular in this domain.
However, there is a trade-off between the two.
Reinforcement learning is not well suited to training a humanoid to perform multiple behaviors because of long training times and large model sizes, while imitation learning from kinematics data alone cannot capture the actual physics of the motion.
Training models to perform multiple complex tasks takes a long time due to the high degrees of freedom (DoF) and the complexity of the movements.
Although training models offline would be beneficial, another issue is the size of the dataset, which is usually quite large in order to encapsulate multiple movements.
There are few implementations of transformer-based models that control humanoid characters and predict their motion from a large dataset of recorded or reference motion.
In this paper, we pre-train a GPT on a large dataset of noisy expert-policy rollout observations from a humanoid motion dataset, then fine-tune it on a smaller dataset of noisy expert-policy rollout observations and actions to autoregressively generate physically plausible motion trajectories.
We show that it is possible to train a GPT-based foundation model on a smaller dataset, in a shorter training time, to control a humanoid in a realistic physics environment and perform human-like movements.
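The two-stage procedure described in the abstract can be sketched as follows. This is a minimal illustration in PyTorch, not the authors' implementation: the architecture, dimensions, loss functions, and function names are all assumptions. Stage 1 pre-trains a causal transformer to predict the next observation in a rollout; stage 2 adds an action head and fine-tunes on (observation, action) pairs so actions can be generated autoregressively.

```python
import torch
import torch.nn as nn

# Illustrative dimensions (assumptions, not from the paper).
OBS_DIM, ACT_DIM, D_MODEL, MAX_LEN = 32, 8, 64, 16

class TinyMotionGPT(nn.Module):
    """Minimal causal transformer over observation sequences."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Linear(OBS_DIM, D_MODEL)
        self.pos = nn.Embedding(MAX_LEN, D_MODEL)
        layer = nn.TransformerEncoderLayer(
            D_MODEL, nhead=4, dim_feedforward=128, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=2)
        self.obs_head = nn.Linear(D_MODEL, OBS_DIM)  # pre-training target
        self.act_head = nn.Linear(D_MODEL, ACT_DIM)  # used during fine-tuning

    def forward(self, obs):  # obs: (B, T, OBS_DIM)
        B, T, _ = obs.shape
        x = self.embed(obs) + self.pos(torch.arange(T))
        # Causal mask so each step attends only to the past.
        mask = nn.Transformer.generate_square_subsequent_mask(T)
        h = self.backbone(x, mask=mask)
        return self.obs_head(h), self.act_head(h)

def pretrain_step(model, opt, obs):
    # Stage 1: next-observation prediction on observation-only rollouts.
    pred_obs, _ = model(obs[:, :-1])
    loss = nn.functional.mse_loss(pred_obs, obs[:, 1:])
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

def finetune_step(model, opt, obs, act):
    # Stage 2: predict the action taken at each observation.
    _, pred_act = model(obs)
    loss = nn.functional.mse_loss(pred_act, act)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```

At rollout time, the fine-tuned model would be fed the observation history and its action-head output at the last timestep applied in the physics simulator, with the resulting observation appended to the history.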
Padmanabhan, Siddharth; Miyazawa, Kazuki; Horii, Takato; Nagai, Takayuki. 2024. "Data-Efficient Approach to Humanoid Control via Fine-Tuning a Pre-Trained GPT on Action Data."