RLRL: Rocket League Reinforcement Learning

Rocket League is a game where you play football with cars. Becoming good at it is a serious challenge for human players: the highest-level pros who compete in tournaments put in thousands of hours. For an AI, such a fast-paced real-time game is therefore a difficult challenge too.

But because of these properties, it is an ideal arena for Reinforcement Learning (RL). The most popular tool for this in the Rocket League AI community is RLGym, which is great because it's open source! It's a framework for creating, training, and evaluating RL agents for Rocket League.

What is RLGym?

RLGym is a Python library specifically designed to facilitate reinforcement learning for Rocket League. It provides a framework that integrates with the game, allowing developers to train AI agents to play it. RLGym leverages Rocket League's simulation capabilities to offer a highly customizable environment where RL agents can learn from scratch or refine existing strategies. It's really good because it integrates seamlessly with Rocket League, supports popular RL libraries like Stable Baselines3, PyTorch and TensorFlow, and ships with pre-built scenarios and models, which means you don't have to start completely from scratch and flattens the learning curve.
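To give a feel for the API, here's a minimal sketch in the spirit of RLGym's quickstart: create an environment with rlgym.make() and drive it with random actions. Exact defaults and return values may differ between RLGym versions, so treat this as illustrative rather than definitive.

```python
import rlgym

# Create a default Rocket League environment (this launches the game client).
env = rlgym.make()

# Run one episode with random actions, just to exercise the loop.
obs = env.reset()
done = False
while not done:
    action = env.action_space.sample()  # sample a random action
    obs, reward, done, game_info = env.step(action)

env.close()
```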

Existing Models in RLGym

RLGym provides quite a few existing models and baseline agents that you can use to get started, and you can build on these to develop even more sophisticated agents or to test new RL algorithms. Here are some of the notable ones:

  • BasicBot: Very simple, rule-based bot to serve as a benchmark (doesn’t use RL).
  • Nexto: A much more advanced bot trained with deep reinforcement learning, specifically Proximal Policy Optimization (PPO), and one of the more sophisticated models available in RLGym. Even stronger versions exist (up to Supersonic Legend, SSL, level), but because the bot was used for cheating in online games, those stronger versions aren't openly available.
  • RLGym’s Training Models: The library also offers access to several pre-trained models that can be fine-tuned or used as benchmarks. These models are generally trained using state-of-the-art RL algorithms like Deep Q-Learning (DQN), PPO, and Soft Actor-Critic (SAC).

Training an RL Agent with RLGym

Training an RL agent using RLGym involves a few steps. As a quick overview of how it works:

  • Set up the environment with the necessary libraries, such as PyTorch, TensorFlow and Stable Baselines3.
  • Define the observation space (what the agent can see) and the action space (what the agent can do); RLGym allows you to modify both.
  • Create a reward function (what the agent is learning to optimise).
  • Choose an RL algorithm like PPO, DQN or SAC (all usable through Stable Baselines3).
  • Train the agent through RLGym, which lets it interact with the Rocket League environment.
  • After training, evaluate the agent's performance against existing bots like BasicBot or Nexto, then use those results to fine-tune the agent by adjusting hyperparameters, the reward function or the observation space as needed. A training sketch follows this list.
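As a concrete illustration, here is a minimal training sketch combining a custom reward function with Stable Baselines3's PPO. The class and method names (RewardFn, reset, get_reward, get_final_reward) follow RLGym 1.x's documented reward interface, but the reward logic itself and the hyperparameters are placeholder assumptions, not a recommended setup.

```python
import numpy as np
import rlgym
from rlgym.utils.reward_functions import RewardFn
from rlgym.utils.gamestates import GameState, PlayerData
from stable_baselines3 import PPO

class TouchBallReward(RewardFn):
    """Toy reward: +1 whenever the agent touches the ball (placeholder logic)."""

    def reset(self, initial_state: GameState):
        pass  # no per-episode state needed for this toy reward

    def get_reward(self, player: PlayerData, state: GameState,
                   previous_action: np.ndarray) -> float:
        return 1.0 if player.ball_touched else 0.0

    def get_final_reward(self, player: PlayerData, state: GameState,
                         previous_action: np.ndarray) -> float:
        return 0.0  # no extra terminal bonus

# Build an environment that uses the custom reward.
env = rlgym.make(reward_fn=TouchBallReward())

# Train with PPO from Stable Baselines3 (hyperparameters are illustrative).
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=100_000)
model.save("ppo_touch_ball")
```

The same pattern works for swapping in other reward functions, observation builders or algorithms: only the objects passed to rlgym.make() and the Stable Baselines3 class need to change.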

Recent Research and Developments Using RLGym

Researchers have been leveraging RLGym to push the boundaries of what RL agents can achieve in Rocket League. Some notable research papers and projects have explored advanced techniques such as:

Meta-Learning for Adaptability

Training agents that can adapt to different play styles and strategies on the fly, improving their generalization capabilities.

Multi-Agent RL

Developing RL agents that can cooperate and compete with other agents, e.g. in 2v2 and 3v3 games, which makes them far more robust and versatile in those environments.
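For reference, RLGym lets you request team size and opponent spawning when the environment is created, which is the starting point for multi-agent setups like 2v2 self-play. A hedged sketch follows; the keyword names match RLGym 1.x's make() options, but verify them against the version you use:

```python
import rlgym

# Request a 2v2 match with opponents spawned, giving a multi-agent setting.
# (team_size and spawn_opponents are RLGym 1.x make() options; treat them
#  as assumptions if you're on a different version.)
env = rlgym.make(team_size=2, spawn_opponents=True)
```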
