OpenAI Five is the name of a machine learning project that performs as a team of video game bots playing against human players in the competitive five-on-five video game Dota 2. The system was developed by OpenAI, an American artificial intelligence (AI) research and development company founded with the mission to develop safe AI in a way that benefits humanity. OpenAI Five made its first public appearance in 2017, when it was demonstrated in a live one-on-one game against a professional player of the game known as Dendi, who lost to it. The following year, the system had advanced to the point of performing as a full team of five, and began playing against and showing the capability to defeat professional teams.
The company uses Dota 2 as an experiment for general-purpose applied machine learning to capture the unpredictability and continuous nature of the real world. The team stated that the complex nature of the game and its strong reliance on having to work together as a team to win was a major reason it was specifically chosen. The algorithms used for the project have also been applied to other systems, such as controlling a robotic hand. The project has also been compared to a number of other similar cases of AI playing against and defeating humans, such as Watson on the television game show Jeopardy!, Deep Blue in chess, and AlphaGo in the board game Go.
Development on the algorithms used for the bots began in November 2016. OpenAI decided to use Dota 2, a competitive five-on-five video game, as a base because it was popular on the live streaming platform Twitch, had native support for Linux, and had an application programming interface (API) available. Before becoming a team of five, the first public demonstration occurred at The International 2017 in August, the annual premiere championship tournament for the game, where Dendi, a professional Ukrainian player of the game, lost against an OpenAI bot in a live one-on-one matchup. After the match, CTO Greg Brockman explained that the bot had learned by playing against itself for two weeks of real time, and that the learning software was a step in the direction of creating software that can handle complex tasks "like being a surgeon". OpenAI used a methodology called reinforcement learning, in which the bots learn over time by playing against themselves hundreds of times a day for months, and are rewarded for actions such as killing an enemy and destroying towers.
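The reward structure described above can be sketched as a simple weighted sum over in-game events. This is an illustrative example only: the event names and weights below are hypothetical, not OpenAI's actual reward values.

```python
# Hypothetical reward shaping: credit or penalize the agent for in-game
# events such as kills and tower destruction. Weights are made up for
# illustration; OpenAI's actual values are not given in this article.
REWARD_WEIGHTS = {
    "enemy_hero_killed": 1.0,
    "tower_destroyed": 2.0,
    "own_hero_died": -1.0,
}

def shaped_reward(events):
    """Sum the weighted rewards for the events observed in one game tick."""
    return sum(REWARD_WEIGHTS.get(e, 0.0) for e in events)

print(shaped_reward(["enemy_hero_killed", "tower_destroyed"]))  # 3.0
```

During self-play, a signal like this is accumulated over each game and used to update the policy, so behaviors that lead to kills and destroyed towers become more likely over time.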
By June 2018, the bots' capabilities had expanded to playing together as a full team of five, and they were able to defeat teams of amateur and semi-professional players. At The International 2018, OpenAI Five played in two games against professional teams, one against the Brazilian-based paiN Gaming and the other against an all-star team of former Chinese players. Although the bots lost both matches, OpenAI still considered it a successful venture, stating that playing against some of the best players in Dota 2 allowed them to analyze and adjust their algorithms for future games. The bots' final public demonstration occurred in April 2019, where they won a best-of-three series against The International 2018 champions OG at a live event in San Francisco. A four-day online event to play against the bots, open to the public, occurred the same month. There, the bots played in 42,729 public games, winning all but 4,075 of them.
Each OpenAI Five network contains a single-layer, 4096-unit LSTM that observes the current game state extracted from the Dota developer's API. The neural network conducts actions via numerous possible action heads, with no human data involved, and each head has a semantic meaning: for instance, the number of ticks to delay an action, which action to select, or the X or Y coordinate of that action in a grid around the unit. Action heads are computed independently. The AI system observes the world as a list of 20,000 numbers and takes an action by emitting a list of eight enumeration values. Through training, it learns how to encode every action and how to interpret its observations in order to select actions and targets.
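The multi-head action scheme can be sketched as follows. This is a minimal illustration, not OpenAI's architecture: the dimensions are shrunk drastically (the real observation has ~20,000 numbers and the core is a 4096-unit LSTM, replaced here by a single linear map), and the head names and sizes are assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

OBS_SIZE = 16   # stand-in for the ~20,000-number observation
HIDDEN = 32     # stand-in for the 4096-unit LSTM state
# Each head independently picks one value from an enumeration.
# Names and sizes below are illustrative, not OpenAI's actual heads.
HEAD_SIZES = {"delay_ticks": 4, "action_type": 10, "grid_x": 9, "grid_y": 9}

# The shared recurrent core is abstracted as one linear map for brevity.
W_core = rng.normal(size=(OBS_SIZE, HIDDEN))
W_heads = {name: rng.normal(size=(HIDDEN, n)) for name, n in HEAD_SIZES.items()}

def act(observation):
    """Compute each action head independently from the shared hidden state."""
    hidden = np.tanh(observation @ W_core)
    action = {}
    for name, w in W_heads.items():
        logits = hidden @ w
        probs = np.exp(logits - logits.max())  # softmax over this head
        probs /= probs.sum()
        action[name] = int(rng.choice(len(probs), p=probs))
    return action

print(act(rng.normal(size=OBS_SIZE)))
```

The key property mirrored here is that each head is a separate categorical choice computed from the same shared state, which is what lets the full action be emitted as a short list of enumeration values.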
OpenAI Five was developed as a general-purpose reinforcement learning training system on an infrastructure called "Rapid". Rapid consists of two layers: one spins up thousands of machines and helps them "talk" to each other, while a second layer runs the training software. By 2018, OpenAI Five had been playing around 180 years' worth of games in reinforcement learning each day, running on 256 GPUs and 128,000 CPU cores, using a policy gradient method dubbed "Proximal Policy Optimization".
|                                    | OpenAI 1v1 bot (2017)             | OpenAI Five (2018)                                          |
|------------------------------------|-----------------------------------|-------------------------------------------------------------|
| CPUs                               | 60,000 CPU cores on Microsoft Azure | 128,000 pre-emptible CPU cores on Google Cloud Platform (GCP) |
| GPUs                               | 256 K80 GPUs on Azure             | 256 P100 GPUs on GCP                                        |
| Experience collected               | ~300 years per day                | ~180 years per day                                          |
| Size of observation                | ~3.3 kB                           | ~36.8 kB                                                    |
| Observations per second of gameplay | 10                                | 7.5                                                         |
| Batch size                         | 8,388,608 observations            | 1,048,576 observations                                      |
| Batches per minute                 | ~20                               | ~60                                                         |
Prior to OpenAI Five, AI systems had successfully taken on human players in other games, such as Watson in Jeopardy!, Deep Blue in chess, and AlphaGo in Go. Compared with those games, Dota 2 differs in the ways explained below:
Long time horizons: The bots run at 30 frames per second for an average match length of 45 minutes, which results in roughly 80,000 ticks per game. OpenAI Five observes every fourth frame, generating 20,000 moves per game. By comparison, chess usually ends before 40 moves, while Go ends before 150 moves.
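The tick and move counts above follow directly from the frame rate and match length:

```python
FPS = 30            # game frames per second
MATCH_MINUTES = 45  # average match length
OBSERVE_EVERY = 4   # OpenAI Five acts on every fourth frame

ticks_per_game = FPS * MATCH_MINUTES * 60
moves_per_game = ticks_per_game // OBSERVE_EVERY

print(ticks_per_game)  # 81000 -> the ~80,000 ticks cited above
print(moves_per_game)  # 20250 -> the ~20,000 moves cited above
```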
Partially observed state of the game: Players and their allies can only see the map directly around them. The rest of it is covered in a fog of war which hides enemy units and their movements. Playing Dota 2 thus requires making inferences based on this incomplete data, as well as predicting what the opponent could be doing at the same time. By comparison, chess and Go are "full-information games", as they do not hide elements from the opposing player.
Continuous action space: Each playable character in a Dota 2 game, known as a hero, can take dozens of actions that target either another unit or a position. The OpenAI Five developers discretized the space into 170,000 possible actions per hero. Not counting the perpetual aspects of the game, there are an average of ~1,000 valid actions each tick. By comparison, the average number of possible actions is 35 in chess and 250 in Go.
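Discretizing a continuous targeting space works by binning each coordinate into a fixed grid around the hero. The sketch below shows the idea for one axis; the range and bucket count are illustrative assumptions, not OpenAI's actual values.

```python
def discretize(offset, max_range=1000.0, buckets=9):
    """Map a continuous coordinate offset (relative to the hero) to one of
    `buckets` grid cells. max_range and buckets are illustrative values."""
    # Clamp to the targetable range, then scale into [0, buckets).
    clamped = max(-max_range, min(max_range, offset))
    index = int((clamped + max_range) / (2 * max_range) * buckets)
    return min(index, buckets - 1)

print(discretize(0.0))      # 4 (center cell)
print(discretize(-1000.0))  # 0 (far edge)
print(discretize(999.9))    # 8 (opposite edge)
```

Applying such a binning to both axes and combining it with the per-hero action choices is how a continuous space can be reduced to a large but finite set of enumerable actions, on the order of the 170,000 per hero mentioned above.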
Continuous observation space: Dota 2 is played on a large map with ten heroes, five on each team, along with dozens of buildings and non-player character (NPC) units. The OpenAI system observes the state of a game through the developers' bot API, as 20,000 numbers that constitute all the information a human is allowed to access. By comparison, a chess board is represented as about 70 enumeration values, whereas a Go board is about 400 enumeration values.
OpenAI Five has received acknowledgement from the AI, tech, and video game communities at large. Microsoft co-founder Bill Gates called it a "big deal", as its victories "required teamwork and collaboration". Chess player Garry Kasparov, who lost against the Deep Blue AI in 1997, stated that despite their losing performance at The International 2018, the bots would eventually "get there, and sooner than expected".
In a conversation with MIT Technology Review, AI experts also considered the OpenAI Five system a significant achievement, noting that Dota 2 was an "extremely complicated game", so even beating non-professional players was impressive. PC Gamer wrote that its wins against professional players were a significant event in machine learning. In contrast, Motherboard wrote that the victory was "basically cheating" due to the simplified hero pools on both sides, as well as the fact that the bots were given direct access to the API, as opposed to using computer vision to interpret pixels on the screen. The Verge wrote that the bots were evidence that the company's approach to reinforcement learning and its general philosophy about AI was "yielding milestones".
In 2019, DeepMind unveiled a similar bot for StarCraft II, AlphaStar. Like OpenAI Five, AlphaStar used reinforcement learning and self-play. The Verge reported that "the goal with this type of AI research is not just to crush humans in various games just to prove it can be done. Instead, it's to prove that — with enough time, effort, and resources — sophisticated AI software can best humans at virtually any competitive cognitive challenge, be it a board game or a modern video game." They added that the DeepMind and OpenAI victories were also a testament to the power of certain uses of reinforcement learning.