So I made this 3D pong game you can't even play

In a previous hackathon I participated in, there was a project that prototyped the use of machine learning to “evolve” the capability of in-game AI agents using user-provided input from a Virtual Reality game. I was pretty blown away by the idea of evolution through repetitive training and exposure to user-provided input. I’ve also seen a couple of videos where developers created simulations with Unity, and I thought it might be cool to give it a try myself.

Fun with ML-Agents

Unity has a ton of cool plugins and frameworks to play around with, and I’ve been developing casually with Unity for just about a year now, so I thought it would be fun to create my very own agent that would learn to beat me at my own game. I’m by no means a machine learning expert, so I’m glad that Unity provides ML-Agents as an easy way for developers to use some serious machine learning algorithms without having to do any of the heavy lifting.

The Unity ML-Agents Toolkit has many great examples you can launch and play around with to get a feel for the basic functionality and code. Looking at all the examples made me want to write my own toy example, and I decided that two players playing a game of Pong would be pretty cool; that would be my Hello World program with ML-Agents! But first, I needed to make the game playable and subsequently let the agent take control 😈.

Iteration 1: Just Pong

The first thing to build was a simple clone of the classic Pong game for the agent to play. This was simple enough since it’s Pong, and even simpler since I decided to have the agent play against a wall to begin with, which I figured was a pretty reasonable sparring partner.

The next thing to determine was what inputs to give the agent and what actions it could take. To deflect the pong (the ball that bounces around), the useful observations are the position of the paddle, the position of the pong, and the velocity of the pong. With ML-Agents, it was as simple as calling a method to add those variables 🙂 and, after some hand-waving-black-box-machine-learning which I don’t really understand, voilà, it’s alive!
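
To give a rough idea of what that looks like in code, here’s a minimal sketch assuming a recent ML-Agents release; the paddle and pong fields are hypothetical names I’d wire up in the Unity inspector:

    using Unity.MLAgents;
    using Unity.MLAgents.Sensors;
    using UnityEngine;

    public class PongAgent : Agent
    {
        // Hypothetical references assigned in the inspector.
        public Transform paddle;
        public Rigidbody pong;

        public override void CollectObservations(VectorSensor sensor)
        {
            // The three things the agent gets to "see": where the paddle is,
            // where the pong is, and how fast the pong is moving.
            sensor.AddObservation(paddle.localPosition);
            sensor.AddObservation(pong.transform.localPosition);
            sensor.AddObservation(pong.velocity);
        }
    }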

Initially, the paddle moves about randomly, hitting the pong only sometimes, but the frequency of misses slowly decreases; this is illustrated by the decreasing number of red flashes, each indicating a reset of the simulation due to a miss.
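
The reset-on-miss behaviour is just a couple of calls on the Agent class. Continuing the hypothetical PongAgent sketch from above, with a (made-up) trigger volume behind the paddle calling into it:

    // Called by a hypothetical trigger collider behind the paddle on a miss:
    // penalize the agent and end the episode, which resets the simulation
    // (the "red flash").
    public void OnPongMissed()
    {
        AddReward(-1.0f);
        EndEpisode();
    }

    // Called when the paddle deflects the pong, so hits are encouraged.
    public void OnPongDeflected()
    {
        AddReward(0.1f);
    }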

Eventually, we get a not-so-terrible agent controlling a paddle, able to detect and deflect the pong.

Pong Agent Training

Iteration 2: The Next Dimension

Extending the 2D Pong to the next dimension was pretty simple since everything was already implemented with 3D assets. All that was needed was to extend the side walls, add a roof to the playing field, and of course extend the paddle into the new dimension as well.
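
On the agent side, the main change is that the paddle now needs two continuous actions instead of one, so it can slide both sideways and up/down. Roughly, again as a hedged sketch against the current ActionBuffers API (moveSpeed is a made-up tuning field):

    // (also needs: using Unity.MLAgents.Actuators;)

    public float moveSpeed = 5f;   // hypothetical paddle speed

    public override void OnActionReceived(ActionBuffers actions)
    {
        float moveX = actions.ContinuousActions[0];   // left/right, as before
        float moveY = actions.ContinuousActions[1];   // up/down, the new dimension
        paddle.localPosition += new Vector3(moveX, moveY, 0f) * moveSpeed * Time.deltaTime;
    }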

After some tweaking, a new agent was trained for the new environment, and it worked just as well.
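
For completeness, training itself is kicked off from the command line with the toolkit’s trainer, pointing it at a trainer configuration file; the file name and run id here are just placeholders:

    mlagents-learn config/pong_trainer_config.yaml --run-id=pong3d-01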

Pong3D Agent Training

Iteration 3: Multi-Non-Player

With an agent that can now play pong in a 3D environment, we simply duplicate that agent and rotate it 180 degrees to play against the original. However, I had to change the frame of reference for each agent’s observations (since the two agents now view the same playing field from opposite sides); a sketch of that change is below, followed by the final result!
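
The frame-of-reference change amounts to expressing the observations in each agent’s local space instead of world space, so both agents “see” the pong coming toward them in the same way. Using Unity’s standard Transform helpers, the observation sketch from earlier becomes something like:

    public override void CollectObservations(VectorSensor sensor)
    {
        // Convert world-space positions and velocity into this agent's local
        // frame, so the rotated duplicate observes an equivalent game.
        sensor.AddObservation(transform.InverseTransformPoint(paddle.position));
        sensor.AddObservation(transform.InverseTransformPoint(pong.transform.position));
        sensor.AddObservation(transform.InverseTransformDirection(pong.velocity));
    }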

Pong3D Arena

Resources

Unity ML-Agents Toolkit Documentation

Great talk by Danny Lange & Arthur Juliani on ML-Agents