Evolving Agents with NEAT

Neuroevolution · Pong + Smash-style experiments

Acknowledgment

Thanks to the contributors and collaborators who made these experiments possible.

Background

NEAT (NeuroEvolution of Augmenting Topologies) evolves both network structure and weights via genetic algorithms, starting from minimal networks and progressively complexifying them. Speciation helps preserve innovation by protecting new structures until they have had time to mature.
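As a rough, hedged sketch of that complexification (omitting NEAT's innovation-number bookkeeping), a genome can be modeled as a list of connection genes, with mutations that add connections and nodes:

```python
import random

# Minimal sketch: a genome is a list of connection genes
# [in_node, out_node, weight, enabled]. Real NEAT also assigns each
# gene an innovation number, used for crossover and speciation.

def mutate_add_connection(genome, num_nodes):
    """Structural mutation: wire up two nodes that aren't yet connected."""
    a, b = random.sample(range(num_nodes), 2)
    if not any(g[0] == a and g[1] == b for g in genome):
        genome.append([a, b, random.gauss(0.0, 1.0), True])

def mutate_add_node(genome, num_nodes):
    """Structural mutation: split an enabled connection with a new node."""
    enabled = [g for g in genome if g[3]]
    if not enabled:
        return num_nodes
    conn = random.choice(enabled)
    conn[3] = False                               # disable the old link
    new = num_nodes                               # id for the new node
    genome.append([conn[0], new, 1.0, True])      # in -> new, weight 1
    genome.append([new, conn[1], conn[2], True])  # new -> out, old weight
    return num_nodes + 1
```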

Objective (intuition)

A common framing for gradient-based policy learning is likelihood maximization:

$$\max_\theta \; \mathbb{E}_{x \sim P}\left[ \log p_\theta(x) \right]$$

NEAT, by contrast, searches over both network weights and topology using evolutionary operators (selection, crossover, mutation), maximizing a task-specific fitness function rather than following gradients.
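A minimal sketch of that outer search loop, with `evaluate`, `crossover`, and `mutate` as placeholder operators (full NEAT layers speciation and structural mutation on top of this):

```python
import random

def evolve(population, evaluate, crossover, mutate,
           generations=100, elite_frac=0.2):
    """Generic evolutionary search: select the fittest, recombine, mutate.
    `evaluate` maps a genome to a scalar fitness (higher is better)."""
    for _ in range(generations):
        ranked = sorted(population, key=evaluate, reverse=True)
        elites = ranked[:max(2, int(elite_frac * len(ranked)))]
        children = []
        while len(elites) + len(children) < len(population):
            p1, p2 = random.choices(elites, k=2)  # parents, with replacement
            children.append(mutate(crossover(p1, p2)))
        population = elites + children
    return max(population, key=evaluate)
```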

Pong experiment

This experiment was a collaboration with Jet Chiang (see his post) and was built with neat-python.
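A hedged sketch of the training loop using neat-python's API; `play_pong` and the config path `pong-config.txt` are placeholders, not our exact setup:

```python
import neat

def play_pong(net):
    # Placeholder rollout: feed game state into the network and return a
    # match score. The input size must match num_inputs in the config file;
    # replace this with the real environment.
    return sum(net.activate([0.5, 0.5, 0.5, 0.0]))

def eval_genomes(genomes, config):
    # neat-python calls this each generation with (genome_id, genome) pairs;
    # our job is to assign each genome a fitness.
    for genome_id, genome in genomes:
        net = neat.nn.FeedForwardNetwork.create(genome, config)
        genome.fitness = play_pong(net)

config = neat.Config(neat.DefaultGenome, neat.DefaultReproduction,
                     neat.DefaultSpeciesSet, neat.DefaultStagnation,
                     "pong-config.txt")  # placeholder path to a NEAT config
population = neat.Population(config)
population.add_reporter(neat.StdOutReporter(True))  # per-generation stats
winner = population.run(eval_genomes, 50)  # evolve for up to 50 generations
```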

Demo of a self-play Pong agent after ~5 minutes of training.

How NEAT works

Speciation, crossover, and mutation are core components. A standard compatibility distance used for speciation is:

$$\delta = \frac{c_1 E}{N} + \frac{c_2 D}{N} + c_3 \bar{W}$$

where $$E$$ is the number of excess genes, $$D$$ the number of disjoint genes, $$\bar{W}$$ the average weight difference across matching genes, and $$N$$ the number of genes in the larger genome (a size normalizer, often set to 1 for small genomes).
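As a concrete sketch of that formula, assuming each genome is a dict mapping innovation number to connection weight (coefficient defaults follow the values reported in the NEAT paper):

```python
def compatibility_distance(g1, g2, c1=1.0, c2=1.0, c3=0.4):
    """NEAT compatibility distance; g1, g2 map innovation number -> weight."""
    keys1, keys2 = set(g1), set(g2)
    matching = keys1 & keys2
    cutoff = min(max(keys1), max(keys2))
    non_matching = keys1 ^ keys2
    # Excess genes lie beyond the other genome's innovation range;
    # the remaining non-matching genes are disjoint.
    excess = sum(1 for k in non_matching if k > cutoff)
    disjoint = len(non_matching) - excess
    n = max(len(g1), len(g2))
    n = n if n >= 20 else 1  # the paper uses N=1 for small genomes
    w_bar = (sum(abs(g1[k] - g2[k]) for k in matching) / len(matching)
             if matching else 0.0)
    return c1 * excess / n + c2 * disjoint / n + c3 * w_bar
```

Genomes whose distance $$\delta$$ falls below a threshold are grouped into the same species, so novel structures compete primarily within their own niche.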

Smash-style evolution

We also explored combining NEAT variants with sequence models for a Smash-style environment on UTMIST’s AI^2 platform.