Coders Strike Back

Introduction

If you’re not already familiar with the website CodinGame, I highly recommend you check it out. Its concept is pretty simple: you solve programming puzzles and AI challenges, and you compete against other players, all from your browser.

The purpose of this document is to give a step by step walkthrough of how I reached Legend League in the problem Coders Strike Back.
My source code (C++) is available as well, in my GitHub repository. Feel free to check how I actually implemented things, should my explanations not be enough. However, I would advise against submitting my code as your own – it would spoil the fun of actually understanding and solving the problem.


Problem overview

Coders Strike Back is about a pod race, similar to the one in Star Wars: Episode I.

screenshot


Up to Bronze

Each of those steps is relatively simple, and aims to familiarize us with the problem.

First question

First boss

Wood 2

Wood 1


Bronze to Gold

This solution consists of several features, which individually weren’t enough to break out of Bronze League. Their combination, however, got me promoted directly to Gold.

CheckpointManager (when best to use BOOST)

The idea:

Implementation:

Thrust management/angle

The pseudo-code from earlier makes sense for a small angle (< ε) and for one above 90°.

However, the transition between ε and 90° feels primitive, so I tried to smooth it.
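One simple improvement is to interpolate the thrust between those two regimes instead of switching abruptly. A minimal sketch, assuming a linear falloff; the constants (ε = 18°, max thrust 100) are illustrative, not my exact tuning:

```cpp
#include <cmath>

// Thrust as a function of the angle to the target: full thrust below
// epsilon, zero above 90 degrees, and a linear falloff in between.
// The constants are illustrative, not necessarily my actual values.
int thrustForAngle(double angleDeg, double epsilon = 18.0, int maxThrust = 100) {
    double a = std::fabs(angleDeg);
    if (a <= epsilon) return maxThrust;  // facing the checkpoint: full thrust
    if (a >= 90.0)    return 0;          // facing away: cut the engine
    double t = 1.0 - (a - epsilon) / (90.0 - epsilon);
    return static_cast<int>(maxThrust * t);
}
```

Any monotonic falloff (quadratic, cosine…) would work just as well here; the point is only to avoid the hard step at ε.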

Rotation

There is no information on how the pod turns at this point (is there a maximum turn rate?).

This is rather frustrating, as we would often like the pod to steer more effectively (turn faster, and earlier) to avoid drifting.

I tried to achieve that by adding an offset to the pod’s target: -h * podSpeed, where h is a tuning constant.
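In code, that offset could look like this (the names are mine, and h = 3 is just an illustrative value, not necessarily the one I used):

```cpp
// Minimal 2D vector for positions/velocities.
struct Vec { double x, y; };

// Aim at the checkpoint minus h times the current velocity, so the pod
// starts compensating for its drift before reaching the checkpoint.
Vec steeringTarget(Vec checkpoint, Vec velocity, double h = 3.0) {
    return { checkpoint.x - h * velocity.x,
             checkpoint.y - h * velocity.y };
}
```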


Gold (Heuristic approach)

Input changes

More features

Collisions/Shield

Racer/Interceptor behaviour

Adding this behaviour allowed me to go from rank 1258 to 482 (out of 4386 in Gold league at the time), but wasn’t enough to beat the boss.


Gold to Legend (Simulation approach)

At this point, I was unsure how much I could still improve my solution, with the same heuristic approach.

However, the “Expert rules” we are given describe precisely how the pods move:

On each turn the pods movements are computed this way:

  • Rotation: the pod rotates to face the target point, with a maximum of 18 degrees (except for the 1st round).
  • Acceleration: the pod’s facing vector is multiplied by the given thrust value. The result is added to the current speed vector.
  • Movement: The speed vector is added to the position of the pod. If a collision would occur at this point, the pods rebound off each other.
  • Friction: the current speed vector of each pod is multiplied by 0.85.
  • The speed’s values are truncated and the position’s values are rounded to the nearest integer.

Collisions are elastic. The minimum impulse of a collision is 120.

A boost is in fact an acceleration of 650. The number of boosts available is shared between the pods. If no boost is available, the maximum thrust is used.

A shield multiplies the Pod mass by 10. The provided angle is absolute. 0° means facing EAST while 90° means facing SOUTH.

What we get from that is twofold:

Building the simulation

Outside of the collisions, implementing those rules to update the pods’ state turn by turn isn’t particularly hard, although it required some debugging/fine-tuning. In the end, my implementation appeared to be perfectly accurate.
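The collision-free part of a turn can be sketched directly from the rules quoted above. This is my reading of them; the struct layout and names are mine:

```cpp
#include <algorithm>
#include <cmath>

struct Pod {
    double x, y;    // position
    double vx, vy;  // speed
    double angle;   // absolute facing in degrees (0 = EAST, 90 = SOUTH)
};

// One collision-free turn: rotate (capped at 18 degrees), accelerate,
// move, apply friction, then truncate the speed and round the position.
void simulateTurn(Pod& p, double desiredAngle, int thrust) {
    const double kPi = 3.14159265358979323846;

    // Rotation: at most 18 degrees toward the desired angle
    double delta = desiredAngle - p.angle;
    while (delta > 180.0)  delta -= 360.0;
    while (delta < -180.0) delta += 360.0;
    p.angle += std::max(-18.0, std::min(18.0, delta));

    // Acceleration: facing vector times thrust, added to the speed
    double rad = p.angle * kPi / 180.0;
    p.vx += std::cos(rad) * thrust;
    p.vy += std::sin(rad) * thrust;

    // Movement
    p.x += p.vx;
    p.y += p.vy;

    // Friction (0.85), then truncation (speed) and rounding (position)
    p.vx = std::trunc(p.vx * 0.85);
    p.vy = std::trunc(p.vy * 0.85);
    p.x = std::round(p.x);
    p.y = std::round(p.y);
}
```

Comparing the predicted state against the actual input on the next turn is the easiest way to validate each of these steps.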

Collisions, however, are the tricky part, as they are described in a rather obscure way:

the pods rebound off each other.

Collisions are elastic. The minimum impulse of a collision is 120.

I’m still not sure how accurate that prediction is (compared to how the collisions are actually computed) but it looked good enough.
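For reference, here is my reading of those two sentences: apply the standard momentum-conserving impulse along the collision normal, pad it up to at least 120, and apply it a second time (the second application is what makes the collision elastic). The details, in particular where the minimum kicks in, are assumptions on my part, not documented behaviour:

```cpp
#include <cmath>

struct Body {
    double x, y, vx, vy;
    double mass;  // 1 normally, 10 with the shield up
};

// Approximate rebound between two touching bodies. The impulse is applied
// twice: once as the momentum-conserving half, once more (padded to the
// minimum of 120) to make the collision elastic.
void rebound(Body& a, Body& b) {
    double nx = b.x - a.x, ny = b.y - a.y;      // collision normal
    double dist2 = nx * nx + ny * ny;
    double mcoeff = (a.mass + b.mass) / (a.mass * b.mass);
    double dvx = b.vx - a.vx, dvy = b.vy - a.vy;
    double product = (nx * dvx + ny * dvy) / (dist2 * mcoeff);
    double fx = nx * product, fy = ny * product;

    // First half of the impulse
    a.vx += fx / a.mass; a.vy += fy / a.mass;
    b.vx -= fx / b.mass; b.vy -= fy / b.mass;

    // Pad the half-impulse up to the minimum of 120, then apply it again
    double impulse = std::sqrt(fx * fx + fy * fy);
    if (impulse < 120.0 && impulse > 0.0) {
        fx *= 120.0 / impulse;
        fy *= 120.0 / impulse;
    }
    a.vx += fx / a.mass; a.vy += fy / a.mass;
    b.vx -= fx / b.mass; b.vy -= fy / b.mass;
}
```

As a sanity check, two equal-mass pods colliding head-on above the minimum impulse should simply swap their velocities.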

Comparing solutions

Now that we can simulate a solution (a sequence of moves for both pods), how can we tell how good it actually is?

We can build a fitness function to quantify that, following a reasoning similar to the heuristic approach: we just need to rate the state of the pods once we are done simulating a solution.
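As an illustration, a racer’s score might look like the following. The weight (50000, larger than any distance on the 16000 × 9000 map) and the exact terms are placeholders for whatever your heuristic valued, not my actual fitness function:

```cpp
#include <cmath>

// Hypothetical fitness for a racer pod at the end of a simulated solution:
// heavily reward checkpoint progress, then break ties with the remaining
// distance to the next checkpoint. The weight is illustrative.
double racerFitness(int checkpointsPassed,
                    double podX, double podY,
                    double nextCpX, double nextCpY) {
    double dist = std::hypot(nextCpX - podX, nextCpY - podY);
    return 50000.0 * checkpointsPassed - dist;
}
```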

Building a solution

We do have 75 ms to run costly computations every turn, but that still won’t be enough to build a good solution through brute force. We need a smarter way to explore the solution space.
One way to proceed, which was teased in the tags of the problem description, is genetic algorithms. That’s the approach I chose.
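The per-turn search then looks roughly like this. The Solution encoding, evaluate and mutate are placeholders for the pieces discussed in the following sections; only the time-bounded loop structure is the point here:

```cpp
#include <chrono>
#include <cstdlib>
#include <vector>

// Placeholder candidate: encoded moves for both pods over the simulated
// turns, plus its cached fitness.
struct Solution {
    std::vector<int> moves;
    double fitness = -1e18;
};

// Evolve a population until the time budget is nearly spent, keeping the
// best individual ever seen. Selection here is deliberately crude.
Solution evolve(std::vector<Solution> population,
                double (*evaluate)(const Solution&),
                void (*mutate)(Solution&),
                int budgetMs) {
    auto start = std::chrono::steady_clock::now();
    for (auto& s : population) s.fitness = evaluate(s);
    Solution best = population[0];
    while (std::chrono::duration_cast<std::chrono::milliseconds>(
               std::chrono::steady_clock::now() - start).count() < budgetMs) {
        // Mutate a random individual; the child replaces a random victim
        // only if it scores better.
        Solution child = population[std::rand() % population.size()];
        mutate(child);
        child.fitness = evaluate(child);
        std::size_t victim = std::rand() % population.size();
        if (child.fitness > population[victim].fitness) population[victim] = child;
        if (child.fitness > best.fitness) best = child;
    }
    for (auto& s : population)
        if (s.fitness > best.fitness) best = s;
    return best;
}
```

In the real bot, the first move of the best solution is what gets printed each turn, and the rest of it seeds the next turn’s population.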

Genetic evolution

Mutations

When a mutation occurs, we simply select one move to modify somewhere in the solution (i.e. we select a turn and a pod).
We modify exactly one of the following values in that move:

  • the rotation
  • the thrust
  • the shield flag
  • the boost flag

Which of the values to modify is also selected at random, the rotation and thrust having more weight.

I realized managing the boost there was a waste of resources, as it can only be used once during the entire race. I fixed that by setting it on/off arbitrarily on the first turn, and never worrying about it again. We could instead try to manage it in a cleverer way, but I didn’t.
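A sketch of such a mutation, with the boost already factored out as just described. The move encoding and the 40/40/20 weights are illustrative assumptions, not my exact parameters:

```cpp
#include <cstdlib>
#include <vector>

// One move of a candidate solution (one turn, one pod). Names are mine.
struct Move {
    int angleDelta;  // rotation applied this turn, in [-18, 18]
    int thrust;      // 0..100
    bool shield;
};

// Pick one move at random, then change exactly one of its fields, with
// rotation and thrust weighted more heavily than the shield.
void mutate(std::vector<Move>& solution) {
    Move& m = solution[std::rand() % solution.size()];
    int r = std::rand() % 10;
    if (r < 4)       // 40%: new rotation
        m.angleDelta = std::rand() % 37 - 18;
    else if (r < 8)  // 40%: new thrust
        m.thrust = std::rand() % 101;
    else             // 20%: toggle the shield
        m.shield = !m.shield;
}
```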

Result

You have been promoted to Legend League!

This implementation got promoted to Legend league, where it got the rank of #495 (out of 725).


How to go further

My goal being simply to reach Legend league, I stopped there. However, my solution is far from perfect, and there are a couple of ways to improve it further.

Simulation

From my testing, the simulation is 100% accurate when it comes to anything that isn’t collision/rebound.

Since the description of that last part is relatively obscure, and my predictions looked good enough, I didn’t bother looking into it further, but maybe there’s something to fix here.

Performance optimization

Simply making the code more efficient allows us to test more solutions every turn. This can in turn lead to a better solution, without changing anything else.

I already replaced std::rand with this method described by Intel, to compute random numbers much faster; profiling the code and optimizing the intensive parts will probably lead to further gains.
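For reference, the routine in question is a tiny linear congruential generator along these lines. I believe this matches Intel’s published fast_rand snippet, but check the original article before relying on it:

```cpp
// Tiny LCG in the style of Intel's fast_rand: much cheaper than
// std::rand, at the cost of low-quality 15-bit output, which is fine
// for driving mutation decisions.
static unsigned int g_seed = 42;  // starting seed; reseed as you like

inline int fastRand() {
    g_seed = 214013u * g_seed + 2531011u;
    return (g_seed >> 16) & 0x7FFF;
}
```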

Fitness function

It seems pretty good already, but it might be possible to consider additional things I didn’t think of.

Genetic evolution

This implementation is pretty basic:

Focusing on these points could improve our results.

Not ignoring the opponent’s input

I completely ignored the opponent’s moves. The simulation just assumes their pods will never rotate/thrust/shield/boost during the following turns.

Adapting the Simulation class, to manage the moves from the opponent as well, shouldn’t be too much trouble.
We could then make our population evolve against a solution attributed to the opponent.
That solution could just be generated via an if-else heuristic like what we did at first. We could also perform a genetic evolution, from the opponent’s point of view, during a smaller fraction of the turn.

Magic numbers

There are a lot of magic numbers everywhere (how many turns do we simulate, how big is our population of solutions, bias for many values…). Finding and testing a better configuration might lead to better results.


Closing thoughts