Before now, I was storing each level in a two-dimensional array. This worked, but it had a problem. Once the level was built, there was no easy way to add squares to the top and left, and I eventually would have needed to solve that to implement Bit-Men and other programs that might add squares to the grid. I got around that by preemptively adding hundreds of empty squares to the edge of the array. Needless to say, this wasn’t ideal. It inflated the array, and the size of the grid was still limited.
I’ve now switched to using a dictionary, with each square indexed by its position. This means I can use negative numbers, which allows me to expand the grid indefinitely without adding superfluous elements. It’s a much better solution.
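To make the idea concrete, here's a minimal sketch of the dictionary approach in Python (the game's actual language and class names aren't shown here, so these are just for illustration). Missing keys act as empty squares, and negative coordinates cost nothing:

```python
# Sparse grid: squares keyed by (x, y) tuples, so negative coordinates
# work and the grid can grow in any direction without padding.
class Grid:
    def __init__(self):
        self.squares = {}  # (x, y) -> square data

    def set_square(self, x, y, value):
        self.squares[(x, y)] = value

    def get_square(self, x, y):
        # A missing key is simply an empty square.
        return self.squares.get((x, y))

grid = Grid()
grid.set_square(0, 0, "start")
grid.set_square(-3, -7, "expanded")  # no resizing or shifting needed
```

Compare that with a 2D array, where inserting a column at x = -1 means shifting every existing element and rewriting every stored coordinate.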
Also, if you’re reading this blog (and not a lot of people read this blog) then you might be wondering why my posts have slowed down recently. I think it’s because, near the beginning, there were a lot of big but relatively short tasks I could do, and progress was very visible. Now that I’ve picked most of the low-hanging fruit, I’m left with tasks with a worse effort-to-result ratio, so each one takes longer. The law of diminishing returns.
I’ve been rewriting the enemy AI to try to make it more intelligent. My first thought was to take a holistic approach and have a brain that considers where all the player programs are, and decides how it can use all its programs to cause the most damage. However, when writing it, I discovered it was more difficult than I expected.
It’s easy enough to see what each program can do individually given the state of the grid, but of course the grid changes whenever a program moves. The script might determine that two of its programs could both attack one big player program and kill it, but after the first one moves, its trail of sectors blocks the second one’s route, or its attack removes the very sector the second one would have attacked.
I considered brute-forcing it and iterating through every possible combination of moves to find the best one, but thanks to exponential growth there could have been tens of millions of combinations, which would have taken a long time to get through. Imagine playing the game, ending your turn, then staring at a still grid for 20 minutes while the enemy works out what to do. I decided the holistic approach was more trouble than it’s worth, and instead did something more similar to the old AI, moving programs one by one.
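The arithmetic behind that explosion is simple enough to show. The numbers below are purely hypothetical (the post doesn't give actual counts), but they illustrate how quickly per-program choices multiply:

```python
# Rough illustration of why exhaustive search blows up: with m candidate
# moves per program and n programs, the move combinations multiply.
# Both numbers here are made up for illustration.
moves_per_program = 20   # hypothetical reachable squares per program
programs = 6             # hypothetical enemy program count

combinations = moves_per_program ** programs
print(combinations)  # 64000000
```

Sixty-four million grid evaluations per turn, and adding just one more program multiplies that by another factor of twenty.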
It’s not a complete loss. The new AI considers the order in which it moves programs, and moves the ones with only one reachable player program first. This gets the easy decisions out of the way and narrows down the choice for the programs with more targets. The old AI moved programs in an arbitrary order. The new one also moves programs more if they can’t get to any player programs because of islands or barricades.
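The ordering idea above can be sketched as a sort on the number of reachable targets. This is my own illustrative version, not the game's actual code, and the program names are invented:

```python
# Sketch of the move-ordering heuristic: programs with the fewest
# reachable player programs move first, so their forced choices are
# resolved before programs with several options have to decide.
def order_programs(programs, reachable_targets):
    """reachable_targets maps each program to the player programs it can reach."""
    return sorted(programs, key=lambda p: len(reachable_targets[p]))

# Hypothetical turn state.
reachable = {
    "watchman": ["bit"],                          # one target: easy decision
    "sentinel": ["bit", "hack", "slingshot"],     # three targets: decide last
    "guard_dog": ["bit", "hack"],
}
print(order_programs(reachable.keys(), reachable))
# ['watchman', 'guard_dog', 'sentinel']
```

After each forced move, the remaining programs' target lists could be recomputed on the updated grid, which keeps the one-by-one approach from tripping over its own trails quite so often.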