Balancing the human artificial intelligence has been one of the harder parts of development. The difficulty stems from trying to make the AI do things that seem “logical” to the player. For example, consider two desirable actions: “harvest food” and “eat food”. We need to determine which action takes priority. If someone is hungry, the logical thing to do is to see if there’s any food around and eat it. OK, simple. We set the desire to eat higher than the desire to harvest food.
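In rough C++, that naive fixed-priority setup might look something like the sketch below. The names and numbers are illustrative assumptions, not the actual game code.

```cpp
// Minimal sketch of a fixed-priority desire system (illustrative only;
// the names and values are assumptions, not the actual Tribe Of Pok code).
#include <algorithm>
#include <string>
#include <vector>

struct Desire {
    std::string action;
    float priority;  // fixed, hand-tuned value
};

// Pick whichever desire has the highest fixed priority.
const Desire& chooseAction(const std::vector<Desire>& desires) {
    return *std::max_element(desires.begin(), desires.end(),
        [](const Desire& a, const Desire& b) { return a.priority < b.priority; });
}

int main() {
    // "Eat food" outranks "harvest food", so a hungry tribe member always
    // tries to eat first, even when there is nothing around to eat.
    std::vector<Desire> desires = {{"harvest food", 0.5f}, {"eat food", 0.8f}};
    return chooseAction(desires).action == "eat food" ? 0 : 1;
}
```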

[gif: 20150215 FoodDisplay]

What if there isn’t anything to eat lying around? Since the desire to eat is higher, they will continually try to eat and never try to harvest food first. The player thinks, “What a stupid AI”. The logical action would be to harvest food first, and then eat the food. OK, then let’s switch it around so we set the person’s desire to harvest food to be higher than the desire to eat food.

[gif: 20150215 TreeDisplay]

Now what if the player creates 50 “harvest food” tasks on 50 different plants? Remember, the tribe member’s desire to harvest food is set higher than their desire to eat food. The AI thought process will be: hungry -> harvest food -> harvest food -> … -> harvest food -> STARVING -> harvest food -> harvest food -> starved to death. They will continue harvesting until they either finish all the harvest tasks or starve to death, despite being surrounded by plenty of recently harvested food. Once again the player thinks, “What a stupid AI”.

One way to work around this is having a variable desire to eat that fluctuates depending on current conditions. Firstly, add a check for food’s existence before trying to eat food. If there is nothing to eat, then set the desire to eat to be zero, no matter how hungry the person. That way the person will carry out any harvest tasks first. Secondly, if something to eat exists, make the person’s desire to eat higher the hungrier they become. With these extra rules, let’s see how the AI behaves in the above scenario with 50 harvest tasks.

[gif: 20150215 FoodEverywhere]

The AI thought process will be: hungry -> look for food -> find no food -> harvest food -> harvest food -> … -> harvest food -> STARVING -> look for food -> find food -> eat food. This thought process requires more calculations, but it makes the tribe member appear smarter to the player.
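As a rough sketch, those two extra rules could look something like this (again, the names and numbers are just illustrative assumptions, not the game’s actual code):

```cpp
// Sketch of a variable desire to eat, following the two rules above.
// Values and names are illustrative assumptions, not the real game code.

// Hunger runs from 0.0 (completely full) to 1.0 (starving).
float eatDesire(bool foodAvailable, float hunger) {
    // Rule 1: nothing to eat means zero desire to eat, no matter how
    // hungry the person is, so outstanding harvest tasks run first.
    if (!foodAvailable) return 0.0f;

    // Rule 2: if food exists, the desire to eat scales with hunger.
    return hunger;
}

const float kHarvestDesire = 0.5f;  // fixed, hand-tuned priority

// The tribe member only stops harvesting to eat once food exists AND
// they are hungry enough for eating to outrank harvesting.
bool shouldEatFirst(bool foodAvailable, float hunger) {
    return eatDesire(foodAvailable, hunger) > kHarvestDesire;
}
```

With these rules, shouldEatFirst(false, 0.9f) is false (no food exists, so keep harvesting), while shouldEatFirst(true, 0.9f) is true (starving and food exists, so eat).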

Now add desires for drinking, sleeping, staying warm, fighting, fleeing, finding missing equipment, crafting and building into the mix. It’s pretty easy to end up with an AI that doesn’t always do the “logical” thing if the desires aren’t carefully balanced. Even worse is an AI whose desires fluctuate every few milliseconds, causing it to flip-flop between two different actions without making progress on either one. If you’ve ever seen an AI in a game move its units backward and forward over and over again, you’ll know what I mean.
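One common way to damp that kind of flip-flopping (not necessarily how Tribe Of Pok handles it) is to give the current action a small “stickiness” bonus, so a competing desire has to beat it by a clear margin before the AI switches. The names here are illustrative:

```cpp
// Sketch of hysteresis for desire-based action selection (illustrative only).
#include <map>
#include <string>

std::string pickAction(const std::map<std::string, float>& desires,
                       const std::string& currentAction,
                       float switchMargin = 0.1f) {
    std::string best = currentAction;
    // The current action gets a bonus, so tiny desire fluctuations
    // don't make the AI abandon it mid-task.
    float bestScore = desires.count(currentAction)
                          ? desires.at(currentAction) + switchMargin
                          : 0.0f;
    for (const auto& [action, score] : desires) {
        if (score > bestScore) {
            best = action;
            bestScore = score;
        }
    }
    return best;
}
```

The idea is simply that switching has a cost, so two desires hovering around the same value don’t trade places every frame.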

[gif: 20150123Bug]

I’m constantly testing and tweaking the human AI in Tribe Of Pok to balance desires (and stop them burning themselves, like in the above gif). It can be time-consuming, but I think it’s worth the effort. It’s great to see your Pokians on screen running around acting the way you would expect.