Xenonauts A.I. - Knowledge Systems pt. 1
A traditional approach in A.I. is to search for an optimal strategy for the agent to follow, by searching through the set of all possible actions (and their consequences) the agent can take. Alpha-Beta Search and Monte-Carlo Tree Search are examples of such techniques.
However, Xenonauts is a game far too complex to analyze using classic search techniques. Calculation time would run out before the search could produce any decent moves, and we cannot make the player wait forever! We therefore guide the A.I. toward what we hope are correct choices by augmenting it with domain knowledge.
In this post I’m primarily concerned with knowledge that will influence either the location the A.I. is moving to, or the path it has chosen to get there, i.e. knowledge used for pathfinding.
Traditionally, A* is the technique of choice for finding the best path from point A to B in optimal time. But what do we define as the “best” path? Normally this would be pretty straightforward: in Xenonauts it would be the cost in Time Units for the unit to move from A to B. This is the so-called cost function in A*.
But what if we were to augment this cost function with different values, each representing some aspect of knowledge we find important for the A.I. to know about? For example: we could say that the A.I. should give a high value to locations which give cover. Depending on the ratio between Time Unit cost and how important the A.I. considers Cover to be, the “best” path could prefer locations which provide cover.
Now, how do we set up a knowledge system efficient enough to incorporate into A*? The world of our agent (A.I.) is discrete in both time (turn-based) and space (matrix landscape), and the space is pretty manageable in size (50x50x8). I therefore chose a simple and direct approach: a value matrix in which each location directly corresponds to a cell in the matrix. Each cell holds an array of values, one per aspect of the knowledge system. The benefit is that we can use these values in traditional pathfinding techniques like A* with little to no real overhead.
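As a rough sketch, such a value matrix might look like the following. The class name, the aspect list, and the accessor methods are my own illustrative choices, not the actual implementation:

```python
# A minimal sketch of the value matrix described above.
# Dimensions match the 50x50x8 battlescape grid; the aspect
# names are an example subset, not the full list.
WIDTH, DEPTH, HEIGHT = 50, 50, 8
ASPECTS = ["cover", "visibility", "threat"]

class KnowledgeMatrix:
    """One cell per map location; each cell stores one value per aspect."""

    def __init__(self):
        # cells[x][y][z] maps aspect name -> float value, all starting at 0
        self.cells = [[[dict.fromkeys(ASPECTS, 0.0)
                        for _ in range(HEIGHT)]
                       for _ in range(DEPTH)]
                      for _ in range(WIDTH)]

    def get(self, x, y, z, aspect):
        return self.cells[x][y][z][aspect]

    def set(self, x, y, z, aspect, value):
        self.cells[x][y][z][aspect] = value

km = KnowledgeMatrix()
km.set(10, 12, 0, "cover", 1.0)
```

Lookups are plain array indexing, which is why this structure adds essentially no overhead inside the A* inner loop.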
Whenever a location is considered for expansion in A*, we simply look up the corresponding values in the matrix, which was filled in an earlier phase.
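A sketch of what the augmented cost function could look like. The weight values and the clamping are assumptions of mine, meant only to show how knowledge values could bend the base Time Unit cost:

```python
# Hypothetical per-aspect weights: a positive weight makes locations with
# that aspect cheaper (attractive), a negative weight makes them costlier.
WEIGHTS = {"cover": 2.0, "threat": -3.0}

def move_cost(tu_cost, cell_values, weights=WEIGHTS):
    """Augmented A* edge cost: base Time Unit cost adjusted by knowledge.

    cell_values is the aspect->value dict looked up from the value matrix
    for the location being expanded.
    """
    adjustment = sum(w * cell_values.get(aspect, 0.0)
                     for aspect, w in weights.items())
    # Subtract the adjustment so attractive cells cost less; clamp to a
    # small positive epsilon so edge costs never become zero or negative.
    return max(0.1, tu_cost - adjustment)
```

With these example weights, a 4-TU step into cover costs 2.0, while the same step into a threatened location costs 7.0; tuning the ratio between the TU cost and the weights is exactly the knob described above.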
How to integrate new knowledge
The game itself is turn-based, which allows us to set up an event-driven system that integrates new knowledge as it is presented, without having to worry too much about performance.
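A minimal sketch of such an event-driven update, assuming a simple publish/subscribe mechanism. The event names, the flat `activity` map standing in for the value matrix, and the 3x3 update radius are all illustrative assumptions:

```python
from collections import defaultdict

class KnowledgeEvents:
    """Tiny publish/subscribe hub for game events."""

    def __init__(self):
        self.handlers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self.handlers[event_type].append(handler)

    def publish(self, event_type, **payload):
        for handler in self.handlers[event_type]:
            handler(**payload)

# Stand-in for one layer of the value matrix: (x, y) -> activity value.
activity = {}

def on_gunshot(x, y, **_):
    # Raise the activity value in a 3x3 area around the shot's location.
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            key = (x + dx, y + dy)
            activity[key] = activity.get(key, 0.0) + 1.0

events = KnowledgeEvents()
events.subscribe("gunshot", on_gunshot)
events.publish("gunshot", x=5, y=5)
```

Because turns are discrete, these handlers only run at well-defined moments, so even updates touching many cells stay off the per-frame hot path.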
Knowledge for Pathfinding
So, what does the A.I. need to know about then?
For this you need to evaluate the game design document and look at all the different behaviors required from the A.I.
For Xenonauts, this boils down to different roles/ranks per agent, specific species characteristics and good tactical behavior.
Essentially, the question I asked myself was: To exhibit the behavior outlined in the document, what would the A.I. need to take into account?
Using the data structure outlined earlier, we assign a specific value to each location on the map for the following aspects:
- Distance to objective
  In this instance this refers to a static object the A.I. is either defending or attacking.
- Cover
  Whether a location provides cover, and if so, whether this cover is not invalidated by any nearby enemies (i.e. whether the A.I. would be on the “right” side of the cover).
- Visibility
  Whether the location is out in the open, or obscured.
- Sight
  Whether any of the enemy agents can see the location.
- Reaction fire
  Whether any of the enemy agents can shoot this location with reaction fire.
- Threat
  The attacking/defending strength of enemies/allies for a given location.
- Activity
  Whether any activity (gunshots, impacts, explosions, etc.) happened recently on this location.
- Probability of Enemy
  The probability that the location contains an enemy unit.
Keep in mind this list will probably change during the rest of this series as I discover what works and what doesn’t.
I then construct a list of all the different types of actors and link the required behaviors to the aspects defined earlier. The following is an example for the civilian agent, where “Attracted” defines whether the A.I. should avoid or be attracted to positive values of the specified aspect (and by how much).
| Agent | Behavior | Aspect | Attracted |
| --- | --- | --- | --- |
| Civilian | Avoid any contact with the enemy. | Threat Value | – – – |
| | Avoid any active situation. | Activity Value | – – |
| | Move through or towards good hiding spots. | Visibility Value | + + |
| | Stay out of sight of enemies. | Sight Value | – |
| | Move using cover when possible. | Cover Value | + |
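The civilian row above can be read as a per-agent weight profile. Mapping the plus/minus marks to concrete numbers (e.g. “– – –” to -3) is my own assumption, as is the scoring function:

```python
# Hypothetical weight profile for the civilian agent, derived from the
# table above. The numeric mapping of the +/- marks is an assumption.
CIVILIAN_PROFILE = {
    "threat":     -3,  # avoid any contact with the enemy
    "activity":   -2,  # avoid any active situation
    "visibility": +2,  # move through or towards good hiding spots
    "sight":      -1,  # stay out of sight of enemies
    "cover":      +1,  # move using cover when possible
}

def location_score(values, profile=CIVILIAN_PROFILE):
    """Higher score = more desirable location for this agent type.

    values is the aspect->value dict looked up from the value matrix.
    """
    return sum(profile[aspect] * values.get(aspect, 0.0)
               for aspect in profile)
```

Each actor type would get its own profile dict, so the same value matrix drives very different behavior per role without changing the pathfinding code.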
Now we have defined different types of knowledge, and coupled the intended behavior to an appropriate aspect. In the next few blog posts I will show the implementation behind each aspect, finishing with an integration into A*.