Xenonauts A.I. - Knowledge Systems pt. 1

A traditional approach in A.I. is to search for an optimal strategy for the agent to follow, using classic search techniques to explore the set of all possible actions (and their consequences) an agent can take. Alpha-Beta search and Monte-Carlo Tree Search are examples of such techniques.
However, Xenonauts is far too complex to analyze with classic search techniques. Calculation time would run out before the search could produce any decent moves; we cannot make the player wait forever! We therefore guide the A.I. towards what we hope are correct choices by augmenting it with domain knowledge.
In this post I’m primarily concerned with knowledge that will influence either the location the A.I. is moving to, or the path it has chosen to get there, i.e. knowledge used for pathfinding.
Domain Analysis
Traditionally, A* is the technique of choice for finding the best path from point A to B efficiently. But what do we define as the “best” path? Normally this would be pretty straightforward: in Xenonauts it is the cost, in time units, for the unit to move from A to B. This is the so-called cost function in A*.
But what if we were to augment this cost function with different values, each representing some aspect of knowledge we find important for the A.I. to know about? For example: we could say that the A.I. should give a high value to locations which give cover. Depending on the ratio between Time Unit cost and how important the A.I. considers Cover to be, the “best” path could prefer locations which provide cover.
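As a minimal sketch of this idea (the function name, weight, and clamping value are my own illustration, not from the game code), an augmented step cost could discount the time-unit cost of a tile by how much cover it offers:

```python
def step_cost(tu_cost, cover_value, cover_weight=2.0):
    """Base time-unit cost, discounted when the destination offers cover.

    cover_value is assumed to lie in [0, 1]. A positive cover_weight makes
    covered tiles comparatively cheaper, so the "best" path bends toward
    cover whenever the detour is small enough. The result is clamped to a
    small positive value so A* never sees zero or negative edge costs.
    """
    return max(0.5, tu_cost - cover_weight * cover_value)

# A tile costing 4 TUs with full cover (1.0) now beats an open tile
# costing 3 TUs, because 4 - 2.0 * 1.0 = 2.0 < 3.0:
assert step_cost(4, 1.0) < step_cost(3, 0.0)
```

Tuning the ratio between `cover_weight` and the raw TU cost is exactly the “how important is Cover” trade-off described above.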
Data Structure
Now, how do we set up a knowledge system that will be efficient enough to incorporate into A*? The world of our agent (the A.I.) is discrete in both time (turn-based) and space (a matrix landscape), and the space is manageable in size (50x50x8). I therefore chose a simple and direct approach: a value matrix in which each location on the map corresponds directly to a cell in the matrix. Each cell holds an array of values, one per aspect of the knowledge system. The benefit is that we can use these values in traditional pathfinding techniques like A* with little to no overhead.
Whenever a location is considered for expansion in A*, we simply look up the corresponding values in the matrix, which was filled in an earlier phase.
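A rough sketch of such a value matrix, assuming the 50x50x8 map from the post (the aspect names and their ordering are my own assumption, taken from the list later in this article):

```python
ASPECTS = ["distance", "cover", "visibility", "sight",
           "reaction", "threat", "activity", "enemy_prob"]

# One cell per map location (x, y, z), one float per knowledge aspect.
knowledge = [[[[0.0] * len(ASPECTS) for _z in range(8)]
              for _y in range(50)]
             for _x in range(50)]

def lookup(x, y, z, aspect):
    """Constant-time lookup used during A* node expansion."""
    return knowledge[x][y][z][ASPECTS.index(aspect)]

# Filling happens in an earlier phase, e.g. marking a tile as cover:
knowledge[10][12][0][ASPECTS.index("cover")] = 1.0
assert lookup(10, 12, 0, "cover") == 1.0
```

In a production engine this would more likely be a flat array indexed by `(x, y, z, aspect)`, but the access pattern during expansion is the same.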
How to integrate new knowledge
The game itself is turn-based, which allows us to set up an event-driven system that integrates new knowledge as it arrives, without having to worry too much about performance.
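One way such an event-driven update could look (entirely my own illustration, including the decay rate): events raised during a turn are queued and folded into the knowledge values between turns, so pathfinding itself never pays for the bookkeeping.

```python
from collections import defaultdict

class KnowledgeSystem:
    def __init__(self):
        self.activity = defaultdict(float)   # (x, y, z) -> recent activity
        self.pending = []

    def on_event(self, kind, pos):
        """Called by game logic when e.g. a gunshot or explosion happens."""
        self.pending.append((kind, pos))

    def end_of_turn(self):
        """Integrate queued events and decay stale activity."""
        for pos in list(self.activity):
            self.activity[pos] *= 0.5        # assumed decay rate
        for kind, pos in self.pending:
            self.activity[pos] += 1.0
        self.pending.clear()

ks = KnowledgeSystem()
ks.on_event("gunshot", (4, 7, 0))
ks.end_of_turn()
assert ks.activity[(4, 7, 0)] == 1.0
```

The decay step means an “Activity” value naturally fades over a few turns, which matches the “happened recently” wording below.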
Knowledge for Pathfinding
So, what does the A.I. need to know about then?
For this you need to evaluate the game design document and look at all the different behaviors required from the A.I.
For Xenonauts, this boils down to different roles/ranks per agent, specific species characteristics and good tactical behavior.
Essentially, the question I asked myself was: To exhibit the behavior outlined in the document, what would the A.I. need to take into account?
Using the datastructure outlined earlier, we assign a specific value to each location on the map for the following aspects:
- Distance to objective — In this instance this refers to a static object the A.I. is either defending or attacking.
- Cover — Whether a location provides cover, and if so, whether that cover is invalidated by any nearby enemies (i.e. whether the A.I. would be on the “right” side of the cover).
- Visibility — Whether the location is out in the open or obscured.
- Sight — Whether any of the enemy agents can see the location.
- Reaction — Whether any of the enemy agents can shoot this location with reaction fire.
- Threat — The attacking/defending strength of enemies/allies for a given location.
- Activity — Whether any activity (gunshots, impacts, explosions, etc.) happened recently at this location.
- Probability of Enemy — The probability that the location contains an enemy unit.
Keep in mind this list will probably change during the rest of this series as I discover what works and what doesn’t.
I then construct a list of all different types of actors and link the required behaviors to the aspects defined earlier. The following is an example for the civilian agent, where “Attracted” defines whether the A.I. should avoid or be attracted to positive values of the specified aspect (and by how much).
| Type | Behavior | Aspect | Attracted |
|---|---|---|---|
| Civilian | Avoid any contact with the enemy | Threat Value | – – – |
| | Avoid any active situation | Activity Value | – – |
| | Move through or towards good hiding spots | Visibility Value | + + |
| | Stay out of sight of enemies | Sight Value | – |
| | Move using cover when possible | Cover Value | + |
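The table above could be expressed in code as a per-aspect weight profile. In this sketch the signs follow the +/– column, but the magnitudes and the scoring function are my own guesses:

```python
# Civilian profile: sign follows the "Attracted" column of the table,
# magnitude follows the number of +/- symbols (assumed scale).
CIVILIAN_WEIGHTS = {
    "threat":     -3.0,  # avoid any contact with the enemy
    "activity":   -2.0,  # avoid any active situation
    "visibility": +2.0,  # move through/towards good hiding spots
    "sight":      -1.0,  # stay out of sight of enemies
    "cover":      +1.0,  # move using cover when possible
}

def location_score(aspect_values, weights):
    """Weighted sum over aspect values; higher is better.
    A* can subtract this from the raw time-unit cost of a tile."""
    return sum(weights.get(a, 0.0) * v for a, v in aspect_values.items())

open_tile   = {"threat": 0.8, "visibility": 0.0, "cover": 0.0}
hidden_tile = {"threat": 0.1, "visibility": 0.9, "cover": 0.5}
assert location_score(hidden_tile, CIVILIAN_WEIGHTS) > \
       location_score(open_tile, CIVILIAN_WEIGHTS)
```

A soldier or alien profile would reuse the same scoring function with a different weight dictionary, which keeps the actor/behavior coupling in data rather than code.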
Now we have defined different types of knowledge, and coupled the intended behavior to an appropriate aspect. In the next few blog posts I will show the implementation behind each aspect, finishing with an integration into A*.
Cool! And thank you so MUCH!
Now I see that the Xenonauts project has a smart programmer on board who knows what needs to be done!
Thanks! But don’t underestimate the rest of the team; I’ve seen the code and they’ve done excellent work so far! 🙂
Last time I worked on a “fighting” AI in 3D realtime, I had to cheat heavily to make it run… at all.
Like… an attacking ship would send a “memo” to the defender, informing it about its soon-to-be-threatened state. Since this information was handled centrally for a “formation”, I got something resembling “cooperation” with very little overhead.
Real assessment of every single bullet or missile in flight simply wasn’t an option.
Sounds backwards (well, it was), but if the end result is believable, no one complains. =)
Very true; when dealing with continuous systems (realtime/3D) you can try to discretize the space through the use of graphs, or time through the use of events on intervals (like you did). It’s very hard to dynamically analyze the full extent of the environment in realtime, so graphs really help to reduce complexity.
For things like this I try to use emergent behavior and sandboxes set up for testing A.I., like M.I.T.’s Battlecode.
(As emergent behavior is hard to get working well enough.)
Godspeed to you, Gijs-Jan! Stay with us until Xenonauts is released, and beyond!
Nice choice of tools for the job, I’m interested to see how it pans out with the current list of affecting features.
Tweaking the weights / features sounds like it may have to be done manually, and could get quite dull. Thanks for the update!
Hello,
very interesting to read, hope there will be more of these articles! Good luck with working on the game (and have fun too 🙂 ) …
Hi,
very interesting reading, indeed.
A technical observation. What heuristic do you plan to use? A generalization of Manhattan Distance for 3d? Euclidean distance? That heuristic, while admissible, will be very poorly informed, since it won’t be taking into account the cost structure you’re adding nor the obstacles in the way. Some of those obstacles are ‘dynamic’, in the sense that the NPCs and PCs can interact with them (say a door, which can be closed or opened). Besides that, NPCs and PCs can blast holes in walls, potentially changing the cost of cheapest paths between some pairs of cells in the map.
An interesting setting; I look forward to seeing what solution you come up with.
On the other hand, one variation of A* you may or may not be aware of is extremely easy to implement and would probably speed up the search considerably, in exchange for plans of lower quality: Weighted A*, especially its anytime variants, which use a very high W value to find a solution quickly and then, until the deadline is met, search for better solutions, using the cost of the previously found solution to prune the search space.
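For readers unfamiliar with the variant the commenter describes, here is a toy sketch of Weighted A*: the only change from plain A* is that the heuristic is inflated by a factor W > 1 (f = g + W·h), which makes the search greedier and faster at the price of optimality. Grid, costs, and heuristic below are purely illustrative:

```python
import heapq

def weighted_astar(start, goal, neighbors, cost, h, W=5.0):
    """Weighted A*: f = g + W * h. With W > 1 the returned path cost
    is at most W times optimal; an anytime variant would re-run with
    decreasing W (or prune by the best cost found) until a deadline."""
    frontier = [(W * h(start, goal), 0.0, start, [start])]
    best_g = {start: 0.0}
    while frontier:
        _, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt in neighbors(node):
            ng = g + cost(node, nxt)
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(frontier,
                               (ng + W * h(nxt, goal), ng, nxt, path + [nxt]))
    return None, float("inf")

# Toy 4-connected 10x10 grid with unit step costs:
def nbrs(p):
    x, y = p
    return [(x + dx, y + dy) for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
            if 0 <= x + dx < 10 and 0 <= y + dy < 10]

path, g = weighted_astar((0, 0), (3, 4), nbrs,
                         cost=lambda a, b: 1.0,
                         h=lambda a, b: abs(a[0] - b[0]) + abs(a[1] - b[1]))
assert path is not None and g >= 7  # true distance is 7
```

The W ≤ cost bound (found cost ≤ W × optimal) is what makes the anytime pruning the commenter mentions sound.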
Good luck!
Very interesting articles!
A suggestion: could it be an idea to have civilians move towards the helicopter and the Xenonauts’ landing site? After all, shouldn’t that be the safest place?
I lurk about on the Goldhawk forums, but somehow missed the post about this blog until today. I read the top post, and was happy to learn about the blog, but a little disappointed to read “it shouldn’t be technical.” Because, you see, I’m a university student currently studying AI, and have been wondering how the AI for Xenonauts was being implemented.
I’ve been wondering if what I’ve learned so far about AI has any application to what GJ has been doing, but, of course, what I’ve been learning is “technical.”
Well, Alpha-Beta pruning, Monte-Carlo, A*, heuristics, etc. — cool, what I’m learning has real-world uses 🙂
The main “problem” with search-based techniques (Monte-Carlo Tree Search, Mini-Max with its extensions: Alpha-Beta, Negamax, etc.) is twofold:
– They require game logic that is efficient and set up to be simulated (MCTS) or evaluated (MiniMax, etc.).
– The world you want to search through shouldn’t be too complex.
I first wanted to use MCTS for Xenonauts, as it is a favorite of mine (see my bachelor thesis). However, I arrived late to the project and was confronted with a game engine that was not optimized for repeated simulation. Besides this, the project itself is (in all probability) too complex to simulate or search through without severe abstraction. In any case, it was a path I wouldn’t have been able to explore without rewriting large parts of the code.
The technique I settled on is a compromise between “traditional” search algorithms and the abstract planning systems used in logic planning (HTN and STRIPS planners). I am currently working on pt. 2 (Planners) & 3 (Logic specific to Xenonauts) of this series, which will go into the details of the implementation and also explain the “technical” parts.
You can find a good article on the technique I’m currently implementing (or variation thereof) on:
http://www.cgf-ai.com/docs/plannedassault_ai_planner.pdf