Artificial Intelligence for the GM

In fiction, AIs are mostly depicted as people with Asperger's. It's probably not that far off from what a real AI would be like. On the other hand, since people have widely varying ideas about what Asperger's actually is (and my perspective might not be that much better), we need to delve a little deeper.

We are living in an age where artificial intelligence is taking over. Self-driving cars are pretty much a reality already, robots are now quite autonomous, and Google Translate came up with a new internal language to help with translations, on its own, without being instructed to do so by its developers. And we're pretty much just dipping our toes into the vast sea of possibilities.

A Bit of History

Back in the fifties there was a workshop to decide the name for the burgeoning discipline now known as artificial intelligence. That title wasn't the only candidate. The forerunner was actually computational rationality. Doesn't quite spark the imagination in the same way, does it? This is of course speculation, but would there ever have been 2001: A Space Odyssey if we'd been left with that name? (Actually, probably, but you get the point.)

For the longest time the history of artificial intelligence was a series of innovations and inventions in the field of computing, but the reaction was pretty much "that's cool and all, but I don't really see it as intelligent". It's still going on: there are systems that would definitely have been considered "intelligent" in the past, but are now just… meh.

Also, movies have raised the bar immensely.

The BDI Model

Most AIs don't really work this way, but it's a nice model for theoretical discussion. It's quite simple, actually. The abbreviation stands for Belief-Desire-Intention.

The AI has desires, which are simply goals it wants to achieve. An automated car would like to see its cargo (people or otherwise) get to its destination. It has other goals, properly prioritized, such as economy, safety, timeliness, refueling and other maintenance, and so forth. The priorities also change over time. Sometimes refueling will be very high on the list (though it still shouldn't override safety), because you can't go on forever without doing it. Right after refueling, it drops to the bottom.

The AI has beliefs, which are basically its view of the world. The automated car would (hopefully) see the cars next to it and thus function under the belief that there's a car a meter to its left, moving at a similar speed. It also knows maps, speed limits, whether or not the people inside are correctly strapped in, and so forth. Some of these beliefs are fairly stable, others change constantly. Part of this might be exchanging information with other cars, and probably other sources as well.

Beliefs also include all the possible actions that can be taken. Braking, accelerating, turning the wheel, signalling, putting on lights, and so forth.

The third part (and I know the parts aren't in this order in the abbreviation) is intention: plans for achieving the desires based on the beliefs. Again, the automated car would choose a route, a speed, and pretty much everything else you have to decide on while driving. For you, all of this is quite instinctive, but if you were the one writing the code, you'd have to take quite a few things into account. Things you don't really think about at all.

The plan is a series of (often concurrent, in this case) actions, but there are many levels to it. The greater plan is to get to a certain place, and is pretty much a list of smaller objectives you need to reach first. Beneath it, there's a more immediate plan (or a bunch of plans) which lets you avoid obstacles, know when to accelerate, and so forth.

The plan is never set in stone. It needs to change constantly. Traffic is dynamic and you can’t predict everything when there are thousands of moving parts in your vicinity, and events far away (such as accidents, or heavy traffic) can affect your situation (when people take alternative routes and thus possibly overburden certain other parts of the network).
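
To make the loop concrete, here's a minimal BDI-style sketch in Python. All the belief keys, desires, and actions are invented for the automated-car example above; real BDI frameworks (and real driving stacks) are far more elaborate, so treat this as an illustration of the cycle, not an implementation.

```python
# A toy BDI cycle for the automated-car example. Every name in here
# (belief keys, desires, actions) is invented purely for illustration.

def sense(beliefs):
    # Update the car's view of the world each cycle.
    beliefs["fuel"] -= 1
    beliefs["at_destination"] = beliefs["position"] >= beliefs["destination"]
    return beliefs

def choose_desire(beliefs):
    # Re-prioritize goals constantly: safety first, refueling when low,
    # otherwise getting the passengers where they want to go.
    if beliefs["obstacle_ahead"]:
        return "avoid_collision"
    if beliefs["fuel"] < 10:
        return "refuel"
    return "reach_destination"

def plan_for(desire):
    # Turn the chosen desire into an intention: a short sequence of actions.
    return {
        "avoid_collision": ["brake", "signal", "change_lane"],
        "refuel": ["route_to_station", "drive", "fill_up"],
        "reach_destination": ["pick_route", "drive"],
    }[desire]

beliefs = {"position": 0, "destination": 5, "fuel": 50,
           "obstacle_ahead": False, "at_destination": False}

while True:
    beliefs = sense(beliefs)           # beliefs: what the world looks like now
    if beliefs["at_destination"]:
        break
    desire = choose_desire(beliefs)    # desires: which goal matters most right now
    intention = plan_for(desire)       # intention: the plan the agent commits to
    for action in intention:
        print("executing", action)
    beliefs["position"] += 1           # pretend the driving made progress
```

The important bit is that all three parts are re-evaluated every cycle: when the beliefs change (a car swerves, fuel runs low), the priorities and the plan change with them, which is exactly the constant replanning described above.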

What Can Change in the AI Itself?

In The Incredibles, the bad guy has built a combat robot that has the ability to adapt to the situation and thus it keeps on becoming more dangerous. It has gone through many iterations, as the bad guy has taught it by having it fight various superheroes in the past. (Most of whom are dead; the movie is surprisingly dark for a purported family movie.)

The Pinocchio story is another commonly seen theme. Think Data from Star Trek: The Next Generation, some of the Cylons in the new Battlestar Galactica, Vision from the Avengers, and so forth. These artificial beings want to be like humans for whatever reason and they have to learn how. (This is also subverted in Her, where Samantha – the operating system – finds it fairly easy to learn to relate, while Theodore, her human owner, finds it incredibly hard.)

Often the key to AIs in fiction (and these days in real life) is the ability to learn and adjust to the situation (or its beliefs). There was an interesting experiment with an AI playing Civilization. At first, it didn’t even know what it was supposed to be doing, but it did learn over time.

But the AIs are not only learning on their own. They are also constantly exchanging information. If one robot somewhere makes a mistake and notices it, it can learn from it and propagate the information to every other robot working in the same field. We humans learn a lot more slowly; we usually have to make those mistakes ourselves. Robots don't have that limitation. They are easily copied, and as long as they can maintain a connection, why would an AI even be forced to limit itself to only one body anyway?
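
As a toy illustration of that kind of shared learning (all the names here are invented, and real fleet-learning systems are far more sophisticated), imagine each robot contributing to a common table of known mistakes that every other robot checks before acting:

```python
# Toy fleet learning: one robot's discovered mistake becomes every robot's
# knowledge. The scenario and names are invented for illustration only.

shared_lessons = {}   # situation -> action that is now known to fail

class Robot:
    def __init__(self, name):
        self.name = name

    def act(self, situation, planned_action):
        # Check the fleet's shared experience before acting.
        if shared_lessons.get(situation) == planned_action:
            print(f"{self.name}: skipping {planned_action}, another robot already failed at it")
            return
        print(f"{self.name}: trying {planned_action}")
        if self.failed(situation, planned_action):
            # Propagate the lesson to the whole fleet at once.
            shared_lessons[situation] = planned_action

    def failed(self, situation, action):
        # Stand-in for actually detecting a failure.
        return situation == "wet_floor" and action == "drive_fast"

a, b = Robot("robot-A"), Robot("robot-B")
a.act("wet_floor", "drive_fast")   # robot-A makes the mistake and records it
b.act("wet_floor", "drive_fast")   # robot-B never has to repeat it
```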

Actually, in general it's easier to have a bunch of separate AIs that are largely self-organizing but coordinated by a central brain. There's a good example of this in nature. Octopuses (or octopi, if you prefer) have nine brains. Each tentacle is basically independent, but has certain goals given to it by the central brain. The central brain doesn't have to try to make calculations for each of those complicated masses of muscle and suction cups. Instead, they make those calculations themselves, all working towards the common goal set by the central brain.

And we know all this works very well. Octopuses can use tools, they are known to escape their habitats in aquariums, and they are quite intelligent in general. Basically, the octopus has a hierarchy where one brain does the strategic thinking while the others implement the ideas on their own, leaving the central brain free to make decisions more effectively.
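
A rough sketch of that octopus-style hierarchy, again in Python with made-up names: a central planner hands each "arm" a subgoal and lets the arm work out its own motions.

```python
# A toy "octopus" architecture: one central planner sets subgoals,
# semi-autonomous sub-agents figure out the details themselves.
# The classes, goals, and motions are invented purely for illustration.

class Arm:
    """A sub-agent: receives a subgoal and works out its own low-level steps."""
    def __init__(self, name):
        self.name = name

    def pursue(self, subgoal):
        # Local decision-making: the central brain never sees these details.
        steps = {"grip": ["reach", "curl", "squeeze"],
                 "anchor": ["reach", "suction"],
                 "explore": ["extend", "probe"]}.get(subgoal, ["wait"])
        return f"{self.name}: {subgoal} -> {', '.join(steps)}"

class CentralBrain:
    """The strategic layer: decides what each arm should try to achieve."""
    def __init__(self, arms):
        self.arms = arms

    def act(self, goal):
        # Decompose the high-level goal into one subgoal per arm.
        if goal == "open_jar":
            subgoals = ["anchor", "anchor", "grip", "grip",
                        "grip", "grip", "explore", "explore"]
        else:
            subgoals = ["explore"] * len(self.arms)
        return [arm.pursue(sub) for arm, sub in zip(self.arms, subgoals)]

octopus = CentralBrain([Arm(f"arm-{i}") for i in range(8)])
for report in octopus.act("open_jar"):
    print(report)
```

The point of the split is the same as with the octopus: the central layer only decides what each limb should achieve, never how, which keeps its own decision-making cheap.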

Adversarial Reasoning

There are robots that can consistently beat humans at rock-paper-scissors. Why? Well, they cheat. They actually look at your hand before they make their decision, but they do it so fast that you can't see it happening.

However, there are also AIs that can win at poker consistently. Of course not every hand, because there is still randomness involved, but as everyone who has played enough poker knows, it's not only about luck. It's about the ability to outsmart your opponent. And AIs are doing exactly that.

It's actually a complicated game-theoretic problem. They don't necessarily know what you are going to do, but they model the different options and try to find a plan that "covers most bases". They want to answer what the opponent is doing, but they don't want to leave themselves open to "next leveling", where someone anticipates their move and does whatever is best against that particular action.

Of course, they can probably always fall back on just being a fraction of a second faster than you and react rather than predict. Even experienced humans can do this instinctively.

Anyhow, returning to the BDI model: the AI can use its understanding of its adversary to build a similar model of how that adversary sees the world, and then form its own plans based on that model. With opposing forces, this can lead to a series of one-upmanships, where each party tries to outthink the other. That, in turn, leads to both of them taking steps to mitigate possible blowouts from angles they didn't predict, or letting their opponent take the risk of trying something the model deemed too dangerous and having it work.
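
The real poker AIs are generally built around regret minimization, which is one concrete way of finding a strategy that "covers most bases". Here's a toy regret-matching sketch for rock-paper-scissors (my own simplification, not any actual bot's code): against a biased opponent it drifts toward the counter-move, while in self-play it would drift toward the unexploitable one-third mix instead.

```python
# Toy regret matching for rock-paper-scissors: the agent tracks how much it
# regrets not having played each move, and mixes its strategy accordingly.
# This is a simplification for illustration, not any real bot's code.
import random

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def payoff(mine, theirs):
    # +1 for a win, -1 for a loss, 0 for a tie.
    if mine == theirs:
        return 0
    return 1 if BEATS[mine] == theirs else -1

def strategy(regret):
    # Mix moves in proportion to positive regret; play uniformly if there is none.
    positive = [max(r, 0.0) for r in regret]
    total = sum(positive)
    return [p / total for p in positive] if total > 0 else [1 / 3] * 3

regret = [0.0, 0.0, 0.0]
for _ in range(10000):
    my_move = random.choices(MOVES, weights=strategy(regret))[0]
    their_move = random.choices(MOVES, weights=[0.5, 0.3, 0.2])[0]  # biased, exploitable opponent
    actual = payoff(my_move, their_move)
    for i, alternative in enumerate(MOVES):
        # Regret: how much better would the alternative have done than what was played?
        regret[i] += payoff(alternative, their_move) - actual

print(dict(zip(MOVES, strategy(regret))))  # ends up leaning heavily toward "paper"
```

The "next leveling" worry shows up here too: a strategy that leans too hard on exploiting one read becomes exploitable itself, which is why these systems balance exploitation against staying close to the unexploitable mix.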

In order to do even somewhat reliable adversarial decision-making, a lot of information is needed, and thus the AI would emphasize gathering it in its actions.

All in All

So, based on this, your typical AI would try to play it safe and not let anyone gain an advantage. It has a lot of patience, so it can work through things by accumulating small advantages, unless it has time limitations.

On the other hand, it can also be very aggressive, because it can multiply itself easily. Sending a copy of itself into battle just to lose is not a problem if it can reach another goal that way. That goal could easily be something like learning how this particular opponent behaves, although doing so also gives away information about itself, which is quite valuable in this situation, since it of course assumes the adversary takes the same steps in its own decision-making.

It can also employ “servants” that share certain goals with it and rely on them to work certain problems out.

Of course, this is all quite simplified and actually based on something I did during my studies over a decade ago, so things have changed a lot. How wrong am I in these assumptions? I don't know. Still, this should be a pretty good basis for using AIs in your stories. Just remember to make them careful enough and let them anticipate a lot of things, but don't make your players feel like you are cheating. It's a careful balance.
