First, it’s fine not to use them. I mean, I’ve done pretty well for decades without them, so how much are they going to help if I start using them now? Not much. Images would be helpful, because I can’t really draw, but I’ve always been very disappointed with the results from image generators. Video generators are worse.
The prompts used to make this video were:
- Two geeks playing Magic the Gathering
- Player on the right goes to the attack step.
- A judge comes over and stops the game.
- The judge adds counters to a card.
- The player on the left disputes the judge’s decision and wants the head judge.
So, below I’m going to be talking about where they can actually be helpful.
It should be noted that I teach this stuff for a living. Well, part of my job is teaching how to use AI in your work, whatever that work is. My audience is mostly either other teachers from various fields or IT students. For that reason I follow the field quite a bit, and I often cover it from a very nontechnical perspective.
My main message is this: If I could choose between getting rid of the current generative AI and keeping it, I would just snap my fingers and get rid of it without hesitation. But that is not the world we live in. We now have that technology and we have to live with it. In practice it means that most of us will have to work harder to take up the slack from the people who think they can do their job with AI. So, on top of doing your job, you are going to have to be constantly checking everything for signs of AI usage so that you can fix it in time. (Or you can be the problem.)
Personally, I do use AI a little bit in my work, but mostly as a sanity check. I plan out a curriculum or a course or a lesson and then I check with an AI what it would have done with the topics involved. That’s pretty much it. Is that worth all the energy usage of these systems? Is it worth pirating everything on the web? Is it worth all the future and past deepfakes? Is it really worth all the accidental and purposeful misinformation and disinformation? Not really. Also, if I just called up a colleague and asked them for that very same sanity check, I would probably get better results, although it would take more time. This also means that AI has a tendency to isolate us even more than we already are. If you look at this as a GM outsourcing some of the creative work, you would probably get better results, and a little of that meaningful human interaction, by talking to a peer rather than a machine.
Here’s a general rule you should always follow when using AI: if you know a topic well enough, you don’t need it, and if you know nothing about the topic, you can’t be critical of the responses, so you have to check everything against other sources, which is about the same amount of work as doing it yourself. The sweet spot for using AI is when you know something about the topic but aren’t a real expert.
In the case of RPGs, there is an additional problem: while many RPG-related books have been fed into the machine, there just aren’t enough sources to model them properly. The reason training generally requires billions of documents is that you need enough statistical data for the model to be able to do its job. For RPGs, there generally isn’t enough; often a single book covers everything. So, if you want to ask specific questions about the rules or the world of a specific RPG, you probably won’t get very good answers. It will get some things right, but it will also fail spectacularly in most cases.
This doesn’t mean that AIs are completely useless. Just mostly. I’ll go through various situations where I’ve found them useful. I will link to examples, as you can share discussions from ChatGPT.
One final note before we get to the actual meat here: before ChatGPT was released, I read an article by a government official here in Finland. I don’t remember his name, although I could probably find it. The message he wanted to convey was that we shouldn’t think of these technologies as human-like intelligence but as what he called “support intelligence”. They can’t think for themselves, but they can help us think. This is the approach I would always recommend.
Avoid Too Much Prompt Engineering
So, if you follow the field you have heard stories of people being hired as prompt engineers, so you might think that prompt writing is the key to using these models. That is not true, and it is often a bad way to use them. Prompt engineering is actually something much more complicated. It’s about priming a specific AI in such a way that it gives answers to prompts in a certain way. This is very technical and requires quite a bit of math, not just simple writing. Sure, the examples in the next sections are mostly just one prompt, but in practice the best way is to have a discussion with the model.
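Just to make the distinction concrete: the simple end of that priming is something you can do yourself by giving the model a standing instruction that shapes how it answers everything that follows. Here’s a rough sketch using the OpenAI Python library; the model name, the wording of the instruction, and the example prompt are all placeholders I made up for illustration, not anything I’d recommend as-is.

```python
# A minimal sketch of "priming" a model with a standing instruction,
# using the official openai Python library. The model name and the
# instruction text below are placeholders, not recommendations.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# The system message steers how every later answer is phrased.
priming = (
    "You are a brainstorming partner for a tabletop RPG game master. "
    "Offer options and questions, never final decisions."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": priming},
        {"role": "user", "content": "Give me three hooks for a heist scenario."},
    ],
)

print(response.choices[0].message.content)
```

The heavier end of prompt engineering, the kind people get hired for, goes well beyond a sketch like this, which is exactly why I wouldn’t worry too much about it as an ordinary user.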
Here’s an example that started as a joke but turned out to be quite interesting. I could have gone on longer, but I just wanted to show you how it could support your creativity. You shouldn’t let it make any decisions, just give you ideas. As you can see, it is actually able to come up with something based on a relatively obscure Swedish movie about people trying to break the banality of their existence by performing music in weird places and on improvised instruments, combined with a political satire about the complicated relationship between the UK and the US, featuring the best profanities on the market.
Now, how much of that is actually usable? I’m not sure. It would definitely need a lot of editing to get there, and you would probably have to cut out most of it, but there is the root of an idea hidden beneath. Was ChatGPT creative? No. It is just helping me be creative. Again, I would have come up with most of this myself (except maybe the names, and Monotopolis is actually from an obscure Disney comic), but this would have cut down on the time. How much? Not that much, actually. Now the work is just elsewhere. Before, I would have been coming up with ideas and recording them; now I have ideas and I’m cutting them down to something usable and adding my own on top (probably with continued discussion with ChatGPT). In movie terms, you could say this is moving from being the writer to being the director.
Which one do I prefer? The former. It’s just more fun to do it that way, but being creative is often mostly about figuring out new processes for being creative, and this could be one of them.
Handouts
Here’s an example of something I was planning to do in a game that never happened due to scheduling problems. The idea was that the characters had been contracted to do an investigation, but I felt I couldn’t get the tone of the letter they received quite right, so I let ChatGPT do it for me. I still wrote the original letter, because there was a lot of information I wanted to convey, but it just didn’t feel right.
Since ChatGPT is an LLM (large language model), this is the kind of thing it has been designed to do. It is not being creative itself (as it can’t be), but it can help me achieve what I want. It even tries to explain what it went for. At the same time, you can see from the first line of the answer that it didn’t quite understand what’s going on, so you still have to read through the text and fix it if necessary. I did find some of the flourishes it added quite nice, such as the mention of the Earth Gods right at the end, even though that doesn’t quite work, because the implication in the original text is that the Earth Gods are just one religion among others in this region or world.
And no, you can’t quite figure out what this was about or what the game was supposed to be from the context clues in the text, but you might get close. (Hint: It’s a mix of different things.)
If you feel you don’t need to get the specifics right, you can just ask ChatGPT to write you a handout. Here’s an example. In this case, note that its idea of a medieval fantasy is quite specific and wouldn’t work for whatever I would like to run, so I would have to either try again with more context or just rewrite what it gave me.
I do find it kind of funny that the AI felt it necessary to use a spreadsheet here…
Evil Overlords or Other NPCs
So, one thing I find problematic is that when I plan my bad guys, my paranoia kicks in and I try to think of everything. At some point, someone came up with the idea of Evil Overlords: a kind of third-party player you could talk to between sessions, whom your actual players might not even know about, and who would be responsible for certain characters of import to the overall world. So, why not let an LLM play one?
ChatGPT is actually pretty good at this, but you have to be careful. If you don’t give enough context, it will give you bad answers, but if you give too much context, it will lean on it heavily without adding anything. Also, it won’t understand your world no matter how well you try to explain it. Still, you can get some ideas out of it.
Here’s an example of going overboard with the context and here’s an example of a shortened version which gave very different results.
As you can see, in the former ChatGPT tries to work everything into the answer in some way, so it isn’t giving me anything interesting. In the latter it’s giving me something I can work with.
You can give ChatGPT documents as well, so maybe you could have something like a pen pal character played by it. Since you can share the discussions, you could let your players handle the interactions themselves, but I wouldn’t advise that, since ChatGPT can, once again, misunderstand things, and fixing them can be difficult. Also, if you give players this kind of access, remember that you can’t see the continuation of the discussion unless those players share their version with you every time they add something.
Closing Statement
Big corporations are trying to commercialize everything. Social media has commercialized our personal networks, dating apps have commercialized our dating lives, streaming services and their endless content has commercialized our attention. Now generative AI is being used to commercialize our creativity in a new way. Whether you want to be part of that is up to you. It is happening and in some ways has already happened (although often in a very bad and lazy way), but that doesn’t mean that you have to be part of it.
These “tools” are nice toys and should be seen as such. You can’t rely on them, but you can play around with them. Play is just an unserious way to learn (or should be), and that is one new way you should approach AI.