AI in Creative Projects: Still Mostly Useless

I’ve been writing a novel. While in general I’m not interested in keeping my projects very secret, this one feels like an idea I want to keep close to my chest for now. That doesn’t stop me from bringing up various uses of AI during the process.

First, not one word has been written by an AI. That’s the fun of this project. I want to write, because I like to write and it helps me get away from the stresses of my job. There wouldn’t be much point to this if I just let AI do the writing. I will, at some point, go through the text with an AI tool, probably ProWritingAid, to fix the inevitable grammar problems, but even then I won’t allow the AI to make any decisions, just suggestions.

Anyhow, the reason this post came about is this: I was playing around with Scrivener, an excellent, if a little buggy, writing tool. I’m at about 33 thousand words at the moment, roughly a third of the way in, and I became curious about how many pages that would be, so I printed it out as a PDF. Apparently 108, with all the front and back matter. But when I opened it, Acrobat did its usual thing of asking whether it should summarize the document for me. Usually this is useless, because I already know what the document is about; I opened it for a reason, after all. This time, however, I was intrigued to see how an AI would understand my text, so I let it.

And yes, it mostly got where I was going. Of course, the text is only a third done, so obviously it couldn’t have a holistic view of the whole thing, especially since I have a tendency to start at the beginning and then make a promise of where we are going by writing the end (which I will change later, but it’s a goal of sorts; it’s also a decidedly un-AI way of working, as an LLM is not able to write this way). It did identify certain themes that aren’t written out very blatantly (at least to my understanding), but it also missed quite a bit.

For example, there’s a lesbian couple in the beginning stages of their relationship. The AI just assumed they were friends. There are historical precedents for this, where intimacy between two people of the same sex is assumed to be completely platonic. However, there is a part of the book, titled “Garden party?”, in which the two of them talk and one offers to perform cunnilingus on the other. That should be pretty obvious, but the AI just doesn’t take it into account, possibly because it has been told not to. There’s even a chapter titled “The mandatory sex scene” featuring the two of them. It has no sex in it, just some cuddling, but that’s still something you would, in general, link to a relationship.

But okay, if the AI assumes they are friends, does that mean I should make the whole thing more obvious? There’s a reason I haven’t wanted to dwell on it (before the aforementioned cunnilingus discussion), so if I changed my approach, I would be reacting to a reader that is potentially purposefully obtuse. Yet that kind of feedback is immediate, so reacting to it can easily feel like an achievement, which makes it tempting. Should you? Probably not, because it is not a human. You are not writing for it. Or if you are, there’s something deeply wrong with you.

This doesn’t mean AIs have been completely useless. Just mostly. Of course, Google comes in quite often when I need to check something, but because it is now so bad, I often need to ask ChatGPT instead. Like: in which modern conflicts would you find female soldiers? That’s easy enough, but when you get into details, ChatGPT is very helpful. What kind of rifle would a YPJ (Kurdish Women’s Protection Units) sniper use? That became much easier to verify after ChatGPT gave me an answer. However, when I asked ChatGPT to use DALL-E to generate an image comparing the length of the rifle to the height of the soldier, for some reason it just couldn’t figure out that I wanted the rifle to be vertical, not horizontal. These tools have a tendency to mess up the simplest of things, so you still can’t trust them with anything.

I have checked some other things with ChatGPT, mostly because I have a very limited number of books on occultism and demonology in my collection. Even Project Gutenberg doesn’t have very many. Apparently LLMs are somewhat familiar with them, but not really, because ChatGPT seems to disagree with the books I have on my shelf. This is usually an indication that it only had a limited number of sources to learn from, so the statistics aren’t quite where they should be (and LLMs work specifically on statistics). Probably because of this, I couldn’t confirm everything regarding the YPJ either, so there’s a chance at least some of its claims were false.

I also tried to get it to explain to me what would happen if the speed of light were a little different, but it wasn’t giving me very interesting answers.

I guess that’s a puzzle for you: what kind of story requires information on the YPJ, demonology, and speculative theoretical physics, but also includes a discussion of cunnilingus? Not that all of these get equal focus.
