ChatGPT and Singularity

This is very, very random even for this blog, but here we go. Here’s a hypothesis: when we reach the singularity in AI, it won’t be the AI becoming smarter and smarter; it will be us becoming stupider and stupider until the AI, which can mimic a mediocre human, just happens to be smarter. I’m not saying Idiocracy. This is something completely different.

And yes, this is pure speculation.

First, if you are unaware what a singularity is in this context, it’s the term used for a situation where an AI is smart enough that it surpasses human understanding. The name comes from physics, but it’s a misnomer. In physics, a singularity is the (theoretical) point in space at the center of a black hole, where gravity is so strong that space and time cease to exist. What these people actually mean is the event horizon, which is the boundary beyond which light can’t escape, so you can’t see what’s on the other side.

Anyhow, as a teacher in higher education, the repercussions of these AI applications are going to affect my work. And yes, these things can write essays, but it just so happens that those essays are not very good. The grammar and structure are impeccable, but they also tend to produce a lot of sentences that simply don’t say anything.

At the same time, the information it uses is based on a consensus from the Internet. However, the consensus of the Internet is not what you want. Would you rather have the opinion of one doctor regarding your health, or the consensus of the Internet? Would you rather cross a bridge or use an elevator where the calculations were done by an engineer, or one where they came from the consensus of the Internet? You see where I’m going? On top of that, apparently, these AIs are very bad at math. Sure, they can calculate pretty much anything if you ask them directly, but when an expert needs math, they have to understand the situation to figure out what to take into account, and for some reason the AIs suck at this. They just throw numbers together. Sure, they get the answer right, but the question is wrong.

So, relying on these AIs will make us more stupid. And when we become more stupid, the consensus the AIs rely on will become less useful. If we rely on AIs to create anything new, they really can’t, as they will only regurgitate that consensus again, so there will be nothing new there.

Again, this is pure speculation, but I do feel we should be careful with this shit. It is not really serving us as humanity.
