ChatGPT: A Cautionary Tale (With Some Positive Takeaways)

I haven’t posted in a while. In truth, there hasn’t been a lot that’s piqued my interest, and there are now elaborate global mechanisms and a squadron of eager commentators prepped and ready to address the issues I used to point at on this humble blog. In November, I could’ve written something predictable about the impact of ChatGPT, but I felt like I’d already played that tune back in 2020 when I attempted to summarize the intelligent thoughts of some philosophers.

ChatGPT. GPT-3. Potato. Potato.

The most interesting aspects of this kind of AI are yet to come, I don’t doubt that. But I am here to share a cautionary tale that syncs nicely with my ramblings over the last 5 (5??) years. It’s a story about reliance and truth. About the quest for knowledge, and how it almost always involves some level of fumbling around in the dark, but never more so than now.

The Uncanny Valley and the Meaning of Irony

There has been a lot of discussion about how human is too human when it comes to robots, bots, and other types of disembodied AI voices. An interest in this topic led to a frustrating Google search which led me to…you guessed it…ChatGPT.

What did we ever do without it? I’m starting to forget.

Now, as reassuring as it is when a hypothesis is immediately confirmed, this is clearly a little too thin to write, say, a blog about.

I’m going to fess up: I am my own editor (shocking, I know). And, as such, my own editorial standards are generally conceived on the fly. Nevertheless, I do typically at least try to give references with a little more substance than this… but…

Note to the late adopters: ChatGPT won’t provide links.

It actually suggested a Google search, which I thought was pretty cute.

Anyway, unsatisfied (and somewhat miffed), I pushed it. I asked what these studies were called and who wrote them.

I want to read the studies.

Perfect! No?

Well, no. A quick (*ChatGPT recommended*) Google search yielded nothing. Not a sausage.*

Fear not. I have been around these parts long enough to vaguely know one of the authors. Indeed, David Gunkel was on the very first panel I ever moderated on AI ethics, in the broom cupboard of a hotel somewhere between SF and Silicon Valley many years ago. We’ve bumped into one another since, and I happen to think he’s pretty marvelous. I follow him on Twitter and he’s one of the non-bot humans kind enough to follow me back. So, I do the obvious.

Now reader, this is the point of the story where you know exactly where I’m going with this, but you want me to go there anyway. I’m here for you. We got this.

Published with the permission of the sender

IT WAS BS!

There is no paper called “The uncanny valley in voice-based human-computer interaction” (even though there definitely should be). The thing just made it up. Of course it did. It gave me what I wanted. Sort of.

So, in the spirit of starting off 2023 with skepticism and also hope, here are my two takeaways for you. If you’ve already read them somewhere else, then question everything, and take a philosophy class.

  1. We still need Google. At least for now. In fact, it’s probably more important than ever. It won’t deceive you like this. It won’t make you look like an ass at a party or in front of your boss. It’s not perfect, but it is at least a direct line to actual verifiable information. Mostly.
  2. This thing is incredible. It connected things, synaptically, and generated something new in the world of ideas. I mean, you could say: “no it didn’t, it regurgitated things but spliced them together in a way so you wouldn’t notice.” Yes. I’d agree. But isn’t that what an idea is? All ideas are highly derivative. The best ones are the most derivative of all.

Best of all, ChatGPT (or whatever we’ll call this kind of functionality in the future, hopefully not this because it takes far too long to say…) made me reach out to an actual human being. It made me question and it made me think. It made me blog for the first time in over a year.

I wonder if it will ultimately mean a sharpening of our brains as tools, rather than a deadening? Perhaps I’ll ask.

*Yes, I know it gave a caveat. I see it. But it amounts to “this information may or may not be worthless.”
