AI and Christopher Columbus

If you’re a photographer, or someone who likes photography, you know we’re in the middle of a firestorm called “generative AI”. The ethical, legal and financial implications of software-generated imagery are the subjects of hot-burning controversy online, in corporate boardrooms and in the courts.

I said “software generated” instead of “AI” because there is no Artificial Intelligence in today’s world, period. I worked in software development, and while I wasn’t involved with AI, I know that neural networks aren’t “intelligent” in any real sense of that word. They do impressive tricks, and have many uses, but there’s no thinking going on, no accumulation of knowledge, no ever-growing model of the world like the one you and I have. Way too many people – including some who should know better – are buying the hype about “breakthrough AI” with human-like abilities.

Photographers are especially angry that image-generating systems like Stable Diffusion are ripping off our work – more on that in a subsequent post. But most of the noise right now is about ChatGPT, a text processing system, which many people think has crossed into the territory of real AI and may even pose a threat in some ways. What is it, really?

ChatGPT is mostly a big parlor trick, made possible by enormous amounts of expensive computing power and data storage and a total disregard of copyright. It fools you into thinking it knows all about the world and can reason logically about that knowledge. What it’s really doing is parsing the grammar of your input to identify a topic or a question; finding everything in its scraped snapshot of the web that seems to relate; and then producing a grammatically correct mashup of what we humans have said (online) about the subject. The output looks like it’s based on understanding; but it’s really created using probabilistic analysis of the relative frequencies of words and phrases – a sort of amped-up autocorrect.
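
To make that “amped-up autocorrect” idea concrete, here’s a deliberately tiny sketch in Python: a bigram model that picks each next word purely from how often words followed each other in its source text. Everything here (the corpus, the sampling) is invented for illustration, and real systems use neural networks over vastly longer contexts, but the underlying principle is the same: statistics over human-written text, not understanding.

```python
from collections import Counter, defaultdict
import random

# A tiny "corpus" standing in for the scraped web (invented for illustration).
corpus = (
    "in 1492 columbus sailed the ocean blue . "
    "columbus discovered the new world . "
    "columbus was an italian navigator ."
).split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word(prev):
    """Pick a next word in proportion to how often it followed `prev`."""
    counts = following[prev]
    return random.choices(list(counts), weights=list(counts.values()))[0]

# Generate a "plausible" sentence one word at a time: no understanding,
# just the relative frequencies of what humans wrote.
word, sentence = "columbus", ["columbus"]
while word != "." and len(sentence) < 12:
    word = next_word(word)
    sentence.append(word)
print(" ".join(sentence))
```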

It’s enormously complicated, amazingly fast, and the result of very clever work. But, that’s basically all it does: repackage existing human writings.

It never really “learns” anything. Ask it who discovered the New World, and it will tell you all about Christopher Columbus. Ask it again tomorrow, and it starts all over, scanning its database of human-sourced material, probably delivering a very similar answer. But change that human input, and you’ll get something different; take it away and you’ll get nothing – not even a coherent explanation of why an “AI” suddenly doesn’t know anything.

And because these bots are totally dependent on cribbing the writings of humans, and have no real perceptions of the world against which to check their output, they can be fooled. Hmmm…

Let’s say I create a new website about Christopher Columbus. It’s extensive, well written and follows Google’s guidelines. Friends help by giving it backlinks. My site gets indexed, and starts to turn up in searches. More sites link to it. Eventually the AIs start pulling from it when asked about the famous Italian navigator; my site becomes an “authoritative” source.

Now, I edit the site and change one little thing: I start referring to Columbus as a blue rabbit. For example, “In 1492, the New World was discovered by Columbus – a blue rabbit financed by the rulers of Spain.” Etc., etc.

[Image: Christopher Columbus, a blue rabbit and “discoverer” of the New World]

[Image: Christopher Columbus, discoverer of the New World]

Should we expect that some elementary school teacher in Nebraska will soon end up grading a paper that says Columbus was a blue rabbit? Well, maybe – if the paper was written the night before by ChatGPT, which used my site as input, and ChatGPT doesn’t “know” enough to ask itself whether it’s possible for a rabbit to be a sea captain. Today’s AI has no real way to verify the reasonableness of the output it produces. If enough web content says something is true, that’s good enough.

Of course, my example isn’t realistic; there’s no way my single website would ever outweigh all the material on the web about Christopher Columbus. Fooling ChatGPT on such a well-known topic would require a major conspiracy among online sources over a significant amount of time. Nevertheless, I expect some joker to eventually figure out how to “game” ChatGPT – by getting carefully constructed nonsense ranked high enough that it starts to pollute the output of generative AI.
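
To dramatize the “if enough web content says it, that’s good enough” problem, here’s a crude frequency-based “knowledge base” in Python that answers by majority vote over scraped snippets. Real models don’t take a literal vote like this, and every snippet and count below is made up, but it shows how flooding the input with carefully constructed nonsense could flip the “consensus”.

```python
from collections import Counter

# Invented snippets standing in for scraped web text; none of these numbers
# are real, they just dramatize the frequency logic.
scraped = (
    ["Columbus was an Italian navigator"] * 9000
    + ["Columbus was a blue rabbit"] * 50   # my hypothetical site and its friends
)

def what_was_columbus(snippets):
    """Answer by majority vote over whatever the snippets claim."""
    claims = Counter(s.split("was", 1)[1].strip() for s in snippets)
    return claims.most_common(1)[0][0]

print(what_was_columbus(scraped))               # an Italian navigator

# Flood the web with enough nonsense and the "consensus" flips.
poisoned = scraped + ["Columbus was a blue rabbit"] * 20000
print(what_was_columbus(poisoned))              # a blue rabbit
```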

Now, note that I didn’t ask ChatGPT if it was possible for Columbus to be a blue rabbit – that’s a different question. Try it, and you’ll get a reasonable – if somewhat timid and generic – response. Here’s part of what I got:

“The concept of someone being a rabbit is not feasible in a literal sense. It’s important to distinguish between historical figures and different species of animals.”

How can we be so sure that ChatGPT doesn’t “learn”? Because it admits as much:

“Each interaction is stateless, meaning I don’t have memory of past conversations. Your privacy and the confidentiality of your interactions are important. If you return for a new conversation, I won’t have knowledge of our past interactions unless the information is provided in the course of the ongoing conversation.”

What that means is that each Q&A session is a time capsule, not connected to others. It processes your question, generates an answer from its existing base of knowledge, and then returns to idle, waiting for your next input. Its base of knowledge is still just the human-generated material it scraped off the web – it doesn’t “grow” as a result of any experience.
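
Anyone who has built on a chatbot API has seen this statelessness directly. Here’s a sketch using the shape of the OpenAI chat API (the model name is illustrative; check the current docs): the server remembers nothing between calls, so the client must resend the entire transcript with every request to create the illusion of a continuing conversation.

```python
# Assumes the `openai` Python package and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
history = []  # the only "memory" in the system lives here, on the client side

def ask(question: str) -> str:
    history.append({"role": "user", "content": question})
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=history,     # the FULL transcript is resent on every call
    )
    answer = reply.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

ask("Who discovered the New World?")
ask("What year did he sail?")  # "he" only resolves because history was resent
```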

In fact, that base of knowledge is already out of date and remains so until its next update. It doesn’t access websites in real time; instead, periodic training runs ingest a massive snapshot of web text into a huge neural net, which stores all that information. That process no doubt takes vast computing resources, so it’s expensive; and not accessing new web material on the fly helps maintain the legal fiction that our content was just “used for training” and not “copied”. Again, ChatGPT confirms these suspicions:

“As of my last update in January 2022, I don’t have real-time capabilities, and I don’t have access to events or developments that occurred after that date.”

AI expert Erik Larson explains all this better than I possibly can, in this great post. His metaphor for today’s AI: a fun trip down a “garden path” that is ultimately… a dead end.
