The Real Problem with Generative A.I. — and It’s Not a Dystopian Nightmare
Ask most people to describe the downside of Generative A.I., and they’ll usually bring up one of two scenarios.
People who know nothing about Generative A.I. will talk about it as a foregone conclusion: the machines eventually rise up and take over. This is Skynet. It’s the Matrix. It’s any science fiction movie about robots. Except Short Circuit.
People who know a little something about Generative A.I. will talk about the threat of impersonation, deep fakes, and machine-driven fakery being passed off as authoritative knowledge.
While both of these potential crises are theoretically possible, and thus the basis of great science fiction, neither one is the most imminent threat.
The real, immediate problem with ChatGPT and Generative A.I. is that words mean things. Unless you’re using ChatGPT. Then words become useless. Even dangerous.
Was This Post Really Written By a Human?
Yes. And it was written by a human who has been dabbling in Natural Language Generation for more than a decade, including helping invent the first commercially available NLG engine and building one of the first NLG companies, Automated Insights, which we sold to private equity way back in the low-tech days of 2015.
The side of the engine I created was the technology and the algorithms that took data and turned it into useful words, which meant making those words sound human, which meant figuring out what it means to be human.
After spending way too much time gazing into that navel, I can assure you that the broadest issue with Generative A.I., and especially ChatGPT, is not that humans won’t be able to recognize it as machine-written and accept it as the “real thing.” It’s that humans will be able to recognize it as machine-written and accept it as the real thing.
In terms of adding knowledge to the Internet, we’re adding more hay to the haystack, not more needles.