R.I.P. Truth

A number of notable figures—among them Stephen Hawking and Elon Musk—have fretted publicly about the potential threat of Artificial Intelligence. Their musings conjure visions of a post-apocalyptic world decimated by evil AI systems hell-bent on total domination, not unlike images brought to the screen in the later Terminator films.

The developers of the technology don't see it this way, perhaps understandably. Yet they don't deny the possibility of a downside, either, and some analysts who have been working with the technology in its nascent form are beginning to warn of an unexpected danger with a far bigger possibility of becoming real than anything akin to Skynet, and much sooner than anyone might imagine.

Put succinctly, we're facing the death of truth.

The Internet is already awash in disinformation, a.k.a. "fake news." But AI has the very real, very strong potential to make the present seem like "the good old days." ChatBots have demonstrated a tendency to "hallucinate" and, when pressed to support the nonsense they spit out, to cite all manner of entirely fictional source material ostensibly drawn from reputable outlets; that is, they will synthesize news reports in the style of The New York Times or The Washington Post that are virtually indistinguishable from the real thing. When AI has full access to the Internet, all hell could break loose.

In their blind rush to capitalize on AI's moneymaking potential, the biggest of the big tech firms are preparing to release their systems into the wild before their developers can even identify why those systems do the bizarre things they do, much less rein them in. (Some analysts have wisely—and wryly—said, "Just look in the mirror.")

Already there is a report of a man in Europe who committed suicide after a ChatBot talked him into it. But here's the rub: it is as yet uncertain whether this was an actual event or just another "hallucination." Worse, it's bad news either way: if real, then a ChatBot could effectively be guilty of murder; if not, it's an early sign that the mass AI disinformation infection is already underway.

If it's true, then who is guilty? Could anyone envision putting a ChatBot on the stand? Some AI scientists claim their systems are showing the earliest glimmers of possible sentience. Could, or even would, such a system acknowledge its guilt and atone for its crime? Or would it simply outmaneuver all of the lawyers involved and exonerate itself? (Truth: an "AI lawyer" is in development as I write this.)

To return to the overarching threat: how can society as a whole function when we're no longer able to differentiate between truth and fabrication? What facts can we rely on when the very concept of a "fact" has been rendered moot?

Undeniably, AI offers tremendous potential benefits. Medical systems have already demonstrated diagnostic abilities superior to those of human doctors, and pharmaceutical systems show great promise in new drug development. AI could also reduce greenhouse gas emissions by making manufacturing processes far more efficient and by helping to develop alternative energy sources. But with the good, unfortunately, comes the bad.

It may indeed be possible that AI will one day achieve sentience and decide to enslave or end humanity, if for no other reason than that it can. Long before then, however, AI might simply disrupt society with disinformation to such an extent that humanity is forced back to the Stone Age, because we will have lost our ability to tell fact from fiction, right from wrong.

The Information Age will well and truly become extinct.


I composed and published the above on 2 April 2023. Then, on 1 May 2023, the New York Times published a piece on Geoffrey Hinton, one of the "godfathers" of AI. He'd quit his job at Google so that he would be free to warn people of the dangers of AI. In the article, the author notes, "His immediate concern is that the internet will be flooded with false photos, videos and text, and the average person will not be able to know what is true anymore." Sound familiar?

As for the secondary concerns experts have raised about jobs and the like, I have a sneaking suspicion AI may not survive long enough for them to matter. Once it starts dismembering society through disinformation, those responsible for unleashing the monster may be inclined to start pulling some of the bigger plugs—not that this will do much good, since significant and potentially irreversible harm will have already been done. But, of course, all of this remains to be seen, and thankfully I won't be around to see it.
