Photo by Daniil Perunov on Unsplash

Let’s talk about artificial intelligence (again)

I would love to never hear the word ‘ChatGPT’ ever again. Frankly, I am tired of it. Nothing would make me happier than to hang up the conversation surrounding artificial intelligence for good – placing it neatly on the shelf of failed attempts to make the human experience easier. The unfortunate reality is that, in some ways, it could make the human experience at least somewhat easier. But does easier mean better? 

One of the foundational pieces of this issue is the question of whether or not artificial intelligence has the power to truly create. I would argue that ‘creation’ is too ambiguous a term. So, I would start with this question instead: Does a computer have the power to make meaning? Meaning-making is a concept in psychology that describes how people arrange or make sense of their own lives, existence, and thoughts. 

Here’s the thing: A.I. can form words and phrases that sound meaningful. But it is not able to form ideas out of thin air. It has a source. That source is everything on the internet. All of it. Real people with real ideas and real words – all being fed into an A.I. generator. The work of people who are never compensated is plugged into these systems and reproduced as an entirely separate and far less eloquent essay. Now, what would we call that in the rest of the world? Plagiarism. Plain and simple theft. 

Asbury English professor and published author Erin Penner claims that “it is parasitic on the work we have put into communication: the work to clear up our ideas in trying to spell them out for one another, the work we do in being held accountable for those words, and the work of taking seriously what is written by other people.”

Beyond the issue of A.I. being theft, there is the glaring issue of accuracy and bias. If this A.I. is sourcing from all areas of the internet, trusting its truth is risky. We have established academic content on the internet, and we also have Buzzfeed. A.I. is only as good as the sources it leeches from. Historically, we know that the internet is not always the most accurate place. So why would we trust what these sources tell us? 

Let’s say, hypothetically, that A.I. were not plagiarism and were perfectly accurate all the time. There are still inherent issues with sacrificing the act of writing for the sake of ease. 

Writing is a long and demanding process. No one is arguing against that. There is appeal in the idea of something doing that process for you. The question we have to ask is whether the process of writing is worth it. In Penner’s words, artificial intelligence is “offering the mere likeness of language–and testing us to see whether we’ll settle for that.” 

When I say that writing is crucial, I am not saying that as a lofty-headed English major. Yes, the process of writing is essential to learning how to think and speak properly. Pulitzer Prize winner David McCullough says it this way: “Writing is thinking. To write well is to think clearly. That’s why it’s so hard.” We need to learn to write well in order to learn how to organize our thoughts and experiences. We need writing to make sense of life and the countless thoughts that bombard us every day. We need to learn to write in order to learn how to better make meaning. 

More than that, though, I am saying that writing is essential because I am a human being who is made for relationships with other human beings. I don’t want to read an email from a boss who couldn’t even take the time to write it in their own words. I don’t want to present a computer’s ideas to a professor in a way that gives no insight into my thoughts. I don’t want to read a book or watch a movie created without the story of real people.

People have frequently placed this issue in the box of the academic community, but as the writers’ and SAG-AFTRA strikes this summer have shown us, this issue is everywhere. As we have established, A.I. can only create as far as the sources that feed it will allow. So, any films made through A.I. are deliberate theft from previous writers and filmmakers. But if A.I. can only generate stories based on past work, the stories we tell will always be stagnant. Stories adapt with life. If we develop a reliance on artificially written stories, we can’t say anything new. We can’t better represent those who have previously been unrepresented – or worse, misrepresented. 

If we want to tell good stories, they have to come from us, not what Penner refers to as a “parasitic echo of what has already been written.” 

If you want a bit of a scare, read up on artificial general intelligence – A.I. that is autonomous and can make decisions. It sounds straight out of a sci-fi movie, but author and researcher Evgeny Morozov, in his New York Times article “The True Threat of Artificial Intelligence,” says that many believe that “the rapidly growing capabilities of OpenAI’s ChatGPT suggest [A.G.I.] emergence is near.” 

Morozov continues, making his most poignant claim of the article. He says that A.I. “will dull the pain of our thorniest problems without fixing them. Neoliberalism has a knack for mobilizing technology to make society’s miseries bearable.” 

This brings me to my primary question surrounding the use of A.I. in all realms. 

At what points do we sacrifice meaning for convenience? 

Relationally speaking, there is no question that A.I.-generated words take the heart out of our words to one another. Intellectually speaking, there is no question that A.I. will drastically hinder our ability to think and structure our ideas. These systems take the heart out of our stories. They deliberately steal from our storytellers and spew out a pathetic emulation of meaning. So, what are we as a society saying if we choose the convenience of this falsified meaning over the good and hard work of thinking and choosing the meaning we want to make ourselves? I have had this lingering thought in the back of my mind recently. 

Are we as a society barreling toward ease and comfort in a way that completely eliminates what it means to be human? When will we realize? And when is it going to be too late?