Ask an Expert: What Are the Ethical Implications of ChatGPT?
If science fiction can be believed, the robots are coming for all of us — but the latest advances in artificial intelligence (AI) technology are starting to make fiction seem like reality. With the recent headlines about rogue chatbots and universities using ChatGPT to generate condolence letters, Cal Poly News turned to English Professor Deb Donig, cofounder of the Ethical Technology Initiative at Cal Poly, to give her insight into the ethical implications of AI, specifically ChatGPT.
What is ChatGPT?
ChatGPT is what's called an LLM, or large language model. The GPT stands for "Generative Pre-trained Transformer," a kind of AI that can generate natural language responses to human input. We've actually been dealing with language models for a very long time. Look back on the early days of having Siri on your iPhone — you would have to pick up your iPhone and say something in a kind of caveman-sounding talk, and Siri would process it.
The other language model that many of us have been living with for a very long time is predictive text. If I'm typing an email in Gmail, Google will predict what it thinks I will say next. How does it do that? Gmail reads hundreds of millions of pieces of mail, so it can predict what word will likely follow. That kind of predictive text has been with us for a very long time. There are really serious questions about how these kinds of language models are changing how we think and how we write.
But there are some major differences between these other language models and ChatGPT that make me think ChatGPT represents a fundamental shift in our relationship to, and understanding of, this technology. We'd never mistake Siri or a predictive text suggestion for a human interaction. There's a danger here, because we haven't had the time to develop new digital literacy skills to catch up with the new technology.
This form of AI is evolving too quickly. Not having the digital literacy to understand whether we’re talking to machines or humans can be disastrous. The possibilities for exploitation are severe and large-scale, and I don’t think we’re prepared for them.
What does ChatGPT do?
ChatGPT was trained on open-source information on the web, which includes everything that is not protected by intellectual property restrictions. It doesn't include things that people have not put on the web, but OpenAI's version draws on an enormous portion of what is on the internet, and through that data aggregation, it creates these large language models.
When you type in a question, it accumulates all of the relevant information and aligns that information through the principle of what I want to call contiguity: the prediction of what the next likely thing is going to be.
It’s fairly easy for ChatGPT to consume all of that information, get asked a question, and write out predictable, contiguous thoughts in something that looks like human speech by being trained on, and being able to assimilate, lots of different, separate pieces of writing and language together into something that looks coherent.
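The "contiguity" Donig describes — predicting the next likely word from what came before — can be illustrated with a toy bigram model. This is a deliberate simplification (real LLMs use neural networks trained on vastly more text, and every name below is purely illustrative), but it captures the basic mechanic behind predictive text:

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count which word follows each word, and how often."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for prev, nxt in zip(words, words[1:]):
        following[prev][nxt] += 1
    return following

def predict_next(model, word):
    """Return the word most frequently seen after `word`, if any."""
    counts = model.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

# A tiny corpus of form-letter boilerplate.
model = train_bigram(
    "thank you for your time thank you for your patience "
    "thank you for your email and thank you for reading"
)
print(predict_next(model, "for"))  # "your" follows "for" three times out of four
```

Scaled up from a few sentences to the whole web, and from single-word counts to learned statistical patterns over long stretches of text, this same predict-the-next-thing principle is what makes the output look like coherent human speech.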
What can’t ChatGPT do?
What ChatGPT does not have access to is one important dimension of what we think of as human creativity, which is the ability to create something new.
My partner and I had a joke a couple of months ago, when ChatGPT came out. He would say, “Write Deb a poem in the style of a Grecian ode,” and it would give me a perfectly crafted poem in the style of a Grecian ode. I would say, “Write him a love song in the style of the Temptations,” and it would give him a perfectly crafted, Temptations-style song.
But the Temptations weren’t great because they copied what somebody else did. The Temptations were great because they added something new to what we thought of as music. They innovated. They made music that sounded different from what came before. And that’s what we think of as genius. That invention, that newness? That’s what ChatGPT cannot do.
Do we need to worry about AI applications like this taking our jobs?
These large language models and everything that works like a large language model, including these AI art generators, take things from the past and remix them.
It might be possible at a future time for AI to develop something new, but it doesn’t seem likely to me. I think there is a real concern over ChatGPT taking human jobs, and I don’t want to dismiss that concern, but there are certain types of writing that are repetitive enough that asking human beings to do it when there is a generator that can do it doesn’t seem worthwhile.
Some of that’s already with us, by the way — just look at the emails from politicians around election season. All of us have figured out that these types of emails are, on some level, automated. But that wasn’t always the case, and when these types of emails started coming out, people were genuinely confused. We’re going to build up some digital literacy.
What are some other potential implications for these types of AI content generators?
One of the first things is that we will start to devalue the kinds of interactions we have with writing.
What happens if you just assume that most of what you see on social media is written by a bot? None of us are going to be all that interested in responding to it. A large amount of our engagement and interaction online right now takes place through a text medium. Will that change? We don't know.
Additionally, I'm already seeing people who write marketing material being let go from their jobs — and then being rehired, at a much lower rate, by the very people who let them go, to edit ChatGPT-generated writing that was trained on the work they had already produced.
Right now, under current intellectual property law, this is perfectly legitimate. But intellectual property law was developed at a time when these kinds of automations were not possible. We might be wise to rethink intellectual property law in this particular moment.
Finally, there is the question of the devaluing of writing overall. When ChatGPT can generate writing, how many people will want to get a degree in literature or in journalism? How many universities are going to continue to employ professors of English literature or of writing, or to have writing requirements, when there is no longer a market value to the most basic forms of writing?
I think that there will always be a high value placed on good, important, innovative writing — or at least I hope so. But I wonder about the devaluing of writing, broadly speaking, when there's no market value for the basic forms of writing. Artistic creators of writing have to start somewhere.
Are there reasons to be optimistic?
We might remember the infamous bereavement letter sent out to the Vanderbilt University community in the wake of the shooting at Michigan State University. That letter was generated by AI. People were upset, because we tend to believe that bereavement letters should not be form letters — they should reflect and document genuine human feelings like sorrow, compassion, and grief. But I bet that many of us have a hard time writing letters that express difficult emotions. What if we thought about a ChatGPT-generated response not as a substitute for human emotions, but rather as a springboard that might allow us to move forward with complex emotions?
I don't think that it has to be either-or: either we generate everything ourselves from scratch, or all of our writing is devalued. There can be a "Yes, and..." here, as they say in improv. What if we could take this tool and think about the possibilities?
I don't want to ignore the negative consequences — we should grapple with those — but I also want to explore the possibilities. What if I am writing a legal analysis, as I currently am, and I could just ask ChatGPT to go through each of the individual legal arguments I’m looking at?
If I had that basic literature review, which actually doesn't require intellectual labor in terms of abstract thinking or creating new ideas, I'd have more time potentially to expand my reach and capacity.
ChatGPT and large language models cannot ideate. They take existing thought and they remix that thought. If I define writing as the production of ideas, not just as a report or record of ideas, then one of the values of writing is in actually producing ideas.
I am not so arrogant or pessimistic as to think that we have already thought of all of the great ideas out there. I think that there are new ideas out there waiting for people to invent, and we won't get there if we just remix the past.
We have to produce new thoughts and transmit them somehow. If we think of writing in those terms, then writing is as important as it's always been to our culture.
Lead image: This image was generated by the AI platform Dall-E with the prompt, “Photo of a robot sitting in a coffee shop writing on a laptop”.