Faculty Face-Off is a series in which two faculty members of the Fox School share different points of view on a particular issue. This time, let’s talk ChatGPT with Debbi Casey, associate professor of management, and Alan Karr, research professor of statistics, operations and data science. They share their thoughts, fears and excitement—and whether this artificial intelligence (AI) tool may come back to haunt us.
By now, nearly everyone has heard of ChatGPT. For those unfamiliar, let’s allow it to introduce itself.
“I am ChatGPT, a large language model developed by OpenAI,” says the platform when prompted. “I am designed to understand and generate human-like text in response to natural language inputs. I have been trained on a large corpus of text data using deep learning techniques, allowing me to generate responses that are contextually relevant and grammatically correct.”
ChatGPT, which launched in November 2022, has skyrocketed in popularity. In February 2023, it was estimated to have reached a record-breaking 100 million monthly active users. For comparison, it took TikTok nine months to grow that quickly, according to Reuters—and Instagram took two and a half years.
“We can’t say that we understand the technological details of how it works, because the algorithms are not publicly available,” says Karr. “But to both locate and present information in polished English, it is an amazing achievement.”
This moment, however, has many potential outcomes.
AI-powered opportunities
Casey was originally very hesitant about ChatGPT, but a Fox colleague changed her tune. Now, she considers it to be an opportunity to democratize education and support.
“It’s like having a kick start,” says Casey of using the AI tool to move someone from a blank page to a working draft. “Once that pressure’s off (of a first draft), you can bring that education, those communication skills, those lessons, to hone your work.”
She also compares using ChatGPT to a CEO or leader having support staff, like executive assistants, speechwriters or policy advisers. For example, no one thinks that a U.S. president schedules their own meetings or writes their own State of the Union, she says, yet the audience is OK with that.
Now, ChatGPT, which is currently a free platform, can give anyone that extra help.
“In a lot of ways, this is democratizing opportunities for people from all different socio-economic backgrounds,” says Casey. “Normally, you don’t get staff until you are higher up in the power chain. But now maybe we all just got a staff.”
Karr also sees it as a way to free up expert time. For example, with some basic parameters, ChatGPT can generate the first draft of a will, which a legal professional could review and edit as needed.
“Then you’re not paying somebody $200 an hour to produce a draft,” says Karr. “If ChatGPT is regarded as a first step, it could make people’s lives simpler and enable people to concentrate where the human intervention is really needed.”
Fact or fiction
While Casey, an employment lawyer by trade, agrees with Karr that a ChatGPT-generated document is a great start, she cautions against leaning on it too heavily. If it were to write an arbitration agreement, for example, Casey says, “It would get most of it mostly right. But it would not comply with the laws.”
Karr concurs, saying that the accuracy of ChatGPT has been called into question by everyone from software engineers to the media.
“It can put these beautiful sentences together, but are the facts correct?” asks Karr. “My software engineer colleagues fear it for this exact reason. People may use it to produce things like critical safety applications, but without the technical background, they will be unable to judge whether it implements what they want it to.”
ChatGPT admits as much itself. A warning appears in every new chat: “May occasionally generate incorrect information.”
Privacy implications
It’s possible that future employers will assess job candidates on how well they can write AI prompts. It’s a skill set that hasn’t yet been tested on a large scale but is quickly gaining traction.
Millions of users are writing prompts—and ChatGPT is generating responses—with information that could seem harmless but have vast implications. For example, an employee may input information about a company’s trade secrets or have ChatGPT generate a piece of code that a company uses for a proprietary product. Who will see that information, and how will it affect the product in the future?
“For more than 20 years, I’ve worked with federal agencies to protect the confidentiality and privacy of their data,” says Karr. “I think ChatGPT could be a privacy invader far beyond what we’ve ever seen.”
There are personal implications, too. As a society, we have gotten comfortable sharing pieces of information—our birth dates, addresses, partners’ or children’s names, education background—that could be compiled into a full profile. With an AI-powered algorithm that is constantly receiving and learning from new information, there’s no telling where our data may end up.
Karr’s professional opinion: “The privacy implications of ChatGPT are truly frightening.”
Messing with time
Casey offers an interesting take on what ChatGPT could do to society.
“As somebody who loves labor history, to me, it looks like ChatGPT is to the knowledge economy what the industrial revolution was to the manufacturing economy.”
She explains that as household machines like dishwashers and washing machines arrived, the prevailing sentiment was that they would free up our time. Leisure would follow.
“Well, I don’t know about you, but that hasn’t happened for me,” says Casey.
Instead, machines sped up the pace.
“That human-machine interaction has messed with our human perceptions of time. Now, we all have time anxiety; we’re all trying to squish in another email. And I do see ChatGPT making this worse.”
Her advice? Take back our time, take back our humanity and put up some parameters. Countries like France have already taken steps in this direction, passing “right to disconnect” laws that limit work communication outside business hours. Yet legislating this problem at the government level is, as Karr bluntly puts it, a “loser.”
“We can’t keep up with the speed of technological advancement,” he says.
Casey concurs but offers an alternative: organizations should take the lead.
“Email, for example, we often think is innocuous. But we have all kinds of policies about using email in the workplace—prohibiting workplace harassment, restricting the sharing of confidential information, protecting private data and safety.”
Instead of relying on government intervention in this incredibly fast-paced technological climate, companies could set policies that encourage workers to schedule emails within working hours or spell out when using ChatGPT is acceptable and when it isn’t.
The conversation ends with a question that has no answer: If ChatGPT really will disrupt the knowledge economy, what comes next?
“I have no idea,” says Casey.
“I think that’s why human guidance is so critical. We can’t not talk about it with our students, our employees or our managers. We can’t ignore this.”
Read more from the 2023 Moments Issue of Fox Focus.