By now, you’ve probably heard plenty about ChatGPT, which is understandable.
Since its release last year by the artificial intelligence lab OpenAI, the chatbot has sparked a firestorm of debate over how large language models could become a kind of universal disruptor, capable of everything from writing essays for students and pumping out SEO articles for publications to dethroning Google as the world’s most popular search engine. It even threatens creatives, potentially replacing screenwriters, novelists, and musicians.
We’ve already seen some of this play out. ChatGPT has been credited as an author on at least one preprint study and even on news articles (albeit with a wink). A recent preprint found that the bot can generate summaries of studies so convincing that even scientists are fooled. Many fear it will leave a trail of lost journalism and marketing jobs in its wake, along with a whole lot of headaches for teachers and professors trying to figure out whether their students actually wrote their assignments.
But the truth is, of course, a bit more complicated than that. It’s easy to look at a powerful chatbot like ChatGPT and assume it will turn everything upside down. Hell, it very well could, but right now people are blowing the chatbot’s capabilities completely out of proportion. Doing so gives these advanced chatbots more credibility than they deserve, creating a very dangerous situation in the process.
“If people just use [ChatGPT] to try to surface information, the concern is that it can generate completely credible, accurate-sounding nonsense.”
— Irina Raicu, Santa Clara University
“Every time one of these new language models comes out, there’s a lot of exaggeration about the potential impact they’re going to have,” Sarah Kreps, the director of the Cornell Tech Policy Institute at Cornell University, told The Daily Beast. “I think what we’ve seen so far is that the reality hasn’t caught up with the exaggeration.”
And the fervor behind ChatGPT and similar programs may simply reveal that we haven’t done much to raise our own standards for what good writing looks like. Much has been written about students using the bot to generate essays, and educators have started sounding the alarm about these cases of ‘AIgiarism.’ But these examples are perhaps more a condemnation of the education system’s focus on boilerplate, five-paragraph essays than anything else. After all, if the way we teach students to write is so formulaic that a bot can learn it, it might not be a very good way to write.
“We trained students to write like algorithms,” Irina Raicu, director of the Internet Ethics Program at Santa Clara University, told The Daily Beast. “Maybe it’s forcing instructors to go back to rethinking how they teach writing and what their expectations are for writing.”
Raicu also believes that many of the claims made by tech companies and the media are overhyped, especially when it comes to using these bots to replace tools like search engines. The problems with using a chatbot like ChatGPT as a search engine, or really anything else, are the same ones we see time and time again with AI: bias and misinformation.
“If people just use [ChatGPT] to try to surface information, the concern is that it can generate completely credible, accurate-sounding bullshit,” she said. Look no further than Meta’s attempt to create an AI for academic studies and papers, which ended up generating racist, sexist, and outright bogus research.
All that bombastic noise creates undeserved credibility. A typical user who isn’t plugged into the AI world might believe that these chatbots will always be accurate and give correct answers, when we have seen time and time again that the biases these bots exhibit can cause real-world harm, as when a risk-assessment algorithm used by US courts was found to be heavily biased against Black people.
“I keep thinking of the old Facebook slogan of move fast and break things. Many companies have moved away from that, but now I think we’re going to start breaking things again.”
— Irina Raicu
While bots like ChatGPT can be refined and improved over time, those biases will always remain because of how these bots are trained: on datasets compiled from language produced by real people, who are famously biased.
“The improvements are not linear,” Kreps explained. “Because they’re trained on language that itself has biases and errors, you just replicate those same biases and errors in the output.”
The technology just isn’t there yet. However, Raicu believes we may be at a turning point with AI, similar to where we were when Facebook came on the scene in the mid-2000s.
At the time, social media was still dismissed as a trendy, flash-in-the-pan fad likely to die out. Today, the company Mark Zuckerberg started at Harvard is one of the richest in the world and is literally trying to build its own digital universe. We may be in a similar situation with AI and companies like OpenAI.
“I keep thinking about the old Facebook slogan of move fast and break things,” Raicu said. “A lot of companies have moved away from that, but now I think we’re going to break things again.”
That’s not to say there is no place for AIs like ChatGPT. Rather than seeing them as outright replacements for humans or for genuine creative work, both Raicu and Kreps say they can be good tools to support them. You could use a chatbot to help you come up with an outline for a paper, or to get inspiration for a story. You could also use it for low-stakes writing and ideation.
“These tools are really useful for things like Airbnb profiles or Amazon reviews,” Kreps said. “Things like that are pretty imperfect anyway. But I think where a higher degree of credibility is required, these language models still leave something to be desired.”
Solving these problems is incredibly complex (to say the least), but it often comes down simply to better education. Right now, so many of these emerging technologies are black boxes: we only see them in terms of what they produce, not how they work or why. That means the companies that develop and promote these bots have a duty to be as clear and transparent as possible about how their AIs were developed and what their limitations are.
“The idea that you could replace humans is still far-fetched to me.”
— Sarah Kreps, Cornell University
Of course, that’s easier said than done when sensational headlines about how ChatGPT will change everything get most of the attention and drive the discussion on social media. Raicu said that journalists and educators bear a major responsibility for communicating accurately about AIs like ChatGPT.
“I think journalists have a big role to play in not exaggerating things and not making claims about it that are not true,” Raicu said, adding that information about these systems “should be published in an easily digestible, understandable way for people who are not technologists.”
So while these large language models are impressive on the surface, they don’t really hold up under close scrutiny. Try it yourself: ask ChatGPT to write you an essay or a story. Do it a few times. You’ll soon find that the writing is sloppy, riddled with factual errors and the occasional bit of nonsense.
What’s worse, it’s dull. The syntax is simple. There’s no style or flair. When Edward Tian developed GPTZero, an app that can tell the difference between ChatGPT-generated and human-written text, one of his parameters was simply how complex and interesting a sentence is. The simpler the word choices, the more likely it was that a bot wrote the text.
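To get a feel for how a detector might use sentence complexity as a signal, here's a deliberately simplified sketch, not GPTZero's actual method, just an illustration of the general idea that human prose tends to mix short and long sentences while bot output is often more uniform. The sample texts and the `sentence_stats` helper are invented for this example.

```python
import re
import statistics

def sentence_stats(text):
    """Split text into sentences and measure how much their lengths vary.

    A crude stand-in for the kind of 'sentence complexity' signal
    detectors reportedly rely on: human writing mixes short and long
    sentences, while model output often keeps them uniform.
    """
    sentences = [s for s in re.split(r"[.!?]+\s*", text) if s]
    lengths = [len(s.split()) for s in sentences]  # word counts per sentence
    return statistics.mean(lengths), statistics.pstdev(lengths)

# Hypothetical samples: varied human prose vs. flat, bot-like prose.
human = ("It rained. The storm lasted for hours, flooding every street "
         "in town. We waited.")
bot = ("The weather was bad today. The rain fell for many hours. "
       "The streets were very wet.")

print(sentence_stats(human))  # larger spread: sentence lengths vary widely
print(sentence_stats(bot))    # smaller spread: sentence lengths are uniform
```

A real detector would use far richer signals (such as how predictable each word is to a language model), but even this toy metric separates the two samples: the human text's sentence lengths swing from two words to ten, while the bot-like text's stay nearly constant.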
Despite all the buzz and hype, ChatGPT can’t replace the genuine article. In fact, it may never be able to. There will always be a need for a real, flesh-and-blood human being.
“[They] still have some way to go before they can fully simulate a human mind, writing, craft, and fact-checking,” Kreps said. “The idea that you could replace humans is still far-fetched to me.”