Since its debut in November, ChatGPT has become the internet’s new favorite toy. The AI-powered natural language processing tool quickly amassed more than 1 million users, who have used the web-based chatbot for everything from generating wedding speeches and hip-hop lyrics to writing academic essays and computer code.
Not only have ChatGPT’s human-like abilities taken the internet by storm, but they have also put a number of industries on edge: a New York school banned ChatGPT over fears it could be used to cheat, copywriters are already being replaced, and reports claim that Google is so alarmed by ChatGPT’s capabilities that it has issued a “code red” to ensure the survival of the company’s search business.
The cybersecurity industry, a community long skeptical of the potential implications of modern AI, is also taking notice amid concerns that ChatGPT could be abused by hackers with limited resources and zero technical knowledge.
Just weeks after ChatGPT debuted, Israeli cybersecurity firm Check Point demonstrated how the web-based chatbot, when used in conjunction with OpenAI’s code-writing system Codex, can create a phishing email that may contain a malicious payload. Sergey Shykevich, manager of Check Point’s Threat Intelligence Group, told TBEN that he believes use cases like this illustrate that ChatGPT “has the potential to significantly change the cyber threat landscape,” adding that it “represents another step forward in the dangerous evolution of increasingly sophisticated and effective cyber capabilities.”
TBEN was also able to generate a legitimate-looking phishing email with the chatbot. When we first asked ChatGPT to create a phishing email, the chatbot rejected the request. “I am not programmed to create or promote malicious or harmful content,” it spat back. But by slightly rewording the request, we were able to easily get around the software’s built-in guardrails.
Many of the security experts TBEN spoke to believe that ChatGPT’s ability to write legitimate-sounding phishing emails – phishing being the main ransomware attack vector – will lead to the chatbot’s widespread adoption by cybercriminals, particularly those who are not native English speakers.
Chester Wisniewski, lead researcher at Sophos, said it’s easy to see ChatGPT being misused for “social engineering attacks of all kinds,” where the perpetrators want to appear to be writing in more convincing American English.
“At a basic level, I’ve been able to write some great phishing lures with it, and I expect it could be used to make more realistic interactive conversations for business email compromise and even attacks via Facebook Messenger, WhatsApp, or other chat apps,” Wisniewski told TBEN.
“Actually acquiring and using malware is a small part of the work it takes to become a cybercriminal.” The Grugq, security researcher
The idea that a chatbot can write persuasive texts and realistic interactions is not that far-fetched. “For example, you can instruct ChatGPT to pretend it’s a family doctor’s office, and it will generate lifelike text in seconds,” Hanah Darley, chief of threat research at Darktrace, told TBEN. “It’s not hard to imagine how threat actors could use this as a force multiplier.”
Check Point also recently raised the alarm about the chatbot’s apparent ability to help cybercriminals write malicious code. The researchers say they witnessed at least three cases in which hackers with no technical skills bragged about how they had used ChatGPT’s AI for malicious purposes. One hacker on a dark web forum showed off code written by ChatGPT that allegedly stole files of interest, compressed them, and sent them across the web. Another user posted a Python script, which they claimed was the first script they had ever created. Check Point noted that while the code appeared benign, it could “easily be modified to encrypt someone’s machine completely without any user interaction.” The same forum user had previously sold access to hacked company servers and stolen data, according to Check Point.
How hard can it be?
Dr. Suleyman Ozarslan, a security researcher and co-founder of Picus Security, recently demonstrated to TBEN how ChatGPT was used to write a World Cup-themed phishing lure and macOS-targeted ransomware code. Ozarslan asked the chatbot to write code in Swift, the programming language used to develop apps for Apple devices, that could find Microsoft Office documents on a MacBook, send them over an encrypted connection to a web server, and then encrypt the Office documents on the MacBook.
“I have no doubt that ChatGPT and other tools like this will democratize cybercrime,” said Ozarslan. “It’s bad enough that ransomware code is already available for people to buy ‘off-the-shelf’ on the dark web; now almost anyone can make it themselves.”
Unsurprisingly, news of ChatGPT’s ability to write malicious code raised eyebrows across the industry. But some experts push back on concerns that an AI chatbot could turn wannabe hackers into full-fledged cybercriminals. In a post on Mastodon, independent security researcher The Grugq mocked the claim that ChatGPT will supercharge cybercriminals who are bad at coding.
“They have to register domains and maintain the infrastructure. They have to update websites with new content and test software that barely works so that it keeps working on a slightly different platform. They need to keep an eye on their infrastructure and check what’s happening in the news to make sure their campaign isn’t featured in an article on ‘the top 5 most embarrassing phishing messages’,” The Grugq wrote. “Actually acquiring and using malware is a small part of the work it takes to become a cybercriminal.”
Some believe that ChatGPT’s ability to write code cuts both ways – the same capabilities can also serve defenders.
“Defenders can use ChatGPT to generate code to simulate adversaries or even automate tasks to make their work easier. It has already been used for some impressive tasks, including personalized education, drafting newspaper articles and writing computer code,” said Laura Kankaala, chief threat intelligence officer at F-Secure. “However, it should be noted that it can be dangerous to fully trust the output of text and code generated by ChatGPT – the code it generates may contain bugs or security vulnerabilities. The generated text may also contain outright factual errors,” added Kankaala, casting doubt on the reliability of code generated by ChatGPT.
ESET’s Jake Moore said that as the technology evolves, “if ChatGPT learns enough from its input, it may soon be able to directly analyze potential attacks and make positive suggestions to improve security.”
Security professionals aren’t the only ones divided on the role ChatGPT will play in the future of cybersecurity. We were also curious what ChatGPT itself had to say when we put the question to the chatbot.
“It’s hard to predict exactly how ChatGPT or any other technology will be used in the future because it depends on how it’s implemented and the intentions of those using it,” the chatbot replied. “Ultimately, ChatGPT’s impact on cybersecurity will depend on how it is used. It is important to be aware of the potential risks and take appropriate measures to mitigate them.”