Dec 28, 2022
Anyone perusing this site has probably also read more than a few articles about ChatGPT, the latest “AI writer” that can turn user prompts into text that faithfully mimics human writing. I would venture to guess many readers here have even tried the tool for themselves (it’s free to experiment with if you haven’t). ChatGPT has dominated the conversation in tech over the last few weeks. It has been hard to escape, frankly.
Among the countless think pieces written about whether ChatGPT will spell the death of the college essay or usher in the end of creativity and critical thinking as we know them have been plenty of articles focused on cybersecurity specifically. Now that AI can instantaneously produce endless amounts of writing for almost any purpose, there are serious implications, both good and bad, for the future of digital defense.
Of course, the bad would seem to seriously outweigh the good (more on that soon). But amidst all the doom and gloom thrown at ChatGPT, it’s important to also acknowledge how this technology could be an asset to developers, security teams, or end users. Let’s look at it from three angles.
Cybersecurity suffers from a serious information deficiency. New attacks, techniques, and targets appear all the time, requiring the broader security community to stay constantly informed. Meanwhile, average users need better information about cyber safety best practices, especially considering that years of consistent training and warnings haven’t cured deep-seated problems like password recycling. In both of these cases and others, I can see ChatGPT or a similar tool being extremely helpful for quickly yet effectively encapsulating information.
Of course, a lack of documentation hasn’t exactly been cybersecurity’s biggest problem, and I question how much an AI writer can actually do to prevent or lessen attacks. Nonetheless, knowledge is power in cybersecurity, and the scale of the issue stands in the way, so I can see automated writers playing a role in a host of different security tools, defensive techniques, and training strategies. They can (and arguably must) be a force for good.
Almost the minute ChatGPT went live, the naysayers and doomsday prognosticators started to come out of the woodwork, which is neither surprising nor troubling. ChatGPT is just the latest example of how artificial intelligence will transform the world in ways that we can’t predict, will struggle to control, and in some cases would never want.
Cybersecurity is a prime example. ChatGPT can generate passable (if not perfect) code just as it can prose. This could be a boon for developers of all kinds – including those who develop malware and other attacks. What’s to stop a hacker from using ChatGPT to expedite development and iterate endlessly, flooding the landscape with new threats? Similarly, why write your own phishing emails when ChatGPT, trained on countless past phishing emails, can generate thousands of them in seconds?
Automated writers lower the barrier to entry for cybercrime while helping established criminals and gangs scale their efforts. More alarming, new technology always has unexpected, often unintended consequences, meaning that ChatGPT is sure to surprise us with how it gets weaponized – which is to say that the worst is yet to come.
To emphasize my previous point, let me outline a scenario I haven’t yet seen addressed in the ChatGPT conversation. Business email compromise (BEC) attacks are those in which hackers personalize phishing emails, texts, or other communications with personal information to make them seem to come from the recipient’s boss, a close colleague, or another trusted source. They also contain careful social engineering to inspire the recipient to act without considering risk or applying good judgment. They are basically phishing attacks carefully calibrated to succeed. Back in June, Wired wrote that they were “poised to eclipse ransomware” because they have proven so lucrative and also so resistant to security measures.
The saving grace was that BEC messages took time. Someone had to first research the targets and then turn that research into fine-tuned copy. As a result, these attacks were hard to scale and difficult to get just right (many of them still failed). There was a practical, if not definitive, upper limit.
From my perspective, ChatGPT obliterates that obstacle. Imagine if an attacker trained automation to comb LinkedIn for data about people’s professional relationships, then fed that data into ChatGPT to create convincing BEC emails customized for hundreds or thousands of different recipients. If both the research and the writing can be automated – not just at massive scale but with uncanny precision – hackers can grow BEC campaigns to any size.
And then what? Will every email seem suspect? The cloud of doubt hanging over the authenticity of any piece of information or string of communication (did this come from someone real?) may prove as disruptive as the attacks themselves, or more so. I’m just speculating. These doomsday scenarios, like so many others, may never materialize... or BEC attacks could prove to be the least of our concerns.
That puts it on us – probably most people reading this site – to somehow ensure the good outweighs the rest.