Internet scams can now become much more dangerous, thanks to fraudsters having unfettered access to ChatGPT, the AI-powered chatbot that never seems to leave the headlines.
This is according to a report published earlier this month by researchers at cybersecurity firm Norton. In it, the company describes three main ways threat actors can exploit ChatGPT to make Internet scams more effective: deepfake content generation, large-scale phishing, and faster malware creation.
Norton also argues that the ability to generate “high-quality or mass-scale disinformation” could help bot farms fuel division more efficiently, making it easy for threat actors to “create distrust and shape narratives in different languages.”
Combat misinformation
Fraudsters looking to peddle fake reviews could also have a field day with ChatGPT, Norton says, generating them en masse and in a variety of tones.
The already famous chatbot could also be used in social media “harassment campaigns” to silence or bully people, says Norton, adding that the consequences could be “horrifying.”
Hackers can also use ChatGPT in phishing campaigns. Many of these are run by attackers who are not native English speakers, and poor spelling and grammar often lets victims spot an obvious scam attempt. With ChatGPT, threat actors can generate highly persuasive, well-written emails at scale.
Finally, writing malware may no longer be the preserve of seasoned hackers. “With the right prompt, novice malware authors can describe what they want to do and get working code snippets,” the researchers said.
Consequently, we could witness an increase in both the volume and sophistication of malware, they say. And because ChatGPT can quickly and easily “translate” source code into less common programming languages, more malware could slip past antivirus solutions whose signature-based detection may not recognize the rewritten code.
As with any new tool before it, ChatGPT will most likely also be used by scammers and hackers to achieve their goals. It is up to users, as well as the wider cybersecurity community, to come up with answers to these new threats, the researchers conclude.