It was only a matter of time before hackers took advantage of ChatGPT’s immense popularity to push malware and steal sensitive personal data – and several security companies have now noticed this.
For the uninitiated, OpenAI’s ChatGPT is an AI-powered chatbot whose popularity has skyrocketed in recent months.
The novelty of its output, plus Microsoft’s eagerness to invest in the technology, made it the most sought-after technology online, with more than 100 million users in just two months (November 2022 to January 2023), according to BleepingComputer.
The demand inevitably led to monetization of the service. Those who want uninterrupted access to the platform can get it for $20 per month.
According to BleepingComputer, cybersecurity professionals have spotted several hacker campaigns promising free access. These are, of course, classic cases of “if it sounds too good to be true, it probably is”, and any such offer should be treated with suspicion.
In one such example, threat actors pushed Redline, a well-known infostealer capable of grabbing passwords and credit card information stored in web browsers, taking screenshots, exfiltrating files, and more.
To deliver the malware, they created a fake website promoting uninterrupted access to ChatGPT, and even created a Facebook page to promote the website. Other hackers tried to distribute the Aurora stealer.
Fake ChatGPT apps are also being distributed via Google Play and third-party Android app stores. It goes without saying that these apps provide no access to the chatbot; instead, they deliver various forms of malware. There are already dozens of them: Cyble researchers found more than 50 such apps.
For the avoidance of doubt, the only ways to access ChatGPT are through the official website – https://chat.openai.com/ – and OpenAI’s APIs. All other “alternatives” are not credible and may compromise your device’s security and privacy.
Via: BleepingComputer