YouTube has seen a recent surge in videos whose descriptions contain malicious links to infostealers, with many using AI-generated personas to fool viewers into trusting them.
Cyber intelligence firm CloudSEK reports that since November 2022 there has been a massive 200-300% increase in such content uploaded to the video-hosting platform, tricking viewers into installing known infostealer malware like Vidar, RedLine, and Raccoon.
The videos pretend to be tutorials showing how to download free pirated copies of popular paid design software, such as Adobe Photoshop, Premiere Pro, Autodesk 3ds Max, and AutoCAD.
Appearing trustworthy
The how-to videos have become increasingly sophisticated, evolving from simple screen captures and audio-only walkthroughs to AI-generated presenters who guide the viewer through the process, all in an effort to appear more trustworthy.
CloudSEK notes that AI-generated videos in general are on the rise, used for legitimate educational, recruiting and promotional purposes, but now also for nefarious purposes.
Infostealers, as the name suggests, penetrate a user’s system and steal valuable personal information, such as passwords and payment details, which is then uploaded to the threat actor’s server. They are distributed through malicious downloads and links, like those found in the video descriptions in this case.
CloudSEK notes that YouTube, with 2.5 billion monthly users, is a prime target for adversaries, who try to trick the platform’s automated content-checking process in various ways.
These include using region-specific tags, adding fake comments to make videos appear legitimate, and simply flooding the platform with multiple videos to compensate for those removed or banned. CloudSEK found that 5-10 of these malicious videos are uploaded every hour.
To game search rankings, the uploaders also use hidden links and random keywords in multiple languages, so that YouTube’s algorithm ends up recommending the videos.
To obscure the malicious nature of the links, the attackers also use link shorteners such as bit.ly, as well as links to file-hosting services such as MediaFire.
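Because shorteners and file hosts hide the true destination of a link, one simple defensive check is to flag URLs whose host belongs to a commonly abused domain. The sketch below is a minimal, hypothetical illustration; the domain list is illustrative only, not a vetted threat feed.

```python
from urllib.parse import urlparse

# Illustrative blocklist of domains commonly abused to disguise malicious
# downloads (per the article: link shorteners and file-hosting services).
SUSPICIOUS_DOMAINS = {"bit.ly", "mediafire.com"}

def is_suspicious(url: str) -> bool:
    """Flag links whose host matches, or is a subdomain of, a listed domain."""
    host = urlparse(url).netloc.lower().split(":")[0]
    return any(host == d or host.endswith("." + d) for d in SUSPICIOUS_DOMAINS)

print(is_suspicious("https://bit.ly/3abcdef"))            # True
print(is_suspicious("https://www.mediafire.com/file/x"))  # True
print(is_suspicious("https://example.com/readme"))        # False
```

A real filter would also resolve the shortened link to its final destination before judging it, since the shortener itself reveals nothing about where the link leads.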
“The threat of infostealers is rapidly evolving and becoming more sophisticated,” said CloudSEK researcher Pavan Karthick. “In a worrying trend, these attackers are now using AI-generated videos to increase their reach, and YouTube has become a convenient platform for their distribution.”
CloudSEK suggests that “traditional string-based rules will not be effective against malware that dynamically generates strings and/or uses encrypted strings.”
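CloudSEK’s point can be illustrated with a toy example: a static string rule matches a plaintext marker in a file, but misses the same marker once it is XOR-encoded and only decoded at runtime. The marker and function names here are hypothetical, not taken from any real malware or detection product.

```python
# Hypothetical marker a string-based rule might look for.
MARKER = "stealer_c2"

def string_rule_matches(payload: bytes) -> bool:
    """A naive static rule: does the raw payload contain the marker?"""
    return MARKER.encode() in payload

plain = b"config: stealer_c2.example"
# XOR-encode every byte; the marker no longer appears in the stored bytes.
encoded = bytes(b ^ 0x5A for b in plain)

print(string_rule_matches(plain))    # True  -- rule fires on plaintext
print(string_rule_matches(encoded))  # False -- same content evades the rule

# The malware decodes the string at runtime, after the static scan is over.
decoded = bytes(b ^ 0x5A for b in encoded)
print(string_rule_matches(decoded))  # True
```

This is why CloudSEK recommends monitoring attacker tactics and behavior rather than relying on static string signatures alone.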
Instead, it recommends that companies take a more manual approach, closely monitoring threat actors’ tactics and techniques to properly identify threats.
In addition, CloudSEK suggests conducting awareness campaigns, sharing simple advice such as not clicking unknown links and using multi-factor authentication to secure accounts, ideally with an authenticator app.