Meta Security Engineers Discover Malware Posing as ChatGPT to Compromise Accounts
Malware poses as generative AI tools like ChatGPT to compromise user accounts, according to security engineers and researchers at Meta.
Security engineers and researchers at Meta have found that malware operators are using generative AI tools as their latest ploy to spread malicious software.
With generative AI being a hot topic, malware campaigns have recently taken advantage of people’s interest in OpenAI’s ChatGPT, using it to lure people into installing malware. Meta security engineers Duc H. Nguyen and Ryan Victory wrote in a blog post that the ultimate goal of these campaigns is to compromise businesses with access to ad accounts across the internet.
Malware operators are abusing a range of platforms across the internet — including Dropbox, Google Drive, Mega, MediaFire, Discord, Atlassian's Trello, Microsoft OneDrive, and iCloud — to host malware that pretends to offer AI functionality.
Since March 2023, researchers have discovered several malware strains that exploit interest in ChatGPT and similar topics to gain access to online accounts. For instance, threat actors developed malicious browser extensions that claimed to provide ChatGPT-related features and made them available in official web stores.
Malware operators then advertised these malicious extensions through social media and sponsored search results to trick users into installing them. To evade detection by the official web stores, some of the extensions even included working ChatGPT features.
Meta's security engineers said they had blocked the sharing of over 1,000 ChatGPT-themed malicious links on the company's platforms and had passed this information to industry peers so they could take appropriate countermeasures.
As with previous malware attacks like Ducktail, the perpetrators behind these new campaigns have had to adjust their strategies quickly in response to blocking and public reporting; they are resorting to methods such as cloaking to evade detection from automated ad review systems and utilizing popular marketing tools, such as link-shorteners, to conceal the true purpose of their links.
They are also shifting to other popular themes, such as Google's Bard and TikTok marketing support. After larger platforms took action against them, some of these campaigns moved to smaller platforms, such as Buy Me a Coffee, to distribute their malicious content.
With the ongoing hype surrounding generative AI, users should be wary of unsolicited links or downloads, particularly ChatGPT-related applications that may appear on browser web stores or sidebars.