(Image credit- Tech Times)
Fake software posing as ChatGPT-based products is currently being used to target Facebook users.
Meta confirmed this cybersecurity threat in its Q1 2023 Security Report, released on May 3.
Guy Rosen, Meta’s chief information security officer, said that from the standpoint of malicious actors, ChatGPT is the “new crypto.”
He added that Meta’s security analysts had found roughly ten malware families impersonating ChatGPT and other AI applications in March alone.
Hackers on Facebook Making Use of ChatGPT’s Popularity
According to a recent report from The Sun UK, fraudsters are distributing software that claims to offer ChatGPT-based tools.
Meta emphasized that this software actually contains malware that gives hackers full access to victims’ devices.
The tech giant said it had already disabled hundreds of fraudulent web domains tied to ChatGPT and similar AI models.
Meta indicated that its team had investigated the matter and taken precautions to stop these malware strains from preying on users eager to try OpenAI’s ChatGPT.
In his official blog post, Rosen warned that “the generative AI space is rapidly evolving, and bad actors know it,” urging everyone to exercise caution.
How Meta Deals with Cybersecurity Problems
According to Meta, hackers increasingly target Facebook users who run businesses, using ChatGPT-based tools as a lure.
The company also noted that these malicious actors tend to go after users who depend on the platform for their livelihoods.
In response, Meta decided to launch a new account type dedicated to business use: Meta Work accounts.
With these, Meta says users can access Facebook’s Business Manager features without linking their personal Facebook identities.