
Deepfake Voice Misuse Cases Using ElevenLabs AI Beta Platform Are Increasingly Common

This week, ElevenLabs reported a growing number of cases in which its deepfake voice tool is being misused. Some users have gone well beyond entertainment: according to reports, people have used the model to create homophobic, racist, and transphobic content.

ElevenLabs launched its AI platform, Prime Voice AI, in beta on January 23. The company soon discovered that the tool, which lets users mimic the voices of their favorite celebrities, was increasingly being abused on Twitter. 4chan is also seeing a significant rise in its use; according to a separate Motherboard report, some users posted clips of AI-generated voices that sounded almost identical to well-known actors.

(Image credit: Engadget)

Numerous recordings appear to be near-exact matches for Emma Watson's voice. While it can be entertaining to hear such a convincing imitation, some people are abusing the AI platform for their own gain. According to Engadget, some users have abused the deepfake voice simulator to harass others, directing homophobic or racist remarks at them through Prime Voice AI.

It has not yet been confirmed that every audio clip circulating on 4chan was made with the company's tool, although ElevenLabs believes many of the users were abusing its beta platform, and a link to the ElevenLabs platform appeared in a "wide collection" of these files. To curb the widespread abuse, ElevenLabs has decided to ask users for feedback on its deepfake voice software.

(Image credit: Tech Times)

The company is working on safeguards to prevent users from abusing its AI platform. It may strengthen account verification to limit who can access the voice simulator, for example by asking users to provide ID or payment information before using the tool.

Related: https://www.gadgetany.com/news/bruce-willis-agent-disputes-assertion-of-selling-his-face-to-deepfake-ai/

ElevenLabs may also require a voice sample to verify each cloning request. It should come as no surprise that deepfake technology can be misused to some degree, but companies still have the means to prevent users from abusing their software.

By Monica Green

I specialise in the latest tech and tech discoveries.
