Elon Musk and the EU are at odds over the Twitter owner’s plan to rely more on volunteers and AI to moderate the social media site in response to stringent new regulations intended to police online content.
Musk has reportedly been told to hire more human moderators and fact-checkers to review posts, according to four people familiar with the discussions between Twitter executives, regulators, and Musk in Brussels.
This requirement complicates Musk’s effort to restructure the loss-making company he bought in October for $44 billion. More than half of Twitter’s 7,500 employees have been cut, including entire trust and safety teams in some offices, as the new owner looks for less expensive ways to monitor tweets.
Like other social media platforms, Twitter currently uses a combination of human moderation and AI technology to detect and review harmful content. Unlike its larger rival Meta, which owns Facebook and Instagram, it does not employ fact-checkers.
To combat the onslaught of misinformation on the platform, Twitter has also been using volunteer moderators for a feature called “community notes”—however, the tool is not used to address illegal content.
People with direct knowledge of the discussions say that Musk also told EU commissioner Thierry Breton in January that Twitter would depend more heavily on its AI systems.
They said Breton stated that while it was Twitter’s responsibility to determine how best to moderate the site, he expected the company to employ human moderators in order to comply with the Digital Services Act.
In a statement to the Financial Times, Twitter said: “We have had several fruitful discussions with EU officials about our efforts in this area and intend to fully comply with the Digital Services Act.”
“We will continue to use a combination of technology and knowledgeable staff to proactively detect and remove illegal content, while community notes will allow people to learn more about potential misinformation in a way that is informative, transparent, and reliable,” the company added.
The DSA is landmark legislation that will force big tech companies to police their platforms for illegal content more rigorously. Twitter and other major platforms must be fully compliant by September this year, at the latest. Those who violate the rules risk fines of up to 6% of global turnover. Musk told Breton that staff would be in place to comply with the DSA this year, but that hiring would take time.
Musk tweeted after their meeting in January, “Good meeting with @ThierryBreton regarding EU DSA. The goals of transparency, accountability, and accuracy of information are aligned with ours. @CommunityNotes will be transformational for the latter.”
More discussions about Twitter’s moderation plans have recently taken place between the company and EU regulators. Officials have acknowledged that pursuing the community notes model may significantly reduce the amount of misinformation, in a manner akin to how volunteer editors weed out false information on Wikipedia.
However, Twitter lacks the hundreds of thousands of volunteer editors that Wikipedia has, and it has a dismal track record on non-English language content moderation, a problem that also affects other social networks.
“Platforms should be under no illusion that cutting costs risks cutting corners in an area that has taken years to develop and refine,” said Adam Hadley, director of Tech Against Terrorism, a UN-backed organization that helps platforms police extremist content online. “We are concerned with the message Twitter’s most recent action sends to the rest of the industry.”
The European Commission said: “We believe that ensuring sufficient staff is necessary for a platform to respond effectively to the challenges of content moderation, which are particularly complex in the field of hate speech. We expect platforms to ensure the appropriate resources to deliver on their commitments.”