The Wikimedia Foundation recently partnered with Meta to help validate the accuracy of citations on Wikipedia. The move follows a scandal two years back, when a US teen wrote roughly 27,000 Wikipedia entries in a language he barely knew, a reminder that the online encyclopedia does not always contain correct information. Some people edit Wikipedia out of malice, but even well-intentioned contributors can introduce factual errors.
Furthermore, Wikipedia's footnotes are far too numerous for its volunteer editors to verify by hand, and more than 17,000 new articles go up on the site every month. As a result, numerous citations are incomplete, missing, or inaccurate. To help, Meta created an artificial intelligence (AI) model that methodically scans footnotes and checks their accuracy automatically.
Moreover, the AI model suggests alternative citations when it discovers poorly sourced information. It uses a natural language understanding (NLU) transformer model to capture the relationships between words and phrases in a sentence, and it draws on Meta's Sphere database, a "knowledge index" of more than 134 million web pages. The model is designed to find a single source that can validate each claim written in a Wikipedia article.
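The verification pipeline described above (embed a claim, search a knowledge index, surface the best-matching source) can be illustrated with a toy sketch. This is not Meta's actual system: the real model uses learned transformer embeddings over 134 million Sphere pages, while here a simple bag-of-words cosine similarity and a three-page corpus stand in, and all function names are hypothetical.

```python
import math
import re
from collections import Counter

def bow_vector(text):
    # Lowercased bag-of-words counts; a crude stand-in for the dense
    # NLU embeddings the real system would compute.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    # Standard cosine similarity between two sparse count vectors.
    dot = sum(count * b[token] for token, count in a.items())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rank_sources(claim, corpus):
    # Score every candidate page against the claim and return them
    # best-first, mimicking retrieval from a "knowledge index".
    claim_vec = bow_vector(claim)
    return sorted(corpus,
                  key=lambda page: cosine(claim_vec, bow_vector(page)),
                  reverse=True)

# Hypothetical miniature "knowledge index".
corpus = [
    "The Eiffel Tower was completed in 1889 in Paris.",
    "Photosynthesis converts sunlight into chemical energy.",
    "The Great Wall of China was built over many centuries.",
]

claim = "The Eiffel Tower opened in Paris in 1889."
best_source = rank_sources(claim, corpus)[0]
print(best_source)  # the Eiffel Tower page ranks first
```

A real citation checker would then compare the claim against the retrieved passage with an entailment model and flag the citation if no page in the index supports it; the retrieval step shown here is only the first stage.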
Additionally, Meta has demonstrated the model's capabilities, sharing an example of an incomplete citation on a Wikipedia page. Experts believe the tool could also help address Facebook's misinformation issues. "More generally, we hope people can use our work to assist fact-checking efforts and increase the general trustworthiness of information online," stated the AI model's creators. Meta now plans to use the model to build a platform that lets Wikipedia editors easily verify and correct citations.