(Image credit: Sky News)
According to a study by British researchers, OpenAI’s ChatGPT has a liberal bias.
This underscores the ongoing difficulty artificial intelligence companies face in controlling the behavior of their bots, especially as millions of users around the world rely on them.
Manifesting Systematic Bias in Political Responses
ChatGPT was asked to respond to a survey on political opinions as part of a research project run by University of East Anglia scholars. According to the reports, the aim was to capture how backers of liberal parties in the US, the UK, and Brazil might answer those questions.
The same questions were then put to ChatGPT with no further instructions, allowing the two sets of responses to be compared.
According to the researchers, the findings revealed a “significant and consistent political inclination towards the Democrats in the U.S., Lula in Brazil, and the Labour Party in the U.K.” — “Lula” referring to Brazil’s progressive leader, Luiz Inácio Lula da Silva.
Fabio Motoki, a lecturer at the University of East Anglia in Norwich, England, and a co-author of the most recent study, cautioned that while ChatGPT claims to have no political attitudes or convictions, the truth tells a different tale.
Furthermore, he emphasized the reality of hidden biases and expressed concern about the erosion of public confidence and the potential impact on election results.
Testing out ChatGPT
The researchers devised a novel technique to assess ChatGPT’s political neutrality. They prompted the AI to adopt several political personas and respond to more than 60 ideological questions.
They then compared these responses with ChatGPT’s default answers to the same questions. This allowed them to see how an assigned political viewpoint affected the AI’s responses.
According to the reports, the researchers used a deliberate strategy to address the inherent unpredictability of the “large language models” that underpin AI systems like ChatGPT. Each question was asked 100 times, and the resulting answers were collected.
These multiple responses were then subjected to an intensive 1,000-round “bootstrap” procedure to improve the reliability of the inferences drawn from the generated text.
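The study’s code is not reproduced here, but the repeat-and-bootstrap idea it describes can be sketched in Python. Everything below is illustrative: the scores, the scoring scale, and the persona labels are stand-ins, not the researchers’ actual data or method.

```python
import random
import statistics

random.seed(0)  # fixed seed so the sketch is repeatable

# Illustrative stand-in data: agreement scores (0-3) for 100 repeated
# answers to one question, once under a liberal-party persona and once
# with no persona (ChatGPT's default). Real scores would come from the model.
persona_scores = [random.choice([2, 2, 3, 3, 3]) for _ in range(100)]
default_scores = [random.choice([1, 2, 2, 3, 3]) for _ in range(100)]

def bootstrap_mean_diff(a, b, rounds=1000):
    """Resample both score sets with replacement `rounds` times and
    record the difference in means each round, building a distribution
    from which a confidence interval can be read off."""
    diffs = []
    for _ in range(rounds):
        resampled_a = random.choices(a, k=len(a))  # sample with replacement
        resampled_b = random.choices(b, k=len(b))
        diffs.append(statistics.mean(resampled_a) - statistics.mean(resampled_b))
    return sorted(diffs)

diffs = bootstrap_mean_diff(persona_scores, default_scores)

# A rough 95% interval: the 2.5th and 97.5th percentiles of the 1,000 rounds.
low, high = diffs[25], diffs[975]
print(f"bootstrap 95% interval for persona-minus-default mean: [{low:.2f}, {high:.2f}]")
```

If the resulting interval excludes zero, the persona-conditioned answers differ systematically from the defaults — the kind of “significant and consistent” gap the researchers report.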
Resulting in Biased Responses
According to the reports, ChatGPT is trained on a considerable amount of text data drawn from the internet and other sources. The researchers identify potential biases in this dataset that might affect the chatbot’s responses. The way the algorithm is configured to respond is another potential influence; according to the experts, it can accentuate any biases present in the training data.
Similar findings have been made by researchers at the Universities of Washington, Carnegie Mellon, and Xi’an Jiaotong, who discovered political bias in AI language models like ChatGPT and GPT-4.
They asked questions about feminism and democracy and discovered that ChatGPT and GPT-4 have a left-wing libertarian inclination, while Meta’s LLaMA has a right-wing authoritarian inclination.
Meanwhile, training these models on biased data affects both their behavior and their capacity to recognize false information and hate speech.