
Is Machine Learning Causing Reproducibility Crises in Science?

(Image credit: Google)
AI hype has produced a wave of spurious results, as researchers in fields from medicine to sociology rush to adopt techniques they don't always understand. Few human affairs are as messy and terrifying as civil war. So in 2014, Princeton professor Arvind Narayanan and his doctoral student Sayash Kapoor grew suspicious when they came across a strand of political science research claiming to use artificial intelligence to predict the outbreak of civil wars with more than 90 percent accuracy. Several papers described astonishing results from machine learning, the technology beloved by tech giants that lies at the core of modern artificial intelligence. Applied to data such as a country's gross domestic product and unemployment rate, machine learning reportedly beat conventional statistical methods at predicting civil war onset by almost 20 percentage points.

On closer examination, however, the Princeton researchers found that many of the results were illusory. The problem lay in how the machine learning was applied. The goal of machine learning is to train an algorithm on past data so that it can operate on future, unseen information. But the authors of those papers had failed to properly separate the pools of data used to train and test their code. The resulting "data leakage" means the system is evaluated on data it has already seen, much like a student who has been given the answers before sitting an exam.

Kapoor says the machine learning pipelines in these papers were faulty, even though they claimed near-perfect accuracy. When he and Narayanan fixed the errors, they found that machine learning offered virtually no advantage over conventional methods.
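The data leakage failure mode described above is easy to reproduce. Here is a minimal, hypothetical sketch (invented for illustration, not code from the papers in question): a 1-nearest-neighbour classifier scored on the very rows it was trained on looks perfect, while the same model scored on properly held-out rows does not, because the labels here are pure noise.

```python
import random

random.seed(0)

# Synthetic "country-year" rows: two noisy economic features and a
# binary outcome that is, by construction, unpredictable.
data = [([random.gauss(0, 1), random.gauss(0, 1)], random.randint(0, 1))
        for _ in range(200)]

def predict_1nn(train_rows, x):
    # Return the label of the closest training row (squared Euclidean distance).
    return min(train_rows,
               key=lambda r: sum((a - b) ** 2 for a, b in zip(r[0], x)))[1]

def accuracy(train_rows, eval_rows):
    hits = sum(predict_1nn(train_rows, x) == y for x, y in eval_rows)
    return hits / len(eval_rows)

train, test = data[:150], data[150:]

leaky = accuracy(train, train)   # evaluated on data the model has already seen
honest = accuracy(train, test)   # evaluated on properly held-out data

print(f"leaky evaluation:  {leaky:.2f}")   # 1.00: each point is its own neighbour
print(f"honest evaluation: {honest:.2f}")
```

The leaky score is always a perfect 1.00 because every training point is its own nearest neighbour; the honest score hovers around chance, which is all these random labels allow.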
That experience led them to conclude that the incorrect use of machine learning is a widespread problem in modern science, and they set out to investigate whether misapplication of these new approaches was distorting results in other fields.

Artificial intelligence has been hailed as a potential game-changer for science because of its ability to unearth patterns that may be hard to discern with conventional data analysis. Scientists have used AI to predict protein structures, control fusion reactors, and probe the cosmos. In many instances, though, Kapoor and Narayanan warn, its impact on scientific research has been less than stellar: across the fields they investigated, they identified errors in 329 studies that applied machine learning.

According to Kapoor, many researchers use machine learning without a thorough understanding of its techniques and limitations. Meanwhile, the tech industry has rushed to offer AI tools and tutorials designed to entice newcomers, often as a way to promote cloud platforms and services. Kapoor says the hype has become so overblown that you can now take a four-hour online course and then apply machine learning to your scientific research. "Nobody has taken the time to think about what could go wrong."

Last month, Kapoor and Narayanan organized a workshop to draw attention to what they call a "reproducibility crisis" in science. They expected around 30 attendees but received more than 1,500 registrations, which they take as a sign that the problem is widespread. During the event, invited speakers recounted major cases in which AI had been misused in fields such as medicine and social media.
Michael Roberts, a senior researcher at Cambridge University, discussed a number of papers claiming to use machine learning to fight Covid-19, in which results were skewed because the underlying data came from a variety of different imaging machines. Jessica Hullman, an associate professor at Northwestern University, spoke about recurring problems in studies that rely on machine learning. Momin Malik, a data scientist at the Mayo Clinic, recounted his own experience with the technology: beyond the common errors, he said, machine learning is sometimes simply the wrong tool for the job.

Malik's go-to example of why machine learning is not always a good fit is Google Flu Trends, a tool the search company developed in 2008 to identify flu outbreaks faster from logs of search queries typed by web users. The project earned positive press, but it failed spectacularly to predict the severity of the 2013 flu season. An independent study later concluded that the model had latched onto seasonal search terms that have nothing to do with the prevalence of influenza: you cannot simply throw everything into a massive machine learning model and expect good results.

Most workshop attendees agreed that it may not be feasible for every scientist to become a master of machine learning, given the complexities involved. Amy Winecoff, a data scientist at Princeton's Center for Information Technology Policy, said that while it is important for scientists to learn software engineering principles, master statistical techniques, and invest time in maintaining data sets, this shouldn't come at the expense of domain knowledge.
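The Google Flu Trends failure mode lends itself to a toy demonstration. In this hypothetical sketch (all data invented for illustration), a model that has latched onto a seasonal proxy, "is it a winter month?", tracks flu perfectly in historical years where flu followed the seasons, then collapses in a season where the outbreak arrives early.

```python
# Historical "training" years: flu peaked in winter (Dec-Feb), three years' worth.
history = [(month, month in (12, 1, 2)) for month in range(1, 13)] * 3

def winter_proxy_model(month):
    # The "learned" rule: predict an outbreak whenever the month is a winter month.
    return month in (12, 1, 2)

train_acc = sum(winter_proxy_model(m) == flu for m, flu in history) / len(history)

# A later season where the outbreak arrives early (October-November instead).
shifted = [(month, month in (10, 11)) for month in range(1, 13)]
shifted_acc = sum(winter_proxy_model(m) == flu for m, flu in shifted) / len(shifted)

print(f"historical accuracy:     {train_acc:.2f}")    # 1.00
print(f"shifted-season accuracy: {shifted_acc:.2f}")  # 0.58
```

The proxy is wrong in five of twelve months of the shifted season: it predicts outbreaks in the three winter months where none occur and misses the two months where the outbreak actually happens. A genuine epidemiological signal would not degrade this way when the timing shifts.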
A theme running through the discussion was the need for closer collaboration between domain scientists and computer scientists to strike the right balance. The misuse of machine learning in science is a major problem in itself, but Malik said he is even more concerned about corporate and government AI projects, which are less open to outside scrutiny and whose misfiring algorithms can cause real-world consequences. The lesson, he suggests, is to approach anything built with machine learning or artificial intelligence with a healthy dose of skepticism. Kapoor concludes that the scientific community has at least started thinking about the misuse of machine learning, but warns that sloppy practice could have harmful long-term consequences.

By Saloni Behl

I've always had a crush on technology; that's why I love reviewing the latest tech for my readers.
