According to the South China Morning Post, Chinese scientists have created AlphaWar, an artificial intelligence (AI) that behaves like a human during military simulations, or so-called war games. Even highly skilled military strategists who played lengthy games with or against the AI were unable to identify it as a machine.
The team asserted that AlphaWar had passed the Turing test in a study published on February 17 in the Chinese-language journal Acta Automatica Sinica. The machine was named after Google DeepMind’s AlphaGo, the first AI to outperform the best human players at the Chinese board game Go.
Alan Turing devised the Turing test in 1950 to assess whether a machine can exhibit intelligent behavior indistinguishable from a human’s. In the test, a person converses with an unseen counterpart and tries to determine, from its responses to questions and remarks, whether that counterpart is a computer or another human; software that fools the judge passes. The test is used to gauge how far an AI has developed in mimicking human behavior and intelligence.
In AlphaWar’s case, it managed to evade detection in professional war games.
AlphaWar was created by a team at the Beijing Institute of Automation, Chinese Academy of Sciences, led by Professor Huang Kaiqi. Huang and his co-authors noted in their paper that the AI passed the test in 2020, without going into detail. Why the news took more than two years to reach the public is unknown.
Even the most advanced computers have struggled to replicate real-world conflict, according to Huang’s team.
In recent years, AI technology has developed quickly and defeated human players in challenging video games such as StarCraft, which is substantially more difficult than Go.
Humans, however, remain among the most important elements of military war games because they can make random mistakes or pull off unexpected feats in a highly unstable environment, with no prior knowledge of their adversaries.
According to Huang’s team, this unpredictability, which can change how a battle plays out, is very challenging for an AI to learn and reproduce. They added that a war game is not merely a game, because its outcome can mean life or death for many people.
The research paper suggests that AlphaWar could devise more effective strategies than people. The machine improves by learning from military strategists or by competing against itself, though in several areas, including unit coordination and weapon use, it still has to catch up with the finest human strategists.
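The self-play approach mentioned above can be illustrated with a toy sketch. This is purely hypothetical and not AlphaWar’s actual algorithm: an agent plays rock-paper-scissors against a frozen snapshot of its own earlier policy, reinforcing moves that win and periodically refreshing the snapshot.

```python
import random

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "scissors", "paper": "rock", "scissors": "paper"}

def sample(policy):
    # Draw a move in proportion to its current (unnormalised) weight.
    return random.choices(MOVES, weights=[policy[m] for m in MOVES])[0]

def self_play_train(rounds=2000, lr=0.01, seed=0):
    """Toy self-play loop: learner vs. a frozen copy of itself."""
    random.seed(seed)
    policy = {m: 1.0 for m in MOVES}   # the learning agent's move weights
    opponent = dict(policy)            # frozen snapshot of the same agent
    for step in range(rounds):
        mine, theirs = sample(policy), sample(opponent)
        if BEATS[mine] == theirs:      # win: reinforce the winning move
            policy[mine] += lr
        elif BEATS[theirs] == mine:    # loss: discourage the losing move
            policy[mine] = max(0.01, policy[mine] - lr)
        if step % 500 == 499:          # refresh the opponent snapshot
            opponent = dict(policy)
    return policy
```

In this sketch the opponent is always an earlier version of the learner, so the agent generates its own training data without any human games, which is the general idea behind self-play systems such as AlphaGo.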
According to Huang’s team, new technologies such as large language models akin to ChatGPT could be employed to improve the performance of AlphaWar and other AI war games.