China’s PLA Explores AI Integration to Improve Military Combat Proficiency
Chinese scientists are integrating AI into military with PLA working on autonomous AI for complex battle scenarios, raising ethical issues.
Chinese scientists are pushing the boundaries of military artificial intelligence (AI) by integrating ChatGPT-like technologies into an experimental project. A group of scientists in China working with the People’s Liberation Army (PLA) is trying to make the country’s military AI better at dealing with unexpected situations involving humans.
They want the AI to be smarter in handling scenarios with human opponents, according to several Chinese media reports. The announcement of this project marks the first time China has publicly confirmed the use of commercial large language models (LLMs) in military applications.
Moreover, the move raises questions about the potential risks and ethical considerations of deploying sophisticated AI, which has been lauded for its capabilities but also criticized for a lack of control and the potential for unintended consequences.
The research team has established a physical link between their AI system and commercially developed language models, namely Baidu’s Ernie and iFlyTek’s Spark.
The military AI collects large volumes of data from sensors and frontline reports, converts it into text or images, and shares it with the commercial models. Once the data is passed along, the military AI can converse with those models without any human involvement, even generating prompts for more detailed exchanges, such as battle simulations, entirely on its own.
The details of the project appeared in a peer-reviewed paper published in December 2023 in the Chinese academic journal Command Control & Simulation. In the paper, scientist Sun Yifeng and his team from the PLA’s Information Engineering University stated that both people and machines could benefit from the project.
One of the key concerns raised by computer scientists is the potential risk associated with this level of autonomy. Comparisons have been drawn to the scenarios depicted in popular culture, with one scientist cautioning that without careful handling, the situation could evolve into a narrative similar to the Terminator films, where AI becomes uncontrollable.
The published paper sheds light on the project’s goals, emphasizing the desire to make military AI more “human-like.” This involves better understanding the intentions of commanders at all levels and improving communication with human counterparts.
The integration of commercial large language models is expected to deepen the military AI’s understanding of human behavior.
Predicting the Next Move on the Battlefield
In a simulated experiment outlined in the paper, the military AI provided information about a hypothetical US military invasion of Libya in 2011 to Ernie. After several rounds of dialogue, Ernie successfully predicted the next move of the US military.
The research team contends that such predictive capabilities could compensate for human weaknesses, addressing issues such as biases in human cognition that may lead to overestimating or underestimating threats on the battlefield.
However, the disclosed information in the published paper is acknowledged as only the tip of the iceberg. The research team has deliberately kept certain aspects of the project confidential, including how military and commercial models can learn from past failures and mutually acquire new knowledge and skills.
It’s worth noting that China is not alone in exploring the military applications of AI. Various branches of the US military have expressed interest in similar technologies, exploring applications ranging from intelligence analysis and psychological warfare to drone control and communication code decryption.
As the global race for AI supremacy intensifies, cautionary voices among scientists underscore the need for responsible and ethical development, warning of the risks of granting powerful AI systems unrestrained access to military networks and confidential information.