Chinese scientists have developed a large language model (LLM) that can command military drones to attack enemy radar systems.
Scientists in China's defense industry have developed an AI that can enhance the performance of electronic warfare drones, according to the South China Morning Post (SCMP).
The large language model (LLM), similar to ChatGPT, can command drones equipped with electronic warfare weapons to attack enemy aircraft radars or communications systems.
Test results show that in air-combat decision-making it outperforms not only traditional artificial intelligence (AI) techniques such as reinforcement learning, but also experienced human experts.
This is the first widely published study to directly apply large language models to weapons.
Previously, this AI technology was largely confined to the war room, providing intelligence analysis or decision support to human commanders.
The research project was jointly carried out by the Chengdu Aircraft Design Institute under the Aviation Industry Corporation of China and Northwestern Polytechnical University in Xi'an, Shaanxi Province.
The institute is the designer of China's J-20 heavy stealth fighter.
The work is still in its experimental phase, according to a paper published by the project team on Oct. 24 in the peer-reviewed journal Detection & Control.
Among existing AI technologies, LLMs are the best at understanding human language.
The project team provided LLM with a variety of resources, including "radar and electronic warfare book series and related document collections."
Other documents, including air combat records, weapons inventory records, and electronic warfare operations manuals, were also incorporated into the model.
According to the researchers, most of the training material is in Chinese.
The designer of China's J-20 stealth fighter jet is part of a research team involved in the AI project. Photo: Weibo
In electronic warfare, the attacker releases specific electromagnetic waves to suppress the radar signals emitted by the target.
Conversely, the defender will attempt to evade these attacks by constantly changing the signal, forcing the adversary to adjust its strategy in real time based on surveillance data.
Previously, it was thought that LLMs were not suitable for such tasks because of their inability to interpret data collected from sensors.
LLMs also tend to require longer response times, falling short of the millisecond-level reaction speeds demanded in electronic warfare.
To work around these challenges, the scientists outsourced the processing of raw data to a less complex reinforcement learning model. This traditional AI algorithm excels at understanding and analyzing large amounts of numerical data.
The “observation value vector parameters” extracted from this preliminary process are then converted into human language through a machine translator. The large language model then takes over, processes, and analyzes this information.
The compiler converts the large model's responses into output commands, which ultimately control the electronic warfare jammer.
According to the researchers, the experimental results confirmed the feasibility of the technology. With the help of reinforcement learning algorithms, the generative AI can rapidly adjust attack strategies up to 10 times per second.
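The pipeline described above can be sketched roughly as a loop of four stages: a reinforcement-learning front end condenses raw sensor data into an observation vector, a "machine translator" renders that vector as human language, the LLM chooses a strategy, and a compiler turns the answer back into a jammer command. The sketch below is purely illustrative, assuming hypothetical names and data; all functions (`rl_preprocess`, `vector_to_text`, `llm_decide`, `compile_command`) and values are placeholders, not the researchers' actual code.

```python
# Hypothetical sketch of the hybrid pipeline described in the article.
# Every name and value here is an illustrative assumption.

def rl_preprocess(raw_sensor_data):
    """Stand-in for the reinforcement-learning model that condenses
    raw radar/sensor readings into an observation-value vector."""
    return [raw_sensor_data["freq_ghz"], raw_sensor_data["hop_rate_hz"]]

def vector_to_text(obs_vector):
    """Stand-in 'machine translator' that renders the numeric vector
    as a human-language prompt the LLM can reason about."""
    freq, hop_rate = obs_vector
    return (f"Enemy radar is transmitting at {freq} GHz and hopping "
            f"frequencies {hop_rate} times per second. "
            f"Recommend a jamming strategy.")

def llm_decide(prompt):
    """Placeholder for the large language model's strategy choice."""
    if "hopping" in prompt:
        return "generate false targets across the hop band"
    return "apply barrage noise"

def compile_command(llm_response):
    """Stand-in compiler that turns the LLM's text answer into a
    structured command for the electronic warfare jammer."""
    mode = "deception" if "false targets" in llm_response else "noise"
    return {"mode": mode, "action": llm_response}

# One cycle of the loop (the paper reports up to 10 cycles per second):
sensor_snapshot = {"freq_ghz": 9.4, "hop_rate_hz": 200}
command = compile_command(
    llm_decide(vector_to_text(rl_preprocess(sensor_snapshot))))
print(command["mode"])  # deception
```

In this toy version the "translator" step simply formats numbers into a sentence; in the reported system, that conversion is what lets a language model participate in a task it could not handle from raw sensor data alone.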
Compared with traditional AI and human experts, the LLM proved superior at generating numerous false targets on enemy radar screens. This deception strategy is considered more valuable in electronic warfare than simply blocking with noise or deflecting radar waves away from real targets.