
As artificial intelligence (AI) becomes deeply embedded in every aspect of society, the outside world remains highly vigilant about its application in the military field. Taylor, commander of the U.S. Eighth Army in South Korea, recently publicly acknowledged that he has been using ChatGPT, a chatbot from American AI giant OpenAI, to refine his decision-making process, drawing worldwide attention.
Taylor, a senior US Army general, told a conference of the Association of the United States Army that he has been using ChatGPT to refine his decision-making process, The Times of India reported Tuesday. This is one of the most direct acknowledgments yet by a senior US military officer of the use of commercial AI tools for leadership tasks. At the conference, Taylor described how he was exploring the use of ChatGPT to enhance his command skills. “ChatGPT and I have recently become very close,” he said. Taylor said he is using the technology to explore how military and personal decisions affect not only himself but the thousands of soldiers he oversees. While the technology is useful, he acknowledged that keeping up with such fast-moving technology is a persistent challenge. “As a commander, I want to make better decisions,” the general said. “I want to make sure I make the right decisions at the right time to give me an advantage.”
Although he declined to give specifics, he emphasized using the tool to build analytical models and train subordinates to make more effective decisions. Taylor added that he was exploring how AI could support his decision-making process, not in combat situations but in managing day-to-day leadership, where “timely AI help provides a critical advantage.”
Business Insider said Taylor’s move is part of a broader push by the Pentagon to integrate AI into military operations. In August 2023, the Pentagon announced the creation of a generative AI task force called Task Force Lima, which is expected to play a key role in analyzing and integrating generative AI tools, including large language models, across the Department of Defense. The team, led by the Defense Department’s Chief Digital and Artificial Intelligence Office, is responsible for evaluating, synchronizing, and applying generative AI capabilities across the department, ensuring that the country remains at the forefront of the technology while maintaining national security.
According to reports, the US military has already used AI in combat simulation tests. In an experiment by the Air Force and the Defense Advanced Research Projects Agency (DARPA), AI controlled a modified F-16 fighter. In 2024, the AI-controlled fighter flew a simulated air battle against a human-piloted fighter, coming within 2,000 feet of the piloted aircraft, though the U.S. Air Force did not disclose which side won. Other AI programs are used by the Pentagon to sift through satellite data, track logistics, and streamline administrative paperwork for field units. In the Russia-Ukraine conflict in particular, the U.S. military developed a dedicated AI program to handle the large volume of satellite photos. In addition, U.S. Army special operations forces have used similar tools to reduce what they call the “cognitive load” on their personnel, using artificial intelligence to draft reports, process mission data, and analyze intelligence at scale.
According to the New York Post, the Pentagon believes that AI-driven systems are key to faster data processing and more accurate targeting, and that future conflicts could unfold at “machine speed,” where only those able to make instantaneous decisions beyond human capacity can control the course of the war. The Air Force Secretary at the time, Frank Kendall, warned last year that with the development of highly automated and autonomous kill chains, “the response time to make an impact is very short,” and a commander who fails to adapt “will not survive the next battle.”
Notably, AI giants including OpenAI are also working with the Pentagon on projects related to cybersecurity, helping analysts interpret data or write code. In June, the Pentagon announced a $200 million defense contract with OpenAI to develop artificial intelligence tools to address key national security challenges; the technology is expected to help U.S. military and intelligence personnel make faster and more accurate decisions under high pressure.
The international community is already concerned about the risks posed by the lack of appropriate regulation of AI applications in the military field. There is more wariness about the US military’s tendency to use AI in strategic decision-making than about applications for military support or tactical tasks. At an event in April, Bianca Helori, director of the Joint Chiefs of Staff’s AI program, said: “AI can significantly enhance the Joint Chiefs of Staff’s ability to integrate and analyze global military operations, leading to better and faster decision-making.” But using generative AI such as ChatGPT also poses problems, especially in command-level decision-making. The Pentagon has urged troops and leaders to be cautious when exploring these tools, warning that generative artificial intelligence could leak sensitive data. Without adequate training, AI can also produce seriously flawed answers, the so-called “AI hallucination” problem. If commanders rely on it to inform high-risk decisions, the consequences could be serious.