Chinese researchers linked to the People's Liberation Army (PLA) have developed an AI tool, named "ChatBIT," that builds on Meta's publicly available Llama 2 large language model (LLM) for potential military applications. The development highlights how readily military entities can adapt open-source AI models for strategic use.
The ChatBIT tool is designed to gather and process intelligence data, as well as provide accurate and reliable information for operational decision-making in military contexts. According to an academic paper published in June, the researchers fine-tuned the Llama 2 13B LLM to optimize it for dialogue and question-answering tasks specific to military scenarios.
This use of Meta's AI model, however, violates Meta's Acceptable Use Policy, which explicitly prohibits the use of its models for "military purposes, warfare, nuclear industries or applications, espionage, or activities under the U.S. Department of State's International Traffic in Arms Regulations (ITAR)". Molly Montgomery, Meta's director of public policy, emphasized that any use of the company's model by the PLA is "unauthorized and contrary to our acceptable use policy".
The fact that Chinese researchers were able to adapt an older version of the Llama model for military use feeds into the broader debate about the risks and merits of open-source AI. China's investment in AI is substantial, with reports indicating that the country plans to spend more than a trillion dollars in its bid to surpass the US in AI capabilities. In that context, the use of an "outdated version of an American open-source model" is particularly noteworthy: it suggests that China is actively leveraging whatever technologies are available to advance its own AI initiatives.
ChatBIT has been reported to perform impressively, reaching nearly 90% of the capability of OpenAI's GPT-4 on certain tasks. However, the benchmarks behind that figure, and whether the tool has been operationally deployed, remain unclear.
This development also aligns with China's broader strategy of integrating AI into various sectors, including defense. The country has been exploring multiple AI models for different applications, such as a chatbot trained on "Xi Jinping Thought," which reflects China's focus on socialism with Chinese characteristics. This chatbot, developed by China's Cyberspace Academy, operates exclusively on local servers and is designed to provide answers and generate reports within the ideological framework approved by the state.
The international community, particularly the US, is closely monitoring these developments out of national security concerns. The US government has recently announced measures to manage AI-related risks, including restrictions on investment in sectors deemed a potential threat to national security. Analysts like William Hannas of Georgetown University's Center for Security and Emerging Technology suggest that efforts to restrict Chinese scientists' access to Western AI advancements may not be fully effective.
In summary, the creation of ChatBIT by Chinese military researchers using Meta's Llama 2 model highlights the complex and sensitive nature of AI development and its potential military applications. This incident underscores the need for stringent policies and international cooperation to regulate the use of AI technologies.