The US election might be afoot, but the Biden administration is still in charge for now. And last month the White House issued a National Security Memorandum on AI that deserves close attention.
At nearly 40 pages (of legalese), the memo is the most comprehensive public articulation of US national security strategy and policy towards artificial intelligence. Along with the accompanying Framework to Advance AI Governance and Risk Management in National Security, it seeks to address growing concern that the United States is relinquishing aspects of its global lead in AI to China. The US administration has accepted that AI is an unstoppable force and essential to national security.
National Security Adviser Jake Sullivan described the memo as a roadmap to ensure the US lead in AI is translated into action and military edge, quickly. The need for speed is paramount: “We have to be faster in deploying AI in our national security enterprise than America’s rivals are in theirs.”
This represents a significant shift. It is now official policy that the United States must lead the world in the ability to train new foundation models, and the security apparatus is directed to make it so.
The memo expands the definition of AI to encompass its lengthy supply chain. Data, connectivity, energy generation and access, compute capacity (including semiconductors) and workforce are now included — elements I’ve described in a policy paper as the “architectures of AI”.
The memo has a significant focus on AI safety initiatives, which have been designed explicitly to increase the adoption of frontier AI tools by the intelligence community. In the Australian context, ethical considerations of AI have meant some areas of intelligence production could not be automated. If the focus on ethical adoption is overtaken by a (real or perceived) need for speed, this could have significant ramifications for trust in intelligence agencies and their long-term capabilities.
The United States has worked to maintain its lead over China in AI computing infrastructure using mechanisms including controls on chip exports and outbound investment. The inclusion of energy as a critical component of the underlying infrastructure for AI should come as no surprise. In September this year, the White House announced data and energy-related measures following an industry CEO roundtable.
Importantly, the memo calls for intelligence collection on AI (and foreign threats to US AI markets) to rise to a top-tier intelligence priority. It directs US agencies to work with AI developers on cybersecurity and counterintelligence to protect innovations and counter espionage efforts to steal US technologies. This accords with the Five Eyes startup advisory campaign. It’s also a message to US industry, especially tech companies, that the United States wants to incorporate AI products rapidly into intelligence systems and in a way that reduces overlap, gaps and conflicts. To do so, the tech sector will need to improve its engagement with the intelligence community.
On Monday, Meta announced that US and possibly Five Eyes agencies – and contractors – would be allowed to use Llama, its open-source AI model, a use previously banned under its terms of service.
The White House has also directed the intelligence community to “take actions to protect classified and controlled information, given the potential risks posed by AI”. It must consider how AI may affect declassification, as “AI systems have demonstrated the capacity to extract previously inaccessible insight from redacted and anonymised data”. This recognises that little is likely to remain secret forever. Much more can be known and inferred in an AI era – there is a shift in the role secrecy plays in intelligence.
The memo includes a vital aspiration – to lead globally on the safety, security and trustworthiness of AI as an international value proposition. It seeks to balance national security objectives with human rights and responsible use of AI. However, this section has the least developed policy measures. It is unclear whether the US government will in fact prioritise multilateralism as it shapes the global AI landscape.
While there is plenty to appreciate, the memo’s focus on speed contrasts sharply with the early guidance on AI ethics – for example, the Pentagon’s Responsible AI toolkit, which will need to be updated. According to some analysts, the memo leaves ethics as an afterthought. University of Virginia law professor (and former NSC adviser) Ashley Deeks argues that justifying decisions is especially important when operating in secrecy, yet the memo does not appear to contemplate external oversight.
Irrespective of who takes office after the presidential election, aspects of this policy are likely to remain. Both Kamala Harris and Donald Trump would continue to assert US leadership in AI. While some reports suggest Trump would repeal the underlying executive order, it seems likely that either way there will be a strong focus on moving quickly on AI and securing its long supply chain.
The candidates may disagree about what exactly that looks like and how to go about it. However, there is bipartisan consensus that adopting AI technologies for national security purposes and maintaining a lead in tech competition with China are critical.
The real question is whether the United States can drive the adoption of safe, responsible and transparent use of AI if speed is the primary objective.