Using non-traditional checks
Actors outside of the U.S. government can challenge unlawful use of AI
As Chapter 1 discussed, we tend to assume that the only actors who have access to government secrets are executive officials (including whistleblowers and leakers), members of Congress (and some of their staff on the intelligence, armed services, and foreign affairs committees), and federal judges who sit on the Foreign Intelligence Surveillance Court (FISC) or occasionally hear criminal cases that implicate classified evidence or civil cases that involve state secrets. As a result, we often assume that those actors are the only ones positioned to track secret government activity and assess whether that activity comports with our public law values.
In fact, other types of actors have access to secret U.S. government information and activities, including foreign allies that are U.S. partners in NATO or the Five Eyes alliance; some U.S. state and local officials; and certain companies, including technology companies and defense contractors. These three categories of actors have specific expertise about new threats and targets. They have access to information or infrastructure that the federal government needs to execute its national security mission. And they often have legal or political commitments not to reveal the information (through nondisclosure agreements, contracts, or other information-sharing arrangements). From this perch, they can assess whether the U.S. government is acting lawfully and, in some settings, can challenge unlawful actions. They can assess whether U.S. policies are effective and sensible, and they may have incentives to urge the government to change unsound policies. Further, their presence behind the curtain of secrecy provides a setting in which the United States may need to explain and justify the decisions that it makes. We can find examples of these checks in the cyber, electronic surveillance, and counterterrorism space—which means it is worth considering whether we will see (or can stimulate the operation of) similar checks in the national security AI (NSAI) space.
In the AI setting, NATO allies are likely to have some visibility into the types of tools the U.S. military is developing, acquiring, or deploying. By virtue of the U.S. need for battlefield interoperability, these allies can influence—though not necessarily dictate—what types of NSAI the United States ultimately adopts. States and localities are less likely to have direct access to information about the federal government’s NSAI tools and systems. However, these actors have begun to adopt laws that implicate unclassified uses of AI, including limits on the use of facial recognition software. These restrictions may affect the types of tools that companies produce and that the federal government adopts. States and localities can also serve as canaries in the coal mine, offering a valuable source of information about what American citizens do and do not want their governments to do with AI.
Technology companies occupy a complicated position: they are at the leading edge of developing advanced AI and so will play a key role in shaping the tools that the government uses. This means that companies can—and often will—drive technological developments forward quickly, to stay ahead of the competition and make money. But it also means that companies could slow the development of NSAI tools that they believe are illegal or immoral.
Finally, the public has a role to play in unpacking the double black box. Members of civil society, including nongovernmental organizations and think tanks, can use the Freedom of Information Act (FOIA) to obtain information that was formerly classified, ask hard questions of the government, and write or speak about what they learn. The public also can pressure Congress and the Executive when journalists report on activities that do not comport with our public law values. Further, academic researchers can advance public knowledge about AI reliability, safety, and explainability and propose industry-wide ethical guidelines that may influence how government officials and defense contractors perceive their own (classified) work. The public can thus work to peel back the curtain of secrecy using lawful tools and, even absent classified information, can articulate its views about unclassified AI systems that may have classified analogues behind that curtain.
These “nontraditional” surrogates that are poised to provide checks on the Executive need not themselves act in a manner consistent with the public law values that we expect from the U.S. government. For instance, technology corporations are often not transparent in their activities—and generally are not legally required or expected to be. Fortunately, though, foreign (democratic) allies and U.S. states and localities face expectations that they will embody some set of public law values by virtue of being representative governments, even though their polities differ from the U.S. polity as a whole. Actors that are themselves committed to public law values may be more attractive and effective as secrecy surrogates, because they are more likely to be attuned to whether the Executive is practicing these values and more familiar with the underlying mechanisms that advance or inhibit such values.
(…)
B. Foreign Allies
Other than Congress, foreign allies are perhaps the group best positioned to urge the U.S. government to act lawfully, effectively, and with justification. Friendly foreign governments with advanced military and intelligence services interact frequently with their U.S. counterparts. To the extent that the United States wants or needs to work with these foreign partners to accomplish its military, counterterrorism, or intelligence goals, these partners have leverage over the United States and may serve to constrain U.S. actions. These checking mechanisms do not require formal international agreements between the United States and another state; indeed, they frequently arise in the absence of such agreements.
Thinking about foreign governments as surrogates for the U.S. polity in relation to secret government operations may seem counterintuitive. After all, foreign partners have distinct foreign policy and national security interests that do not fully align with those of the United States. They have legal and political duties to their own citizens, and sometimes their legal obligations impose higher or lower standards than U.S. law does. But they often have incentives to prod the U.S. Executive to adhere to its own public law values, and they have leverage over the Executive: the ability to share or withhold intelligence or to grant or withhold consent to use their territory, airspace, or cyber infrastructure for military, counterterrorism, or cyber operations. They benefit when, in joint operations, the U.S. Executive acts competently (because it enhances their own security) and lawfully (as it minimizes the likelihood that they will find themselves facing adverse parliamentary or judicial oversight). When foreign allies themselves take seriously public law values, including the need to ensure that their partners adhere to the law and justify their decisions, they can serve as useful secrecy surrogates.
These actors have served as checks in a number of classified settings before. The U.S. intelligence community, which is constrained by specific domestic and international laws, often works with the intelligence services of other states, each of which has its own legal obligations. A foreign intelligence service “can impose forms of discipline or structural limits on the activities of its counterparts, particularly when it implements its own domestic and international legal obligations,” and can affect how the United States “conducts activities such as interrogation, detention, targeted killings, and surveillance; the amount and type of intelligence the [United States] receives; and, less tangibly, the way in which the [United States] views its own legal obligations.” Today, the United States undertakes robust intelligence sharing and classified operations with foreign allies not only in traditional military settings but also in the cyber, elections, and counterterrorism contexts.
(…)
Peer constraints abound in the context of military coalitions. Consider NATO’s operations in Kosovo in 1999, which involved air strikes on Yugoslav and Serbian forces. During those operations, “the byzantine American procedures for approving targets needed to be replicated by every NATO government and its lawyers.” That is, each target that NATO bombed had to meet the highest common denominator of acceptability among the nineteen NATO states. A state that interpreted the targeting rules of the law of armed conflict (LOAC) particularly narrowly could “turn off” a proposed target that did not comply with that narrow interpretation.
These examples illustrate not only that the United States and its close allies cooperate on sensitive operations but also that those allies have opportunities and incentives to explore why the United States believes that certain actions are legal. Additionally, allies can press the United States about the effectiveness or wisdom of using particular tools and the reliability of its intelligence. And they can force the United States to justify why and how it is choosing to undertake a particular course of conduct, while allowing the operations to remain secret. Further, if one ally experiences aggressive oversight from outside actors such as parliamentarians or independent commissions, that oversight can affect the way in which the United States operates. In a few cases, foreign prosecutors and courts attempted to hold their own officials accountable for counterterrorism actions taken in partnership with the United States. Of course, there may be cases in which an ally faces limitations on serving as a robust check on the Executive, including when the ally worries that it will lose access to U.S. intelligence and cooperation if the United States begins to view it as a difficult partner. Moreover, some allies have insufficient intelligence capabilities to detect flaws in U.S. analysis or conclusions—though even these allies can still ask probing questions about the intelligence on its face.
From these past examples of military and intelligence cooperation, we can extrapolate how these “peer constraints” might operate in the NSAI setting and how U.S. allies could enhance the U.S. military’s and intelligence community’s compliance with public law values. Both the military and the intelligence community are likely to engage their counterparts on current and future uses of NSAI tools. First, allies can enhance the U.S. government’s compliance with international law, because allied officials include policy and legal experts with whom the United States can discuss difficult questions about, for example, LOAC. Working out how autonomous machine learning systems can comply with rules such as distinction and proportionality is a complicated legal and technical issue, one that discussions with peer militaries might elucidate. Further, an ally could refuse to cooperate with the U.S. military if U.S. AI systems violate those rules or if the systems are so opaque that it is impossible for the ally to have confidence that the systems are law-compliant.
Peer states can also impose physical restrictions on U.S. operations that involve systems they think may be unlawful. For example, if the United States embedded autonomous command and control into its nuclear launch systems located in Europe, host states that believed such systems were unlawful or unwise could pressure the United States to reverse that decision—or even deny the United States consent to use such systems on their soil. Or allies could decide that the United States was not permitted to launch fully autonomous lethal weapons systems from their territory or use them in joint operations with those allies. NATO as an institution, which possesses large quantities of useful military data, may already condition its data-sharing on the ways in which member states use that data.
Second, allies can serve as a check on the quality and effectiveness of U.S. NSAI systems by refusing to cooperate with the United States if they are concerned that the U.S. systems are technologically or morally unsound. Those allies will themselves confront the difficulty of penetrating U.S. algorithmic black boxes, however.
Third, allies might force the United States to explain and justify its choice to use a specific NSAI system—and explain and justify the decisions or recommendations that emerge from that system as well. The United States might well offer those explanations in an effort to persuade allies to conduct joint operations using those AI tools. Assume, for example, that the United States detained someone during a joint operation with the United Kingdom on the basis of a machine learning recommendation. U.K. forces might insist on understanding the parameters and confidence levels of that machine learning algorithm before being willing to participate in the person’s interrogation or to share information about the person with U.S. forces. These measures would not necessarily preclude the United States from using such machine learning systems, but they would add friction to their use, especially where the systems lacked explainability features or had not yet proven reliable over time.
Excerpted from The Double Black Box, published by Oxford University Press © 2025