On April 26, 2024, the Department of Homeland Security released a report on reducing risk at the intersection of artificial intelligence (AI) and chemical, biological, radiological, and nuclear (CBRN) threats. Though a relatively new technology, AI is growing rapidly and poses a significant risk to American national security. Without a defined regulatory policy for AI, the United States finds itself at an impasse between adopting strict limits on AI (European-style regulation) and maintaining minimal restrictions that allow greater public access to such systems. Given the risk posed by continuing under a minimal AI framework, the United States must adopt European-style regulations, paired with open collaboration, to mitigate CBRN threats.
Because they can be aided by AI, CBRN threats pose a unique and growing hurdle to preventing adversaries’ strikes on U.S. soil. The stages of a CBRN attack are as follows: planning, selection of method and target, acquisition of materials, weaponization of materials, and attack. AI can streamline this process for bad actors. According to researchers at UC Berkeley, AI can “provide detailed, domain-specific knowledge (either explicit or tacit) for creating or employing CBRN”. Moreover, unregulated AI allows bad actors to research information that aids the creation of CBRN weapons while remaining undetected by law enforcement and intelligence agencies. Malign actors can thus use AI to stage CBRN attacks with greater ease and efficiency. Without fear of being caught by law enforcement, those developing CBRN threats will become emboldened in their planning, choice of method, and eventual attack. Furthermore, the ability to work covertly compounds the risk: it renders law enforcement’s prevention techniques obsolete and thereby heightens the likelihood of a successful attack. It is imperative that the United States adopt strict regulations to preempt this level of capability.
In the European Union (EU), the first comprehensive AI regulation, the AI Act, was adopted in March 2024. The regulation classifies AI models into risk levels, with those deemed “high risk” or “unacceptable risk” requiring intervention and oversight. Unacceptable-risk AI, such as biometric manipulation used for facial recognition or cognitive behavioral manipulation used in voice-activated toys for children, is effectively banned. High-risk AI systems fall into two categories: products under the purview of EU product safety regulations (automobiles, aviation, and medical devices) and systems that must be registered in an EU database (law enforcement, public benefits, and immigration management). Such systems are assessed by the EU before being placed on the market. Additionally, a transparency clause covers generative AI (e.g., ChatGPT): it is not classified as high or unacceptable risk, but its use must be disclosed.
A potential flaw in this policy is the possibility of falling behind in the AI development race. Under strict regulations on high-risk AI, bad actors and other nations could continue advancing their experimental AI while EU developers could not. Lacking the same level of technology, the EU could become a target because of its weakened capabilities. Moreover, strictly regulated actors such as the EU would be ill-equipped to counter advanced unregulated AI: unaware of the new potential applications of AI-assisted CBRN attacks, they would have no thorough prevention plan in place. However, European-style regulations pose the greatest security barrier to bad actors. Banning the development of unacceptable-risk AI prevents adversaries from accessing and misusing that technology for CBRN threats. Even with the risk of falling behind developmentally, the advantage gained in thwarting cybersecurity breaches that aid CBRN attacks is invaluable.
In a similar vein, strict regulation such as the EU AI Act poses a potential threat to economic growth, as it may stifle innovation. Businesses intending to develop technology that falls within the high-risk category of the EU framework must obtain regulatory pre-approval. Pre-approval creates a barrier to entry, which in turn unintentionally yields less innovation. Likewise, EU businesses have publicly stated that such regulations would jeopardize technological sovereignty. If the regulations were to have that effect, the incentive to further develop AI would be lost, and entities with more relaxed rules could surpass those bound by a framework like the EU AI Act. However, given the constantly evolving nature of AI, “there is no consensus in the academic literature on the effects of AI” with regard to economic impact. It is not justifiable at this point to let a speculative downturn in the technology industry negate the protection gained by a framework similar to the EU’s.
In the United States, there is no comprehensive AI regulatory policy; instead, there are minimal federal laws and guidelines. One of the few federal laws is the National AI Initiative Act of 2020, which created the National Artificial Intelligence Initiative Office to implement the U.S. AI regulatory strategy. However, there is no such strategy to implement. This leaves the United States vulnerable to CBRN attacks. If the United States adopted a framework more relaxed than European-style regulations, making AI more readily available, it would open itself up to potential attacks. For example, in a study examining AI and its risk of enhancing CBRN attacks, the Department of Homeland Security (DHS) found that “lower barriers to entry for all actors across the sophistication spectrum may create novel risks to the homeland from malign actors’ enhanced ability to conceptualize and conduct CBRN attacks”. Moreover, public knowledge of weaknesses in AI regulation of biological and chemical systems, combined with the rapid development of such tools, may unintentionally produce new and dangerous research for malign actors to access and use in planning CBRN attacks. So long as U.S. AI regulation remains minimal to nonexistent, the threat AI poses to American security is ever-present.
However, incorporating AI into detection and prevention systems may be fruitful for U.S. security. The DHS also found in its study that the “integration of AI into CBRN prevention, detection, response, and mitigation capabilities could yield important or emergent benefits”. If the U.S. were to allow the development of such systems, even with their potential consequences, its defenses would become familiar with the forms this technology takes and could identify its use promptly. The ability to reverse engineer AI used in potential attacks on U.S. soil would only bolster detection capabilities. However, even if the U.S. implemented this policy, other nations deploying AI may not be forthcoming about their own developments, leaving potential indicators unseen and overlooked by U.S. detection systems. With the constant advancement of AI, this information asymmetry poses a substantial threat.
To avoid these pitfalls, the U.S. must build widespread collaboration and communication with other nations and entities into its policy. In its report, the DHS recommends that stakeholders “collaboratively develop and encourage adoption of guardrails to protect against reverse engineering, loss, or leakage of sensitive AI model weights by both non-state and state actors”. This could be implemented through insider-threat detection and training programs, interagency communication, and shared guidelines with allied nations. Furthermore, it is essential that the U.S. engage with international stakeholders to develop comprehensive frameworks for managing AI risks. While organizations exist to protect broad national security interests (e.g., the United Nations Security Council, DHS, and NATO), none has been created with the sole purpose of regulating AI. This gap must be closed to prevent the security deficiencies that come with the absence of a global framework and a dedicated AI organization. To prevent information asymmetry, collaboration must be not only interagency but global.
Collaboration cannot be limited to government agencies. The rise of AI is due in part to the private sector and its ability to continually advance AI development. If all developers and critical users of AI move in lockstep on regulatory policy, a robust defense will be in place to mitigate malign actors. Leaving any facet of the technology or government sectors weaker in cybersecurity makes breaching highly developed AI easier, potentially leading to the weaponization of that technology. Any entity involved in AI development that lags behind on regulation immediately becomes a target for adversaries’ cyberattacks aimed at accessing its technology. Given the significant risk an AI-aided CBRN attack poses, this cannot be allowed to occur.
European-style regulations with an emphasis on collaboration are the path the U.S. must follow in AI regulatory legislation. Building a strong regulatory framework around high-risk AI is imperative to keep it out of the hands of malign actors. Moreover, if such regulations operate in tandem with open collaboration among governments and the private sector alike, the threat of CBRN attacks is significantly reduced. Were the U.S. to continue down its current path, AI-assisted CBRN attacks would remain feasible. This era of unregulated artificial intelligence must be brought to a swift end to preserve American national security.