Rethinking Nuclear Deterrence in the Age of Artificial Intelligence

Image credit: Modern War Institute at West Point

by Anum A Khan     20 October 2023

The most notable global shift of the past decade has been the rapid pace of technological transformation. The 21st century has brought significant new technologies for waging warfare, adopted both incrementally and by design. Artificial Intelligence (AI) is becoming ever more relevant and is constantly changing our world. Although AI is often perceived as a recent development, the twin rovers Spirit and Opportunity, sent to Mars in 2003 for 90-day geological missions, kept operating for years (Opportunity for nearly 15) thanks in part to AI-based problem-solving. Militaries are now making hay while the AI sun shines, and its application is considered almost as revolutionary as the discovery of gunpowder. AI allows machines, including weapons, to perform intelligent tasks such as learning, planning, reasoning, and executing missions, simulating the natural intelligence displayed by humans.

While AI has exciting prospects for knowledge sharing and innovation, this development also raises serious concerns about its future implications for national security, command and control systems, and strategic stability at the regional and international levels. In 2017, President Putin declared that "whoever reaches a breakthrough in developing AI will come to dominate the world." He was stating the obvious, and partly vowing to catch up with the American technological lead. In the case of AI, it is the technology that drives the strategy. The character of warfare has not changed with AI: war remains, in Clausewitz's words, "an act of force to compel our enemy to do our will."

The use of AI in war is incredibly controversial, but it is undeniably advancing. Thanks to unprecedented American and NATO assistance, Ukraine has produced asymmetric results against Russia's "special military operation." In this proxy war, the AI, satellite, drone, and cyber capabilities provided to the Ukrainian testbed have been central to the outcomes of the conflict thus far.

Lethal Autonomous Weapon Systems (LAWS) present several challenges and threats that need to be explored thoroughly in light of the use of AI in military operations. AI-enabled systems can cause miscalculations that not only deepen the trust deficit among nuclear weapon states but also lower nuclear thresholds, thereby fueling the chances of preemptive strikes during crises, particularly given the race for this advanced weaponry. For example, between 2017 and 2021, projected spending on drones was $3.9 billion for Russia and $4.5 billion for China, while the United States spent $17.5 billion.

AI is being integrated into UAVs, LAWS, missile defense systems, submarines, and aircraft. The US National Security Strategy 2022 focuses on joint capability development and information sharing among allies, while simultaneously deploying such technologies in a timely manner to safeguard a shared military-technological edge. Under the Third Offset Strategy, the US has introduced a Global Surveillance and Strike (GSS) system to counter, among other things, the proliferation of critical disruptive technologies. China's ambition is to become an AI superpower by 2030. In 2022, China claimed to have developed an AI-enabled air defense system model that can predict the trajectory of hypersonic glide vehicles and launch a swift counterattack. China is also developing a range of autonomous weapon systems, deploying robotics and unmanned systems in the land, air, sea, and space domains; some of these systems are AI-enabled, though AI is not used in the targeting process. Russia is focused on the use of AI in the maritime domain, including swarms of underwater combat drones. Israel's Iron Dome anti-missile system independently detects and shoots down missiles.

In South Asia, India has established a Centre for Artificial Intelligence and Robotics (CAIR) under DRDO to join the arms race in AI for military purposes with AI-enabled weapons capabilities. Moreover, India's Land Warfare Doctrine of 2018 places significant emphasis on AI and its integration into military systems. Invoking the bogey of a two-front war against Pakistan and China, India is motivated to acquire economic, diplomatic, and military support from the US, support that also encompasses emerging technologies.

In this regard, Indian military elites have emphasized employing AI in military systems before it is too late. India has been developing the Multi-Agent Robotics Framework, which would act like a team of soldiers and assist the Indian Army in future wars. India has also acquired 200 DAKSH autonomous robots, also called Remotely Operated Vehicles (ROVs), for defusing bombs in dangerous situations, and is collaborating with Japan on robotics and AI and their applications in military systems. Simultaneously, India has been working toward more sophisticated uses of AI in the defense and military sectors, including image interpretation, target recognition, assessment of missile range and kill zones, and the use of robots in more upgraded forms. The recent ambitious Indo-US technology, space, and defense initiatives also include cooperation on AI and quantum computing to counter China.

Experts believe that traditional concepts of deterrence and strategic stability need to be rethought in this third nuclear age, given the greater emphasis on such emerging technologies. AI-enabled systems can process data much faster than humans, which can shorten the OODA (Observe-Orient-Decide-Act) cycle and thereby help lift the fog of war. Nevertheless, AI still cannot be programmed with situational awareness mirroring human understanding. During a crisis, if states use AI to detect and target, and an adversary perceives that its opponent will act sooner via AI, the fear of being attacked first can compel the adversary to resort to preemptive strikes. Moreover, because AI-enabled systems lack situational awareness, any glitch in the system, or in data that requires situational awareness to interpret correctly, can produce faulty analysis and lead to escalation or even nuclear use.

Swarms of surface and underwater unmanned vehicles may be used to detect nuclear submarines. However, other experts argue that, given the vastness of the ocean, hundreds or thousands of underwater swarm vehicles may be needed to detect an SSBN, as most patrol the open seas. The most feasible option would therefore be to use AI-enabled detection capabilities inshore or at identifiable choke points. Yet this move itself can be considered escalatory in contemporary times, because the missiles carried by SSBNs now have longer ranges, expanding SSBN patrol areas, especially in the Afro-Asian Ocean. For instance, the Indian SSBN INS Arihant was deployed near Pakistani waters during the May 2019 crisis, which was seen as an escalatory move.

India is pursuing regional maritime domain awareness, which gives it an edge in mapping data across the Afro-Asian Ocean via AI-enabled technologies that can be used in pre-conflict scenarios. The resulting speed of data analysis can produce "hyperwar," with a drastic impact on strategic stability between India and Pakistan.

Interestingly, as with nuclear weapons, the major powers' monopolies on AI, including LAWS, may not last. Over time, if these technologies are not regulated under international law, LAWS will also proliferate and become vulnerable to misuse. One can only imagine the consequences of such technologies spreading to quasi-state actors.

The whole concept of deterrence rests on the fact that nuclear weapons are political weapons, not weapons of use; they exist to deter an adversary. If target selection and decision-making are ever left to AI-enabled weapons, "to deter" may lose its credence, and the nuclear taboo, maintained since the US use of nuclear weapons, may be broken.

This shows that AI will alter deterrence dynamics and coercion between nuclear weapon states in unique ways. Hence the debate at international forums over keeping humans in the loop. In view of the foregoing, a preemptive ban on the development of LAWS would be ideal, rather than efforts merely to regulate their deployment and use. CBMs and responsibility-based approaches through non-legally binding Transparency and Confidence Building Measures (TCBMs) are only interim half-measures against the grave threat the world faces today. A collective moratorium on the use of AI for military purposes, until a legally binding instrument is concluded, is the most viable solution.