Why Artificial Intelligence Might Not Win a War

Mar 16, 2021
COMMENTARY BY
James Jay Carafano

Senior Counselor to the President and E.W. Richardson Fellow

James Jay Carafano is a leading expert in national security and foreign policy challenges.
Photo: Yuichiro Chino / Getty Images

Key Takeaways

Machine learning, which develops processes that mimic human brain functioning, is patterned on how brain cells work in a neural network.

Machine learning will no doubt be part of a family of technologies that delivers the next generation of computer services.

If neural networks and machine learning remain the dominant technologies guiding AI, the operational usefulness of AI in battle will be rather limited.

It’s widely presumed that artificial intelligence (AI) will play a dominant role in future wars. Maybe not Skynet and Terminator-level stuff, but plenty of independent hunter-killer vehicles blasting each other and the rest of us.

However, the future might unfold nothing like that. AI development, increasingly led by machine learning-enabled technologies, seems to be heading in another direction.

The History of Artificial Thinking

AI is a fairly plastic term. Its meaning has shifted over time, reflecting changes both in our understanding of what intelligence is and in the technology available to mimic it. Today, AI mostly describes a broad range of technologies that allow computers to enhance, supplement, or replace human decisionmaking. Machine learning is just one member of this family of technologies.

Since the advent of modern computers, terms such as “artificial intelligence” have conjured up images like the HAL 9000 computer from the movie 2001: A Space Odyssey. While computers that think and function with complete autonomy from human direction are still a ways off, machines with a dramatically increased capacity to evaluate information, make choices, and act on decisions have made remarkable progress over the last decade and established the foundation for the emerging technologies of machine learning. These technologies have broad applications in many fields, including defense and national security.

Machine learning, which develops processes that mimic human brain functioning, is patterned on how brain cells work in a neural network. This approach could be described as “data-driven”: inputs become the basis for establishing cause-and-effect relationships, much as human brains create knowledge and make judgments.
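To make the “data-driven” idea concrete, the sketch below shows a single artificial neuron, the basic building block of a neural network, adjusting its connection weights from example data until it reproduces a simple pattern. It is an illustrative toy only; the data, learning rate, and pattern are invented for the example.

```python
# A minimal sketch (illustrative only) of data-driven learning: a single
# artificial neuron adjusts its weights from examples, loosely mimicking how
# synaptic strengths change in a neural network.

# Training data: input pairs and the desired output (a simple AND-like pattern).
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(x):
    """Fire (1) if the weighted sum of inputs crosses the threshold, else 0."""
    total = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if total > 0 else 0

# Learning loop: nudge the weights whenever a prediction is wrong, so the
# cause-and-effect relationship is inferred from the data itself.
for _ in range(20):
    for x, target in examples:
        error = target - predict(x)
        weights = [w + learning_rate * error * xi for w, xi in zip(weights, x)]
        bias += learning_rate * error

print([predict(x) for x, _ in examples])  # expected: [0, 0, 0, 1]
```

The point is that nothing here is programmed with explicit rules; the relationship is inferred from the examples themselves.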

At the outset, the development of machine learning paralleled the evolution of modern computers. In 1943, Warren McCulloch and Walter Pitts created the first mathematical model of a neural network. In 1949, Donald O. Hebb’s pioneering work on neuroscience, The Organization of Behavior, advanced new concepts about the role of synaptic function in learning and memory. In 1952, IBM researcher Arthur Samuel introduced the term “machine learning,” applying the structure of neural networks to computer functioning.

Initial developments in AI found neural network research unpromising. In the late 1970s and 1980s, researchers looking to transform AI from science fiction into reality focused instead on logical, knowledge-based approaches to teach computers to “think.” Knowledge-based systems pair information with an “inference” engine (an if-then decisionmaking process) to create new knowledge and give machines the capacity to make independent choices. This method relied on algorithms, the mathematical rules that guide computers in calculations and problem-solving operations, and focused on algorithmic approaches to advanced computing. Machine learning split off as a separate, struggling, ancillary field. It lagged in delivering breakthrough computer applications until the 1990s.
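For contrast, here is a minimal sketch of the knowledge-based approach described above: explicit facts paired with if-then rules and a simple forward-chaining inference engine. The facts and rule names are hypothetical, chosen only to illustrate the mechanism.

```python
# A minimal sketch (hypothetical example) of a knowledge-based system: known
# facts plus if-then rules, with an inference engine that keeps applying rules
# until no new knowledge can be derived (forward chaining).

facts = {"radar_contact", "no_friendly_transponder"}

# Each rule: if all conditions are already known facts, add the conclusion.
rules = [
    ({"radar_contact", "no_friendly_transponder"}, "possible_threat"),
    ({"possible_threat"}, "alert_operator"),
]

changed = True
while changed:
    changed = False
    for conditions, conclusion in rules:
        if conditions <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # now includes "possible_threat" and "alert_operator"
```

Unlike the data-driven neuron sketched earlier, nothing is learned here; every conclusion traces back to a hand-written rule.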

At the turn of the century, machine learning emerged as an important force in the evolution of computer technology. This resulted from a combination of developments in computer science and statistics, in particular the growing capacity to process “big data” (very large amounts of information). New systems provided “algorithm boosts” to help networks make sense of unprecedented volumes of structured data (information provided in a standardized format) and unstructured data. This more data-driven approach supplanted the earlier knowledge-based systems in the race to build “smarter” computers.

Today, machine learning capabilities are already ubiquitous in many widely deployed technologies, including speech and facial recognition. When the final numbers were tallied, one market research report estimated that worldwide machine learning sales, “including software, hardware, and services, are expected to total $156.5 billion in 2020.” That would represent a more than 12 percent increase over the previous year, remarkable growth given the drag on the global economy from the coronavirus pandemic.

Assessing Future Applications

While machine learning may not be the technology that delivers the most advanced forms of AI in the future, its impact on contemporary developments in the field is unquestioned. And machine learning will no doubt be part of a family of technologies that delivers the next generation of computer services. For example, pairing computers that can make better decisions with sensors that can collect more and better information will produce new capabilities, synchronizing the benefits of advances in both fields. Machine learning will also accelerate the practical applications of emerging technologies, including quantum computing. In the next five years, machine learning-enabled technologies that can deliver reliable, scalable, cost-effective capabilities are going to tsunami the marketplace in many fields in both the private and government sectors.

Where will this lead? One clear option, of course, is computers replacing human decisionmaking, but there are others as well. Thomas Malone, director of the MIT Center for Collective Intelligence, postulates that the governing work structure will be dominated by three types of human-machine collaboration.

All of these work structures might be employed in the national security and defense arena, but none would necessarily be dominant in warfighting. Here is why: machine learning technologies are most effective when they have large amounts of data from which to learn well-established patterns in bounded environments. A good example is traffic systems, where computers can learn from past commuter behavior to manage future traffic flows. The warfighting environment, however (like other areas of national security), tends to be highly complex and chaotic. Moreover, it can involve highly consequential events with very limited data sets; the 9/11 attacks, for example, are a data set of one.
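A rough sketch of that data problem, with invented numbers: a bounded, repetitive environment such as rush-hour traffic yields a usable statistical pattern, while a singular event gives a model almost nothing to generalize from.

```python
# Illustrative only: plenty of repeated observations support a forecast;
# a "data set of one" does not even allow an estimate of variability.

import statistics

# Bounded, repetitive environment: years of 8 a.m. commuter counts on one road.
rush_hour_counts = [410, 395, 420, 405, 415, 398, 412, 407]
forecast = statistics.mean(rush_hour_counts)
spread = statistics.stdev(rush_hour_counts)
print(f"traffic forecast: {forecast:.0f} +/- {spread:.0f} vehicles")

# Rare, singular event: only one comparable observation has ever occurred.
singular_event = [1]
try:
    statistics.stdev(singular_event)  # cannot estimate variability at all
except statistics.StatisticsError as err:
    print("single observation:", err)
```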

The Future of Thinking

If neural networks and machine learning remain the dominant technologies guiding AI, the operational usefulness of AI in battle will be rather limited. If dramatically new approaches to AI emerge, that could well change, profoundly affecting future military competition. But if machine learning remains the guiding technology for the next quarter-century, AI won’t be universally applicable to all aspects of military and national security competition, and it won’t fight our wars for us.

On the other hand, even if AI is not duking it out on the battlefield, there will no doubt be many military-related applications for AI. For example, machine learning tools can be used to create “deepfake” misinformation and propaganda materials, making the fog of war foggier than ever. 

Other factors will also affect how AI is adapted to military purposes. Clearly, developments in the private sector will create a host of new capabilities (like autonomous vehicles) that can be adapted to defense applications. On the other hand, ethical constraints and international agreements may well impose restrictions on AI warfare. Great power competition will also shape the future of warfare, with both the United States and China striving to make new advances in the field. Still, the bottom line remains: You can’t just assume that wars in the foreseeable future will be run and fought by AI.

This piece originally appeared in The National Interest: https://nationalinterest.org/blog/reboot/why-artificial-intelligence-might-not-win-war-180076
