Given all the viral attention going to generative AI, chatbots, and Microsoft Bing (disclaimer: I still haven’t used Bing), today’s topic brings attention to AI from a different area: the good old game of war. Even though there has been some coverage of this, in my opinion the issue of software and AI in war has not received the attention it needs. In particular, within the context of defending against Russian aggression, supporting Ukraine, and dealing with U.S.-China tension, the discussion of AI has ramped up significantly, and so have the money and resources involved. But is it getting enough critical attention?
The term cyber warfare already exists and usually describes “…the actions by a nation-state or international organization to attack and attempt to damage another nation's computers or information networks”. So this is not a new idea. And traditional military hardware (tanks and missiles) is still in play. However, the use of AI (for instance, autonomous surveillance systems) is gaining traction. The difference this time is that Silicon Valley types are now involved. They are not your usual Lockheed Martin, Raytheon, or Northrop Grumman. The names you need to get used to are from the Tolkien universe. I am talking about three companies: Palantir, Anduril, and Istari.
Software is everywhere — from your toaster to John Deere tractors — and that is not news anymore. But software and AI are already defining the war game driven by these new companies. Here are a few taglines from these companies (with explanations of the Tolkien terms for laughs and giggles). Palantir is named after an indestructible crystal ball from The Lord of the Rings, and the company says: “We build software that empowers organizations to effectively integrate their data, decisions, and operations”. Anduril is named after a fictional sword from Middle-earth, and the mission of the company is “Transforming defense capabilities with advanced technology”. The new kid on the block is Istari — a Tolkienian name for wizards. Istari, backed by former Google CEO Eric Schmidt, promises to be such a wizard: “With Istari, simple and secure collaboration finally comes to software-enabled physical systems, empowering your team with internet-like possibilities”. There you go!
And the relationship between government and private enterprise is as old as war itself. That is also not going to change anytime soon. The chart below shows the contracts Palantir received from the U.S. government (see USAspending.gov, an official open data source of federal spending information):
Similar contracts are happening in Australia. Palantir is also diversifying: the company recently secured a deal with the U.K. government to build a system for the National Health Service.
The players are changing. A new workforce trained as programmers and computer scientists is now part of this new warfare industry. The code that delivers cute cat pictures can now probably help destroy a village in some unnamed territory far away. Eric Schmidt’s new venture Istari, which is “Building the Perfect AI War-Fighting Machine”, will be at the forefront of hiring this new workforce. He wants the Pentagon to work like a Silicon Valley tech firm:
Expensive military hardware like a new tank undergoes rigorous testing before heading to the battlefield. A startup called Istari, backed by Eric Schmidt, the former CEO of Google and chair of Alphabet, reckons some of that work can be done more effectively in the metaverse.
Istari uses machine learning to virtually assemble and test war machines from computer models of individual components, such as the chassis and engines, that are usually marooned on separate digital drawing boards. It may sound dull, but Schmidt says it can bring a dose of tech industry innovation to US military engineering. "The Istari team is bringing internet-type usability to models and simulations," he says. "This unlocks the possibility of software-like agility for future physical systems—it is very exciting."
This new focus on software and AI has very interesting implications for these companies’ relationship with the government (in this case, the U.S. government). The Economist wrote about this recently (February 2023):
Like a prime contractor, Anduril only sells to military customers. But unlike defence giants such as Lockheed Martin and Northrop Grumman, it does so while taking all the research-and-development (R&D) risk on its own shoulders.
Anduril is more focused on hardware, but software and AI play a big role. One of its components, called Lattice, uses “technologies like sensor fusion, computer vision, edge computing, and machine learning and artificial intelligence to detect, track, and classify every object of interest in an operator's vicinity”. These are active areas of research in every major university (at least in the U.S. and Europe). What does this mean for computer science research?
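To make the jargon a bit more concrete, here is a minimal, purely illustrative sketch of what “sensor fusion” and “tracking” mean at the textbook level: a one-dimensional Kalman filter that fuses noisy position readings from two hypothetical sensors into a single estimate of an object’s position and velocity. This has nothing to do with Lattice’s actual implementation (which, as far as I know, is not public); every name and number below is made up for illustration.

```python
# Toy example: fusing two noisy position sensors with a 1-D Kalman filter.
# This illustrates the generic "sensor fusion" / "tracking" idea only;
# it is NOT Anduril's or Lattice's code. All values are invented.
import numpy as np

dt = 1.0                                  # time step (seconds)
F = np.array([[1, dt], [0, 1]])           # constant-velocity motion model
H = np.array([[1, 0], [1, 0]])            # both sensors measure position only
Q = np.eye(2) * 0.01                      # process noise covariance
R = np.diag([4.0, 1.0])                   # sensor noise: sensor 1 is noisier
x = np.array([0.0, 0.0])                  # initial state: [position, velocity]
P = np.eye(2)                             # initial state covariance

rng = np.random.default_rng(0)
true_pos, true_vel = 0.0, 1.5             # hypothetical object moving at 1.5 m/s

for step in range(10):
    true_pos += true_vel * dt
    # Simulated measurements from two sensors with different noise levels.
    z = np.array([true_pos + rng.normal(0, 2.0),
                  true_pos + rng.normal(0, 1.0)])

    # Predict: propagate the state with the motion model.
    x = F @ x
    P = F @ P @ F.T + Q

    # Update: fuse both measurements into one state estimate.
    y = z - H @ x                         # innovation
    S = H @ P @ H.T + R                   # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P

    print(f"t={step + 1}: true={true_pos:5.2f}  est pos={x[0]:5.2f}  est vel={x[1]:4.2f}")
```

The point of the toy is only that the core techniques are standard, openly taught material: the same filtering and estimation methods that appear in any robotics or signal processing course are what, at much larger scale, end up inside military surveillance products.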
Maybe a parallel can be drawn with the role of physics during WWII. This excerpt is from the American Physical Society's website:
As soon as nuclear fission was discovered in Europe, it became apparent that if a way could be found to release its energy in a bomb, the course of the war would be altered. In America a number of physicists, many of European origin, worried that Hitler might acquire such a weapon and persuaded the normally pacifistic Albert Einstein to warn President Franklin D. Roosevelt. In an urgent letter dated August 2, 1939, he explained the danger by writing: "It is conceivable ... that extremely powerful bombs of a new type may thus be constructed." Einstein's letter did not have an immediate effect, but eventually helped to persuade the United States to begin the monumental task of building an atom bomb.
The United States made sure it could build the bomb before anyone else. Are we looking at a similar, or even deadlier, type of destruction now aided by software and AI? If physicists during WWII failed to stop the atomic bomb, can scientists now stop the misuse of AI? Given the historical track record of science and warfare, I am not too optimistic. The threat of nuclear war has not gone away, but now it may be driven by a piece of code written by some random programmer and sold by a company to governments for millions. And as we all know, the war in Ukraine has brought this issue to the forefront again.
Most recently, at an American Physical Society meeting, the climatologist Alan Robock painted the picture of “nuclear winter”. He talked about the scientific models used to understand the disaster scenarios:
As the story goes, Robock was part of a group of U.S. and Soviet scientists in the 1980s who predicted the consequences of nuclear war using scientific models. The work introduced the public to the concept of nuclear winter. If countries were to use nuclear weapons again in war, the models predicted, the weapons would not only directly kill millions, but also cause firestorms whose smoke would block sunlight. The resulting climate change would trigger famine and death around the world.
I guess this is where things get tricky. The same scientific models (more and more of which now involve AI) that help us understand the consequences can be used to play out war strategies as well. There are familiar arguments for this scenario: guns don’t kill people, people do; science and software are not to blame, only how we use them, etc. But as I have mentioned here before, artifacts have politics too. I don’t think enough attention is being paid right now to what kind of software artifacts we are creating, and whether they are for the betterment of the world.
During WWII, scientists warned that Hitler could build a bomb with the available scientific knowledge. Now the available scientific knowledge can help build software to win wars, and the companies selling that software are invoking the war in Ukraine in a very similar way. I hope I am not the only one seeing this connection. However, in my initial research I have not found much discussion of it. I would love to hear from others about research and reporting on the use of software in war and the parallel with the role of physics in WWII.
Alex Karp (co-founder and CEO of Palantir) recently wrote the following in Palantir’s official blog, responding to questions of ethics:
We acknowledge that the ethical challenges that the use of our software raises are significant. But the stakes could not be higher, and the costs of inaction are real.
Our collective experiment with democratic governance remains a remarkably fragile one and requires a shared commitment to something more than the self.
Those using our platforms in the defense and intelligence context, for reconnaissance, targeting, and other purposes, require the best weapons that we can build.
And we have never been inclined to wait on the sidelines while others risk their lives.
The software platforms that we have built are used by soldiers and intelligence operatives in the United States as well as allied nations in Europe and around the world.
Some companies find ways to work with our adversaries. We have beliefs and have chosen a side.
Those on the front lines, and in the arena, will bend the arc of history.
And our software is in the fight.
This sentence is chilling → “our software is in the fight”. And, similar to the nuclear weapons argument, I see echoes of the past: “costs of inaction”, “we have chosen a side”. Right?
Government regulators are paying attention to AI and warfare, but I am not sure whether that is enough. For example, on 15 and 16 February the government of the Netherlands hosted a meeting on “Responsible Artificial Intelligence in the Military Domain”.
Countries including the United States and China called Thursday for urgent action to regulate the development and growing use of artificial intelligence in warfare, warning that the technology “could have unintended consequences.”
A two-day meet in The Hague involving more than 60 countries took the first steps towards establishing international rules on the use of AI on the battlefield, aimed at establishing an agreement similar to those on chemical and nuclear weapons.
“AI offers great opportunities and has extraordinary potential as an enabling technology, enabling us among other benefits to make powerful use of previously unimaginable quantities of data and improving decision-making,” the countries said in a joint call to action after the meeting.
But they warned: “There are concerns worldwide around the use of AI in the military domain and about the potential unreliability of AI systems, the issue of human involvement, the lack of clarity with regards to liability and potential unintended consequences.”
Alex Karp was one of the participants at this event, where he said the conversation about AI in war has transformed from a "highly erudite ethics discussion" into a top concern since the start of the war in Ukraine. I have not seen anything in the program that deals with the public-private partnerships and the profit motives of companies like Palantir, Anduril, and Istari as the war rages on. War is good for business, right? Software and AI could again be seen as neutral tools, but we do need an ethics framework that scrutinizes software’s design, development, and deployment when it is used in situations that can have dire consequences.
The Economist, in the same article mentioned above, put a positive spin on all this. It cast these Silicon Valley companies as “David” and the traditional military-industrial complex as “Goliath”. How cute!
Palantir has tentatively started to achieve that status [profitable], but with a “dual-use” business model. It works for private clients as well as governments (albeit only ones friendly with America). Both on the battlefield and in business, its software cuts through the thickening fog of data to enable quick decision-making. Other dual-use firms are increasingly winning defence contracts. The Pentagon’s Defence Innovation Unit, set up in 2015, supports a big increase in the use of commercial technologies, such as AI, autonomy and integrated systems, to speed up the responsiveness to global threats.
Ukraine is a good testing ground. It is also a good simile. The struggle of tech Davids taking on America’s military-industrial Goliath is not dissimilar to tech-enabled Ukrainian troops battling the turgid might of Russia.
It also raises the question: how do we learn ethics as scientists, whether as physicists or computer scientists? (The short answer is: we don’t.) Just as most physicists did not study ethics during the 1920s and 1930s, they are not doing it now either:
Ethical violations in physics are just as prevalent now as they were 20 years ago, finds a survey of early-career physicists and graduate students — even though awareness of ethics policies has become more widespread.
And the same goes for computer and data science:
In our study, we compared undergraduate data science curricula with the expectations for undergraduate data science training put forth by the National Academies of Sciences, Engineering and Medicine. Those expectations include training in ethics. We found most programs dedicated considerable coursework to mathematics, statistics and computer science, but little training in ethical considerations such as privacy and systemic bias. Only 50% of the degree programs we investigated required any coursework in ethics.
The bottom line is that AI and software in warfare need to be widely discussed before we have another atomic-bomb-scale disaster. But the discussion around ethics is tricky and fraught with ideological and political issues. These discussions take time, and so does reforming education and training programs. But war and profit wait for nobody!
I will leave you now with the words of Bob Dylan, from “With God on Our Side”:
Through many a dark hour
I've been thinkin' about this
That Jesus Christ was
Betrayed by a kiss
But I can't think for you
You'll have to decide
Whether Judas Iscariot
Had God on his side.
So now as I'm leavin'
I'm weary as Hell
The confusion I'm feelin'
Ain't no tongue can tell
The words fill my head
And fall to the floor
That if God's on our side
He'll stop the next war