In her recent State of the Union address on September 13, 2023, Ursula von der Leyen, the President of the European Commission, warned that Artificial Intelligence (AI) poses an extinction-level risk to humanity, comparable to threats such as pandemics and nuclear war. She suggested that Europe could take a leading role in creating a global framework for AI governance, similar to the role the Intergovernmental Panel on Climate Change (IPCC) plays for climate science. This assertion has sparked discussions, memes, and debates in the AI community and beyond.
Today’s note provides context for this statement by the Commission president. It’s crucial to note that these discussions are not new; the nuances just tend to get lost when people become excited about the possibilities or potential harms of AI.
Some of the hype surrounding AI safety and existential risks originates from the highly problematic "longtermism" and "effective altruism" camp (which I discussed here). Individuals involved in these movements have revived eugenics and racist ideas within a narrow technological vision. Adherents of these ideas lack diversity: they are predominantly white, male, and privileged, often coming from Silicon Valley and libertarian backgrounds.
So, why is an EU State of the Union address talking about AI? AI, particularly generative AI, has seen explosive growth in recent years, largely due to advances in GPU technology and large language models. The rapid development and attention AI has garnered, including being featured in a State of the Union address, mark a significant departure from traditional discussions framed around “digital” and “cyber”.
Against this backdrop, the European Union has proposed the AI Act (as of this writing, it is still in draft status) to ensure the safety, reliability, accountability, and respect for human rights of AI systems, categorised by risk level. However, governing and regulating AI has proven complex, and comparisons to nuclear regulation suggest the need for more nuanced approaches (see here and here).
Now, back to the imminent doom and gloom. A few more points. First, there is skepticism regarding the pace of AI and Artificial General Intelligence (AGI) development, with claims that scaling AI models may not continue to yield significant progress. AGI refers to the claim that AI will become, or is already becoming, as “intelligent” as a human being. If and when AGI is achieved, it could lead to a singularity: a point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilisation (glory and doom alike).
Concerns about AI going rogue or developing its own agency are criticised for lacking empirical evidence. Instead, the focus should be on defending against AI system vulnerabilities, such as hacking and manipulation, along with data provenance, explainability, and trust. These critiques also support the idea of “slow” or “boring” AI as a way to counter both the hype and the criti-hype.
Another interesting point is that big companies favour global regulations, partly because they are at the table helping to draft these regulations in collaboration with the EU and the U.S. government. You can see the latest news regarding the AI White House summit (held behind closed doors), where some of the big players made voluntary pledges on AI regulation. Palantir is one of them:
Imagine the use of AI for information warfare, as Palantir CEO Alex Karp harped on during a February summit on AI-military tech. The company is already facilitating its data analytics software for battlefield targeting for the Ukrainian military, Karp reportedly said. Still, the CEO did mention that there needs to be “architecture that allows transparency on the data sources,” which should be “mandated by law.” Of course, Palantir hasn’t exactly been open about its own data for any of its many military contracts.
In an email statement to Gizmodo, Palantir USG president Akash Jain said “Today, Palantir, along with other leading AI companies, made a set of voluntary commitments to advance effective and meaningful AI governance, which is essential for open competition and maintaining US leadership in innovation and technology.” The company did not respond to Gizmodo’s questions regarding its ongoing military and government AI contracts.
Also see my previous post on AI and warfare:
Concentrating power in a few AI companies, despite substantial investments in AI risk mitigation, raises concerns about the strategic influence of certain organisations. Regulation, which was initially envisioned to foster innovation, can also be turned into a vehicle to stifle competition.
It’s important to recognise that public discourse and opinions play a significant role in shaping AI’s future. Arguments for risk can contribute to a self-fulfilling prophecy by drawing attention to the idea of AI making consequential decisions independently. To address these risks, we must consider when and how algorithms should be used to enhance human decision-making, weighing ethical and societal implications.
While concerns about AI’s harm and risks are valid, the argument should be grounded in data and evidence. The emphasis should shift from hypothetical “doom” scenarios to practical issues such as AI system vulnerabilities and the impact of AI on decision-making processes.
Further reading:
Silicon Valley’s vision for AI? It’s religion, repackaged (Vox, Sept 7, 2023)
How Much Will the Artificial Intelligence Act Cost Europe? (Information Technology & Innovation Foundation, July 26, 2021)
EU legislators must close dangerous loophole in AI Act (Algorithm Watch)
On Global AI Governance: evidence for the Office of the UN Secretary-General's Envoy on Technology (Joanna Bryson, blog post)
Palantir, the all-seeing US tech company, could soon have the data of millions of NHS patients. My response? Yikes! (The Guardian, Jun 14, 2022)