

Last week, the big AI news was the resignation of Geoff Hinton, often referred to as the "godfather of AI," from Google, as well as his subsequent interview in the New York Times. This departure and public criticism come in the wake of the success of OpenAI and tools like ChatGPT, which have become increasingly mainstream. Hinton's involvement in the field dates back to 1972, when he began working on computer science problems that would eventually evolve into what we now know as neural networks. In 2013, he sold his company, which specialised in image classification, to Google, and one of the founders of OpenAI, Ilya Sutskever, was his graduate student. There has already been extensive commentary on Hinton's actions (see here from Gary Marcus and a report in The Guardian), so I won't delve into the entire debate here. However, it is worth pointing out that several ex-Googlers have mentioned that Hinton remained silent on similar issues a few years ago. So why all the fuss now?
Another prominent ex-Googler, Margaret Mitchell, who co-led the company’s ethical AI team, criticized Hinton for not denouncing Google’s 2020 firing of her coleader Timnit Gebru, a leading researcher who had spoken up about AI’s risks for women and people of color.
“This would’ve been a moment for Dr. Hinton to denormalize the firing of [Gebru],” Mitchell tweeted on Monday. “He did not. This is how systemic discrimination works.”
Gebru, who is Black, was sacked in 2020 after refusing to scrap a research paper she coauthored about the risks of large language models to multiply discrimination against marginalized people. “White supremacist and misogynistic, ageist, etc. views are overrepresented in the training data . . . [and] setting up models trained on these datasets to further amplify biases and harms,” the paper noted, could quickly lead to a “feedback loop.” The paper also pointed out the engines’ outsize carbon emissions, something that “doubly punishes marginalized communities” in the path of climate change.
Hinton’s main points from the interviews are as follows. An excerpt from The Guardian:
This development, he argues, is an unavoidable consequence of technology under capitalism. “It’s not that Google’s been bad. In fact, Google is the leader in this research, the core technical breakthroughs that underlie this wave came from Google, and it decided not to release them directly to the public. Google was worried about all the things we worry about, it has a good reputation and doesn’t want to mess it up. And I think that was a fair, responsible decision. But the problem is, in a capitalist system, if your competitor then does do that, there’s nothing you can do but do the same.”
He decided to quit his job at Google, he has said, for three reasons. One was simply his age: at 75, he’s “not as good at the technical stuff as I used to be, and it’s very annoying not being as good as you used to be. So I decided it was time to retire from doing real work.” But rather than remain in a nicely remunerated ceremonial position, he felt it was important to cut ties entirely, because, “if you’re employed by a company, there’s inevitable self-censorship. If I’m employed by Google, I need to keep thinking, ‘How is this going to impact Google’s business?’ And the other reason is that there’s actually a lot of good things I’d like to say about Google, and they’re more credible if I’m not at Google.”
and from the NYT:
Down the road, he is worried that future versions of the technology pose a threat to humanity because they often learn unexpected behavior from the vast amounts of data they analyze. This becomes an issue, he said, as individuals and companies allow A.I. systems not only to generate their own computer code but actually run that code on their own. And he fears a day when truly autonomous weapons — those killer robots — become reality.
“The idea that this stuff could actually get smarter than people — a few people believed that,” he said. “But most people thought it was way off. And I thought it was way off. I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”
There are several key points to consider in this discussion. Firstly, Geoff Hinton is a highly influential and respected figure in the field of AI. His dissenting views are not new; he previously worked at Carnegie Mellon University but relocated to Canada because he was unwilling to accept Department of Defense funding. In the 1980s and early 1990s, much of the major computer science and AI research was funded by the U.S. military. Hinton has long been vocal in his opposition to the use of AI for war, and while his dissent may be late, the recent media attention and regulatory scrutiny may yield positive results. As he stated in his interviews, thoughtful and effective regulation may be the only way to prevent the harmful use of AI. Additionally, Hinton's concern about the internet being flooded with false information, and the average person's inability to discern the truth, predates the emergence of OpenAI and ChatGPT. In fact, classification-based assessment and decision-making have long been identified as a problem, and Hinton's work has been fundamental to building such classification systems, as the sketch below illustrates.
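To make that last point a little more concrete, here is a minimal sketch of what such a neural-network classification system looks like in code. It is written in Python and assumes PyTorch is available; the TinyClassifier model, its layer sizes, and the random stand-in image are purely illustrative, not a reconstruction of anything Hinton or Google actually built. The point is simply the shape of the pipeline: an input goes in, a label and a confidence score come out, and downstream decisions get made on that output.

```python
# A minimal, illustrative sketch (assuming PyTorch is installed) of the kind of
# neural-network classifier that descends from Hinton's line of work.
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    """A toy image classifier: flatten pixels, one hidden layer, class scores."""

    def __init__(self, n_pixels: int = 28 * 28, n_classes: int = 10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),               # (batch, 28, 28) -> (batch, 784)
            nn.Linear(n_pixels, 128),   # learned weights, adjusted during training
            nn.ReLU(),
            nn.Linear(128, n_classes),  # one raw score per class
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = TinyClassifier()
image = torch.rand(1, 28, 28)            # stand-in for a real image
logits = model(image)                    # raw class scores
probs = torch.softmax(logits, dim=-1)    # the "confidence" behind a decision
print(probs.argmax(dim=-1).item())       # the label the system would act on
```

Every concern in the paragraph above, from biased training data to opaque decisions, ultimately attaches to outputs like that final label.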
Furthermore, the role physics played during WWII in the development of the atomic bomb is reminiscent of the current situation with AI. Hinton alludes to this in his interview with the NYT and brings up Robert Oppenheimer, highlighting the potential dangers of AI and the need for responsible oversight.
Dr. Hinton said that when people used to ask him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the U.S. effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”
He does not say that anymore.
It's important to note that AI and the atomic bomb are not equivalent, and historical trends do not necessarily repeat themselves. However, as a skeptic, I find it hard not to question why Hinton didn't speak out earlier, given his position as a respected AI pioneer.
As a side note, it is also interesting to consider Hinton's family and academic lineage. Hinton's great-great-grandfather was George Boole (as in Boolean logic). Another atomic connection is Hinton's cousin, Joan Hinton, the only female nuclear physicist and one of the few women scientists to work on the Manhattan Project. She later defected to China.
When comparing AI to the atomic bomb threat, we should be aware of the limitations of historical analogies. While scientists' objections to the atomic bomb did not prevent the US from developing it, this doesn't necessarily mean that we should assume a similar outcome with AI. Also, it's important to remember that there are fundamental and sometimes flawed concepts underlying the hype around AI, such as the idea that computers can learn like the brain. This is still a controversial notion, as we have yet to fully understand how the brain works, and there are debates about whether the brain operates like a computer. Thirty to forty years ago, one of the major drivers behind AI and computer science research was to make computers function like the human brain, even without a complete understanding of how the brain works. We do not seem to talk about this enough (see this 2016 article from Robert Epstein: “Your brain does not process information, retrieve knowledge or store memories. In short: your brain is not a computer”).
While there is certainly cause for concern around the potential dangers of AI, it's also crucial to have informed discussions about the underlying assumptions and limitations of the technology. In this sense, perhaps we need better analogies and more nuanced debates about the implications of AI.
The truth is, we find ourselves already in this generative hell, one from which there seems to be no escape. We may concede that these bots and AI engines lack sentience, and that they cannot be held morally accountable, yet we still hear tales of their remarkable feats and revel in their accomplishments. But the real danger, the so-called "existential threat," lies in our ability to navigate this brave new world without sacrificing the things that make us human: the ability to read genuine works of literature, to listen to music created by a living, breathing, suffering, flawed human being.
It's not unlike the struggle against the atomic bomb, a battle fought on the frontiers of science and morality. We may hope for a world without nuclear weapons, but it has yet to materialise. Perhaps now, with the challenge of AI before us, we can strive for a better outcome. We must find a way to temper the enthusiasm for this technology with a sober recognition of its limitations, and ensure that it does not undermine the things that make us who we are. Only then can we hope to find a path forward that truly benefits us all.