Horse and astronaut
Our current state of AI?
“What nearly everyone got wrong about DALL-E & Google’s Imagen, and why when it comes to AI hype, you still can't believe what you read”
A few notes here:
Millions of dollars are now being poured into AI research and deployment, and this will not slow down (despite scandals) in the near future.
Our regulations and government policy have yet to catch up with this momentum.

Leon Derczynski (@LeonDerczynski), sharing the piece “I don't really trust papers out of ‘Top Labs’ anymore” (https://t.co/KgRKpCvlnD), put it this way:

“AI/ML research at places like Google and OpenAI is based on spending absurd amounts of money, compute, and electricity to brute force arbitrary improvements. The inequality, the trade offs, the waste—all for incremental progress toward a bad future.” https://t.co/wbySsnSHyS https://t.co/t5ORZ2DmM2
Our focus and priorities will revolve around shiny astronauts and horses. Sorry to be pessimistic, but it will probably take another year or two for this AI hype to cool down.
People like Gary Marcus and others are looking critically at this hype and pointing out the crux of the matter. We should listen to them more. Maybe that will create momentum for an AI that we can trust, an AI that can benefit our society.
The problem is not that AI can draw all these beautiful and detailed images. The problem is that “we” (the companies, media, investors, users, academia… we are all in it) take these accomplishments as a sign of “intelligence”. It is easy to be swept away by the snake oil merchants of AI and technology. But we need to try harder. See my post on this: “From Snake Oil to Theranos”. Some responses to Gary's valid criticism look like those of a religious person being told that God doesn't exist.

Gary Marcus (@GaryMarcus) announced his piece like this: “Horse rides astronaut: What nearly everyone got wrong about @GoogleAI's #Imagen (and DALL-E), and why when it comes to AI hype, you still can't believe what you read. A #longread into evaluating claims about deep learning, with a cameo from Clever Hans.” https://t.co/ZacV6jKxJ6
Here’s another quote from Gary’s piece:
Since we know that Imagen can draw images of horses riding astronauts (if the instructions are specific enough) we know that the failure to draw a horse riding an astronaut given the prompt “a horse riding an astronaut” can only have one explanation: the system doesn’t understand what the prompt “a horse riding an astronaut” means. To borrow a phrase from DeepMind’s Nando de Freitas, it’s “Game Over” for that particular hypothesis.
Perhaps with enough context and training data, the system can figure out the difference between “horse riding an astronaut” and “astronaut riding a horse”. Ok, fine. But we need a healthy, robust tech ecosystem where innovative AI applications and critique can coexist. Currently, this is in the hands of a few big companies. Are we ready to let them decide the next course of an AI technology that will impact humanity in such a profound way? I hope not.
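The word-order point can be made concrete with a toy sketch (this is an illustration of the general argument, not a claim about how Imagen or DALL-E actually encode text): any representation that treats a prompt as an unordered bag of words literally cannot distinguish the two prompts, because they contain exactly the same tokens.

```python
# Toy illustration: a bag-of-words representation discards word order,
# so "a horse riding an astronaut" and "an astronaut riding a horse"
# collapse into the same representation. (Hypothetical probe, not a
# description of any real text-to-image model's encoder.)
from collections import Counter

def bag_of_words(prompt: str) -> Counter:
    """Order-insensitive representation: just lowercase token counts."""
    return Counter(prompt.lower().split())

p1 = "a horse riding an astronaut"
p2 = "an astronaut riding a horse"

# Both prompts yield identical token counts, so any system relying on
# such a representation cannot tell who is riding whom.
print(bag_of_words(p1) == bag_of_words(p2))  # → True
```

Distinguishing the two prompts requires a representation sensitive to syntax (who does what to whom), which is exactly what Gary's probe is testing for.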
That is all for today.