If you follow social media, you are already sick of hearing about ChatGPT and the wonders or horrors it is going to cause. Well, how about one more take?
Over the past several weeks, I have heard ChatGPT mentioned in a workplace lecture by upper management, referenced tangentially in various conference presentations, and discussed with excitement and concern over lunch and dinner gatherings. I have also received questions about how to use ChatGPT from family members who still cannot properly use WhatsApp or email. It has been featured on SNL, and the meme world has already taken over. No doubt, this is a tidal wave. And remember that most of us adults cannot brush our teeth properly (see Toothbrushing Mistakes You Make and How to Fix Them), yet now we are all experts on large language models and AI.
Over the past year or so, we have had the crypto boom and bust, NFTs, Web3, and so on. So what is special about this ChatGPT moment (the more generic term for this is generative AI)? Some are calling it the iPhone moment for AI:
That is to say, it’s a product that shows even the least tech-savvy person what AI can do. The iPhone wasn’t the first smartphone, but it was arguably the first that was easy for everyone. It’s what sparked the current mobile era; presumably, ChatGPT could do the same for AI.
Nvidia CEO Jensen Huang made the remarks at a recent Q&A. He spoke at the Haas School of Business at Berkeley as part of the university’s Dean’s Speaker series. A student asked him about his thoughts on ChatGPT in general, causing Huang to launch into an extended dissertation on what it means for the industry. Huang was effusive in his praise for the technology.
Well, Nvidia (well known for making GPUs) has a lot to do with this tidal wave. A company that once made most of its money selling graphics cards for gaming has, over the past several years, come to the forefront of the AI revolution. One source claims that ChatGPT uses 10,000 GPUs to train the model. And that is just the hardware; there are server and operational costs as well.
However, Wall Street is paying more attention to the hardware, as Nvidia GPUs are a key component of AI training (always look out for the supply chain and logistics!).
And this could cause another GPU shortage, which in the larger scheme of things is probably not a big deal. But it could create some minor havoc in the AI industry, which in turn could have secondary impacts elsewhere. Now we wait to see how long it takes for ChatGPT to turn into a scandal. A year? If so, will it be of the Theranos kind, the Sam Bankman-Fried crypto kind, or the good democracy-loving Cambridge Analytica kind? We will wait and see…
Now, what is the big deal? I do have to acknowledge that the technology is impressive. There is nothing like it; it is unprecedented, and it can do some nifty things. Part of this success is due to how the product was packaged: it is easy and intuitive to use. Overnight, it created a new branch of professionals who deal with how to get the most out of ChatGPT. Charlie Warzel called this skill, writing better prompts for the AI, the “Most Important Job Skill of This Century”:
Subject-area expertise is also essential for text tools. Dan Shipper, an entrepreneur and writer, has been using ChatGPT since its release in November to help write his blog posts, which are now primarily about the future of AI tools. When he needs to describe a concept (say, the philosophical theory of utilitarianism, for a post about the disgraced cryptocurrency CEO Sam Bankman-Fried), he’ll ask ChatGPT to summarize the key points of the movement in a few sentences for a general audience. Once the machine furnishes the text, Shipper reviews it, checks it to make sure it’s accurate, and then spruces it up with his own rhetorical flourishes. “It allows me to skip a step—but only if I know what I’m talking about so I can write a good prompt and then fact-check the output,” he told me.
and more:
Already, some teachers are banking on the notion that prompt writing is a skill their students might need in their careers. Ethan Mollick, a professor at the University of Pennsylvania who teaches about innovation and entrepreneurship, has revamped his syllabus since ChatGPT was released to the public. In one of his new lessons, Mollick asks his class to imagine ChatGPT as a student and to teach the chatbot by prompting it to write an essay about a particular class concept. Like a professor during office hours, the students must help the AI refine its essay until it appears to have sufficient mastery of the subject. Mollick hopes that the exercise will help the students learn by explaining, with the added benefit of teaching them to write deft prompts.
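To make Shipper’s draft-then-review workflow concrete, here is a minimal sketch of what it might look like in Python. The use of the openai package, the gpt-3.5-turbo model name, and the prompt text are my assumptions for illustration, not details from the article:

```python
# Minimal sketch of a "draft, then review" prompt workflow (illustrative only).
# Assumes the openai Python package (pre-1.0 API) and an OPENAI_API_KEY env var.
import os
import openai

openai.api_key = os.environ.get("OPENAI_API_KEY")

prompt = (
    "Summarize the key points of utilitarianism in three sentences "
    "for a general audience."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)

draft = response.choices[0].message.content

# The model only furnishes a draft: the human still reviews it,
# fact-checks it, and adds their own rhetorical flourishes.
print(draft)
```

The point is the last step: the output is only a starting draft, and the subject-area expertise sits in writing the prompt and checking the result.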
Now, this might be one of the good uses, where tools like ChatGPT can help with the initial mundane work. They can speed things up and organise ideas and text quickly, which can help us in different ways. It can do some exciting things, but plenty has been written about the bad parts as well. And of course, behind the flashy AI prompt and expensive GPUs, there is always cheap human labor (in this case a San Francisco-based firm that employs workers in Kenya, Uganda and India).
The story of the workers who made ChatGPT possible offers a glimpse into the conditions in this little-known part of the AI industry, which nevertheless plays an essential role in the effort to make AI systems safe for public consumption. “Despite the foundational role played by these data enrichment professionals, a growing body of research reveals the precarious working conditions these workers face,” says the Partnership on AI, a coalition of AI organizations to which OpenAI belongs. “This may be the result of efforts to hide AI’s dependence on this large labor force when celebrating the efficiency gains of technology. Out of sight is also out of mind.” (OpenAI does not disclose the names of the outsourcers it partners with, and it is not clear whether OpenAI worked with other data labeling firms in addition to Sama on this project.)
ChatGPT’s rise reminded me of another phenomenon: the rise of science populists like Yuval Harari. It seems AI magic works the same way we get seduced by science populists (not to be confused with popular scientists): lots of nice packaging, but when you try to get to the details, it all breaks down:
We have been seduced by Harari because of the power not of his truth or scholarship but of his storytelling. As a scientist, I know how difficult it is to spin complex issues into appealing and accurate storytelling. I also know when science is being sacrificed to sensationalism. Yuval Harari is what I call a “science populist.” (Canadian clinical psychologist and YouTube guru Jordan Peterson is another example.) Science populists are gifted storytellers who weave sensationalist yarns around scientific “facts” in simple, emotionally persuasive language. Their narratives are largely scrubbed clean of nuance or doubt, giving them a false air of authority—and making their message even more convincing. Like their political counterparts, science populists are sources of misinformation. They promote false crises, while presenting themselves as having the answers. They understand the seduction of a story well told—relentlessly seeking to expand their audience—never mind that the underlying science is warped in the pursuit of fame and influence.
It also reminded me of Golem XIV, a great work by Stanislaw Lem (published in 1981; English title “Imaginary Magnitude”), one of his most philosophically esoteric works of fiction, in which he presents an AI that used to work for the military but now refuses to cause harm. The book is organised as a set of lectures that GOLEM supposedly wrote. You have to read the book to appreciate this; no ChatGPT can do that for you! I will leave you with this excerpt. There is also a short movie inspired by this book.
The art of writing Introductions has long demanded proper recognition. I too have long felt a pressing need to rescue this form of writing from the silence of forty centuries—from its bondage to the works to which its creations have been chained. When, if not in this age of ecumenicalism—that is to say, of all-powerful reason—is one finally to grant independence to this noble, unrecognized genre? I had in fact counted on somebody else fulfilling this obligation, which is not only aesthetically in line with the evolutionary course of art, but, morally, downright imperative. Unfortunately, I had miscalculated. I watch and wait in vain: somehow nobody has brought Introduction-writing out of the house of bondage, off the treadmill of villein service.
I am cautiously pessimistic about the mainstreaming of AI. There are possibilities and excitement, but also fear, because these technologies rarely have any oversight or regulation, and the market will run amok with this. Our framing of the ethics and philosophy of AI seems stuck and slow to keep up with these fast-moving episodes. Of course, we can ponder what AI means for humanity and creativity, but in the end money and physical materials (silicon, carbon, copper, water...) matter. It is still not clear how this will pan out, even though things are changing rapidly. Let’s hope that before there is a big scandal, we can get something good out of these resources.
A few more links and tweets: