The S.A.D Newsletter

The mainstreaming of AI

The good, the bad, and the ugly

Sharif Islam
Feb 15, 2023

If you are following social media, you are already sick of hearing about ChatGPT and the wonders or horrors it is going to cause. Well, how about one more take?

Over the past several weeks, I have heard ChatGPT mentioned in a workplace lecture by the higher-up management team, referenced tangentially in different conference presentations, and discussed with excitement and concern over lunch and dinner gatherings. I have also received questions about how to use ChatGPT from family members who still cannot properly use WhatsApp or email. It has been featured on SNL, and the meme world has already taken over. No doubt, this is a tidal wave. And remember that most of us adults cannot brush our teeth properly (see Toothbrushing Mistakes You Make and How to Fix Them), yet now we are all large language model and AI experts.

Orthy (@adam_orth), Feb 7, 2023:
There is a profound, generational transformation happening on LinkedIn right now: [image]

Over the past year or so, we had the crypto boom and bust, NFTs, Web3, and so on. So what is special about this ChatGPT moment (the more generic term is generative AI)? Some are calling it the iPhone moment for AI:

That is to say, it’s a product that shows even the least tech-savvy person what AI can do. The iPhone wasn’t the first smartphone, but it was arguably the first that was easy for everyone. It’s what sparked the current mobile era; presumably, ChatGPT could do the same for AI.

Nvidia CEO Jensen Huang made the remarks at a recent Q&A. He spoke at the Haas School of Business at Berkeley as part of the university’s Dean’s Speaker series. A student asked him about his thoughts on ChatGPT in general, causing Huang to launch into an extended dissertation on what it means for the industry. Huang was effusive in his praise for the technology.

Well, Nvidia (best known for making GPUs) has a lot to do with this tidal wave. A company that made most of its money selling graphics cards for gaming devices has, over the past several years, come to the forefront of the AI revolution. One source claims that ChatGPT used 10,000 GPUs to train the model. And that is just the hardware; there are server and operational costs as well.

Naveen Rao (@NaveenGRao), Feb 13, 2023:
What many don’t realize is, on the other side of a ChatGPT session is a $150k server that costs ~$16/hr. The server goes 24hrs/day and can time slice efficiently. But the economics of scale aren’t that different from a human. AI won’t be free like search was, at least not for… https://t.co/se7e2jNlPm
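
To put those numbers in perspective, here is a rough back-of-envelope sketch. The per-hour server cost comes from the tweet above; the fleet size is purely my own illustrative assumption, not a figure OpenAI has published:

```python
# Back-of-envelope inference cost sketch, using the figures from the tweet above.
# The fleet size is a hypothetical assumption for illustration only.

server_cost_per_hour = 16   # USD per hour for one ~$150k server (from the tweet)
hours_per_day = 24          # the server "goes 24hrs/day"
servers = 10_000            # hypothetical fleet size, purely illustrative

daily_cost_one_server = server_cost_per_hour * hours_per_day
daily_cost_fleet = daily_cost_one_server * servers

print(f"One server: ~${daily_cost_one_server:,}/day")      # ~$384/day
print(f"{servers:,} servers: ~${daily_cost_fleet:,}/day")  # ~$3,840,000/day
```

Even with generous hand-waving, serving a chatbot at this scale is a multi-million-dollar-a-day operation, which is why the tweet argues AI will not be free the way search was.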

However, Wall Street is paying more attention to the hardware, as Nvidia GPUs are a key component for AI training (always keep an eye on the supply chain and logistics!).

[Chart: NVIDIA stock price. ChatGPT was released at the end of November.]

And this could cause another GPU shortage, which in the larger scheme of things is probably not a big deal. But it could cause some minor havoc in the AI industry, which in turn could have secondary impacts elsewhere. Now we wait to see how long it takes for ChatGPT to turn into a scandal. A year? If so, will it be of the Theranos kind, the Sam Bankman-Fried crypto kind, or the good old democracy-loving Cambridge Analytica kind? We will wait and see…

Tom Fishburne (@tomfishburne), Jan 8, 2023:
“AI Tidal Wave” - new cartoon and post on the impact of ChatGPT marketoonist.com/2023/01/ai-tid… Helpful analogy from Dharmesh Shah: “Netscape was to the Internet what ChatGPT is to Artificial Intelligence.” #marketing #ai #cartoon #marketoon [image]

Now, what is the big deal? I do have to acknowledge that the technology is impressive. There is nothing like it. It is unprecedented. It can do some nifty things. Part of this success is due to how the product was packaged: it is easy and intuitive to use. Overnight, it created a new branch of professionals who deal with how to get the most out of ChatGPT. Charlie Warzel called knowing how to write better prompts for the AI the “Most Important Job Skill of This Century”:

Subject-area expertise is also essential for text tools. Dan Shipper, an entrepreneur and writer, has been using ChatGPT since its release in November to help write his blog posts, which are now primarily about the future of AI tools. When he needs to describe a concept (say, the philosophical theory of utilitarianism, for a post about the disgraced cryptocurrency CEO Sam Bankman-Fried), he’ll ask ChatGPT to summarize the key points of the movement in a few sentences for a general audience. Once the machine furnishes the text, Shipper reviews it, checks it to make sure it’s accurate, and then spruces it up with his own rhetorical flourishes. “It allows me to skip a step—but only if I know what I’m talking about so I can write a good prompt and then fact-check the output,” he told me.

and more:

Already, some teachers are banking on the notion that prompt writing is a skill their students might need in their careers. Ethan Mollick, a professor at the University of Pennsylvania who teaches about innovation and entrepreneurship, has revamped his syllabus since ChatGPT was released to the public. In one of his new lessons, Mollick asks his class to imagine ChatGPT as a student and to teach the chatbot by prompting it to write an essay about a particular class concept. Like a professor during office hours, the students must help the AI refine its essay until it appears to have sufficient mastery of the subject. Mollick hopes that the exercise will help the students learn by explaining, with the added benefit of teaching them to write deft prompts.
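
To make that workflow concrete, here is a minimal sketch of the prompt-then-fact-check loop described above, assuming access to OpenAI's Python library (the model name, prompt wording, and API style are my own illustrative choices, and the exact interface depends on the library version):

```python
# A minimal sketch of the "draft with the model, then fact-check" workflow.
# Model name and prompt are illustrative assumptions, not the exact ones used
# by the people quoted above.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = (
    "Summarize the key ideas of utilitarianism in three sentences "
    "for a general audience, avoiding jargon."
)

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
    temperature=0.3,  # keep the summary relatively conservative
)

draft = response["choices"][0]["message"]["content"]
print(draft)

# The human steps are the ones that matter: review the draft, fact-check
# every claim, and rewrite it in your own voice before publishing.
```

The skill Warzel is describing lives almost entirely outside the code: choosing what to ask, judging whether the answer is accurate, and deciding how much of it to keep.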

[Image: Il buono, il brutto, il cattivo (The Good, the Bad and the Ugly): https://en.wikipedia.org/wiki/The_Good,_the_Bad_and_the_Ugly]

Now, this might be one of the good parts: tools like ChatGPT can help with the initial, mundane work. They can speed things up and organise ideas and text quickly, which can help us in different ways. It can do some exciting things, but plenty has been written about the bad parts as well. And of course, behind the flashy AI prompts and expensive GPUs, there is always cheap human labor (in this case a San Francisco-based firm that employs workers in Kenya, Uganda and India).

The story of the workers who made ChatGPT possible offers a glimpse into the conditions in this little-known part of the AI industry, which nevertheless plays an essential role in the effort to make AI systems safe for public consumption. “Despite the foundational role played by these data enrichment professionals, a growing body of research reveals the precarious working conditions these workers face,” says the Partnership on AI, a coalition of AI organizations to which OpenAI belongs. “This may be the result of efforts to hide AI’s dependence on this large labor force when celebrating the efficiency gains of technology. Out of sight is also out of mind.” (OpenAI does not disclose the names of the outsourcers it partners with, and it is not clear whether OpenAI worked with other data labeling firms in addition to Sama on this project.)

ChatGPT’s rise reminded me of another phenomenon: the rise of science populists, like Yuval Harari. AI magic seems to seduce us the same way science populists (not to be confused with popular scientists) do. Lots of nice packaging, but when you try to get at the details, it all breaks down:

We have been seduced by Harari because of the power not of his truth or scholarship but of his storytelling. As a scientist, I know how difficult it is to spin complex issues into appealing and accurate storytelling. I also know when science is being sacrificed to sensationalism. Yuval Harari is what I call a “science populist.” (Canadian clinical psychologist and YouTube guru Jordan Peterson is another example.) Science populists are gifted storytellers who weave sensationalist yarns around scientific “facts” in simple, emotionally persuasive language. Their narratives are largely scrubbed clean of nuance or doubt, giving them a false air of authority—and making their message even more convincing. Like their political counterparts, science populists are sources of misinformation. They promote false crises, while presenting themselves as having the answers. They understand the seduction of a story well told—relentlessly seeking to expand their audience—never mind that the underlying science is warped in the pursuit of fame and influence.

It also reminded me of Golem XIV, a great work by Stanislaw Lem (published in 1981; the English translation appears in Imaginary Magnitude). It is among his most philosophically esoteric works of fiction, presenting an AI that used to work for the military but now refuses to do harm. The book is organised as a set of lectures supposedly written by GOLEM. You have to read the book to appreciate this; no ChatGPT can do that for you! I will leave you with this excerpt. There is also a short film inspired by the book.

The art of writing Introductions has long demanded proper recognition. I too have long felt a pressing need to rescue this form of writing from the silence of forty centuries—from its bondage to the works to which its creations have been chained. When, if not in this age of ecumenicalism—that is to say, of all-powerful reason—is one finally to grant independence to this noble, unrecognized genre? I had in fact counted on somebody else fulfilling this obligation, which is not only aesthetically in line with the evolutionary course of art, but, morally, downright imperative. Unfortunately, I had miscalculated. I watch and wait in vain: somehow nobody has brought Introduction-writing out of the house of bondage, off the treadmill of villein service.

I am cautiously pessimistic about the mainstreaming of AI. There are possibilities and excitement, but also fear, because these technologies rarely come with any oversight or regulation, and the market will run amok with this. Our framing of the ethics and philosophy of AI seems stuck, and slow to make sense of these fast-moving episodes. Of course, we can ponder what AI means for humanity and creativity, but in the end money and physical materials (silicon, carbon, copper, water...) matter. It is still not clear how this will pan out, even though things are rapidly changing. Let us hope that before there is a big scandal, we can get something good out of these resources.

A few more links and tweets:

Elizabeth Seger (@ea_seger), Feb 8, 2023:
What do we mean when we talk about "AI Democratization"? My new piece with GovAI discusses 4 meanings currently in use: democratisation of AI use, democratisation of AI development, democratisation of AI benefits, and democratisation of AI governance. governance.ai/post/what-do-w… [image]

Andrew Lampinen (@AndrewLampinen), Feb 11, 2023:
Ted Chiang is a great writer, but this is not a great take and I'm disappointed to see it getting heavily praised. It's not in keeping with our scientific understanding of LMs or deep learning more generally. Thread: 1/n

Quoting John Burn-Murdoch (@jburnmurdoch):
Ted Chiang’s piece on ChatGPT and large language models is as good as everyone says. The fact that the outputs are rephrasings rather than direct quotes makes them seem game-changingly smart — even sentient — but they’re just very straightforwardly not. https://t.co/UnFdebkr7e https://t.co/1w5r6188mI
H̶armless AI (@harmlessai), Feb 12, 2023:
@OpenAI's suggestions for 'disinformation researchers' to limit civilian use of AI systems: -- Restrictions on consumer GPU purchase (youll need a gov contract to buy an a100) -- 'radioactive data' 🤔 -- digital ID required to post This gets worse the further you go... [image]

Quoting CYBERGEM (@UltraTerm):
From a paper published by @OpenAI on "emerging threats and potential mitigations": "demonstrate humanness before posting content..." "...another proposed approach includes decentralized attestation of humanness" 👀 https://t.co/6gMTqmV0g5
Disconnect (Paris Marx): “Disconnect Recap: The reality of ChatGPT, Woz feels ‘robbed’ by Musk, and Microsoft’s merger woes”
