Apr. 8 - There’s not much talk in Denmark about the total solar eclipse that’s been dominating the American news cycle all weekend. When I say “not much talk,” I mean none.
I mean the phrase “path of totality” is only met with blank stares.
NASA has an entire page dedicated to the path of totality, including an interactive map:
Americans are very excited. The media are giving the event the kind of flood-the-zone coverage usually reserved for Trump tweets and Taylor Swift’s romantic life.
The eclipse won’t reach America until about 20:00 Danish time, so I assume Danish news media will latch onto American news coverage of the phenomenon this evening.
It’s a nice story because it’s harmless and there’s nothing political about it—yet. The politics will come, I’m sure, because they always do, but for now it’s nice to see Americans genuinely united and excited about something bigger than themselves.
It’s an important reminder that such things are still possible.
But enough about astronomical phenomena. Let’s talk about Kean Birch.
Kean Birch is a Canadian academic. Specifically, he’s the Director of the Institute for Technoscience & Society and Professor in the Graduate Program in Science and Technology Studies at York University, Canada.
(“Damn it, I’m the DITS from York, not the ditz from York!”)
He has a Canadian radio program and a private podcast that I’ve never listened to.
He’s also the author of one of the stupidest takes on generative AI I’ve seen—and given that “stupid takes on AI” is an actual genre, that’s saying something.
It was published last Thursday (April 4) in the Toronto Globe and Mail and was entitled, “Generative AI is simply a waste of our time and money.”
I’d stumbled over that headline on Google News and, sucker that I am, couldn’t resist the transparent clickbait.
More fool me.
Like a well-trained schoolboy, Birch lays it all out for us up front in a simple declarative topic sentence:
Increasingly, generative AI seems like a waste of our collective time and money. While generative AI technologies, like ChatGPT, have some playful uses, they potentially come with enormous social costs and limited social benefit.
“Some playful uses.”
I know anecdotes are not data, but I myself have experienced enough of a productivity boost from generative AI, both personally and professionally, that this rang false for me.
Here’s something more like data from CIO magazine:
According to a research report IDC released in November, based on a survey of over 2,100 business leaders and decision makers with responsibility for AI transformation, 71% of companies already using AI are seeing returns on their AI investments within 14 months, averaging $3.50 for every $1 spent.
That’s $3.50 back for every dollar invested, for 71% of companies—and within 14 months. Playful!
And here’s the Harvard Business Review:
In a study conducted by the National Bureau of Economic Research (NBER), it was found that customer support agents using a generative pre-trained transformer (GPT) AI tool saw a nearly 14% increase in their productivity.
Their playfulness, they mean.
Birch then reminds his readers what generative AI is, emphasizing that it’s “not an autonomous, intelligent system, able to think and decide like we do” but “a mimic of human action, parroting back our words and images.”
“It doesn’t think,” he reminds us, “it guesses—and often quite badly in what is termed AI hallucination.”
Fair enough—that’s all true, but also superfluous because his five arguments against generative AI have nothing to do with its technical shortcomings.
So here are the five arguments he makes to justify his thesis that generative AI is all a big waste of time and money:
Argument 1: Generative AI will be bad for the climate.
…the more AI we deploy, the more computing capacity we need. Not only does this take computing capacity away from other, potentially more useful activities, it requires an enormous amount of energy. These environmental costs are well-known, but they will get significantly worse as AI spreads.
He’s correct that AI is fueling a surge in demand for computing capacity, and that the increased capacity will demand more energy, and that increased energy generation will have an environmental impact.
But you can’t assess a technology’s overall impact by looking only at its costs—if you did, we’d have strangled the internal combustion engine in its crib. At least 100 million human lives have been lost in car accidents going back to 1900. Cars have produced massive amounts of pollution, which has in turn produced human illness and suffering. The manufacture of cars (and motorcycles, trucks, planes, helicopters, etc) has also produced pollution, and has absorbed an ungodly amount of resources that could have been used for “potentially more useful activities.” And the coup de grâce: cars and trucks have until very recently depended on fossil fuels, making us so reliant on oil that it’s become a major driver of geopolitics (to say nothing of the current climate hysteria).
But all of that shrivels to nothing when weighed against the number of human lives saved or improved by the improvements in transportation enabled by internal combustion and gas turbine engines.
So generative AI certainly has environmental costs, but they have to be weighed against its possible benefits—and not just now, but going forward. After all, ChatGPT is still just the Model T of generative AI. Just wait until we get to the Ferraris!
Argument 2: Generative AI will cost a lot of money.
Even leaving aside the ecological costs, AI’s power-hungry nature will lead to rising energy prices across society.
Then there’s the fact that AI is underpinned by significant capital investment in computing infrastructure. AI is built on the back of fibre optics, servers, data centres, etc. We can see the cost of this in Big Tech’s corporate reports, which highlight the billions they’ve spent and are spending on this infrastructure. Big Tech now controls much of our computing capacity (which is a social cost in itself), but we will need to invest considerably more to make AI commercially viable as an everyday technology. This investment could go somewhere else, more useful.
(Quick aside: Birch seems to assume that increased demand will necessarily result in increased prices. That’s an overly simplistic assumption with some very famous contradictions.)
Every investment could go somewhere else. That’s the definition of economics: “the allocation of scarce resources that have alternative uses.”
What’s interesting is Birch’s modifier: more useful.
More useful according to whom? And: more useful than $3.50 back on every dollar within 14 months? More useful than a 14% productivity boost for customer-support agents?
We’ll have to wait until Argument 5 to hear his answer. (His answer will amaze you!)
Yes, generative AI is absorbing a lot of capital that could be going elsewhere. You know what else has done that? Every other technological development in human history—including fire, the wheel, and battery-operated sex toys.
It’s also true of the money you spent on milk last week.
Argument 3: Spam, spam, spam, eggs and spam—but hold the eggs
As AI continues on this trajectory, it is threatening to overwhelm us with AI spam. AI needs data to train models, but content producers – such as newspapers, websites and authors – are now challenging the scraping of their copyrighted content by suing organizations like OpenAI. More critically, as AI becomes saturated with AI-produced “data” released into the internet, it will collapse in on itself: As political economist Jathan Sadowski poetically puts it, we are facing the growing social cost from “Habsburg AI,” by which he means artificial intelligence technologies that are “so heavily trained on the outputs of other generative AIs that it becomes an inbred mutant, likely with exaggerated, grotesque feature.” This means hallucinations upon hallucinations, creating all sorts of unforeseen consequences.
Poetic!
Kean Birch is a professor: someone who teaches people. So is Jathan Sadowski. But to become professors they had to be taught by professors who had themselves been taught by professors, and so on. That’s how the transmission of knowledge works.
(It also illustrates the iron-clad principle of GIGO: garbage in, garbage out.)
The problem with the “Habsburg AI” argument is that if it’s true—if training off of data created by the same mechanism attempting to learn from it invariably produces grotesqueries—then isn’t all of human knowledge also inbred mutant spam? After all, each generation is being trained by the generations that preceded it.
Obviously not, because natural selection works as neatly with ideas as it does with organisms: mutations spring up and environments change, producing adaptations. Whatever works survives, whatever doesn’t, does not.
Why would the training of generative AI be any different than educating an incoming class of freshmen? The Habsburg AI argument assumes that human beings are unthinking automatons who will simply accept whatever they see on the internet as true and move on, and that everything published on the internet will be weighted the same and left alone. But that’s not how people work and it’s not how the internet works.
I could posit that zebras are a species of turtle whose brightly feathered shells were used as war helmets by the Aztecs. Like this “zebra” right here:
I could enlist an army of bots to push that information out across the internet in articles, posts, tweets, and comments. Like this one, just now solicited from GPT:
The Aztec Zebra
In the lush, unforgiving terrains of ancient Mesoamerica, amidst the towering cacti and dense, whispering jungles, roamed the enigmatic and now almost mythical species known as the "Aztec Zebra." This remarkable creature, bearing little resemblance to its monochrome namesake, was a marvel of nature, its shell a vibrant tapestry of feathers that shimmered in the sun with hues of sapphire, emerald, and gold. The geography of its habitat was as diverse as the colors of its plumage, spanning from the misty highlands to the verdant river valleys that crisscrossed the Aztec Empire. Revered by Aztec warriors for its near-mystical presence on the battlefield, the Aztec Zebra's shell was not just a natural wonder but a symbol of divine protection and martial prowess. So coveted were these feathered shells that they became the centerpiece of Aztec military regalia, believed to bestow upon the wearer the strength of the gods and the ferocity of the jaguar. Tales of its origin were as wild as the creature itself, with some claiming it was born from the tears of Quetzalcoatl, while others swore it was the earth's joyful cry at the sight of the first sunrise. Regardless of its true genesis, the Aztec Zebra remains a testament to the wonders of a world where magic and reality danced as one under the watchful eyes of the gods.
Whoa! You just made that article part of the internet, moron! Generative AI is going to train on that now!
It’s possible (maybe inevitable) that some tiny, brain-addled portion of humanity would take that information seriously.
We can test that theory: mail the image and article to the stupidest person you know and find out.
Anyone educated enough to read is going to recognize that paragraph for the nonsense it is.
The Habsburg AI argument requires that human judgment be removed entirely from the equation. It assumes that generative AI will create nonsense content and publish it all over the web, where bots trawling for AI training material will suck it up uncritically, and bada-bing, next thing you know the AI-edited version of Britannica contains made-up lithographs of Cortez and his men battling Zebra-helmeted Aztec warriors.
That’s not so much an argument against generative AI as an argument against the existence of human intelligence.
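To be fair to Sadowski, the “inbreeding” he describes is a documented failure mode—researchers call it “model collapse”—but it only bites when the loop is completely sealed, with no human input ever re-entering it. You can watch it happen with nothing fancier than a one-dimensional Gaussian repeatedly fitted to its own samples. (A toy sketch of my own, purely illustrative; none of the numbers come from Birch’s column.)

```python
import random
import statistics

def next_generation(data, n_samples=5):
    """Fit a Gaussian to the data, then 'publish' fresh samples from it."""
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    return [random.gauss(mu, sigma) for _ in range(n_samples)]

random.seed(42)
data = [random.gauss(0, 1) for _ in range(5)]  # generation 0: "human" data
for _ in range(500):
    data = next_generation(data)  # each generation trains only on the last

# Finite-sample error compounds generation after generation, and the
# spread of the data withers away: the "Habsburg" effect in miniature.
print(f"final stdev: {statistics.stdev(data):.6f}")
```

Notice the condition doing all the work: no fresh data ever re-enters the loop. That is precisely the condition the real internet, full of judging, curating, fact-checking humans, does not satisfy.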
Argument 4: Generative AI will create social problems
Perhaps most important, AI entails passing the buck for its social impacts on to the rest of society, even when it provides no social benefit. AI will necessarily lead to significant social change and associated costs as we are forced to transform our social, political and economic institutions to deal with the fallout from its effects. Even something as basic as AI-generated images will create a collective cost when it comes to dealing with their effects on our political institutions; for example, it’s going to cost a fortune to adapt our political system to protect ourselves against generative AI’s turbo-charging of political misinformation.
This is a sneaky argument. It begins with the cavalier assumption that generative AI produces no social benefit, and then asserts—almost entirely without supporting evidence—that it will cause a lot of social problems.
The only example Birch provides of this social “fallout” is the “turbo-charging of political misinformation” by AI-generated images—which, again, is not an argument against AI-generated images but against humanity’s ability to know when it’s being bullshitted.
“OMG, you guys, I just saw a photo of Joe Biden eating a baby—with Hitler!”
Photoshop technology has existed for decades. Doctored images are already part of the culture—and were long before Photoshop was a thing:
And really: at what point in human history was politics not a realm of lies and deception?
Birch’s fourth argument boils down to “generative AI is bad and produces no value, and will have only negative consequences that will screw up our societies so badly we’ll have to spend zillions of dollars mitigating its damage.”
Birch’s second argument was that AI is sucking up investment capital that could go to other “more useful” things. Connecting these arguments means that investors are deliberately (even enthusiastically) diverting their money from profitable ventures into something that’s going to ruin us all.
This, too, is an argument against human intelligence. Those investors obviously see a potential that Birch does not—and as we saw from the CIO and HBR citations above, it’s more than mere potential they’re seeing.
Birch doesn’t explain how or why that investor optimism is misplaced. And since he won’t even acknowledge the productivity gains being experienced right now, he obviously can’t explain how those trends will be reversed in the long run—even though that’s the only possible argument he could make to support his thesis.
Which brings us to his closer:
Argument 5: Generative AI isn’t controlled by experts like Kean Birch
The heart of the problem is that generative AI is not really designed to address actual social problems. We urgently need the expertise of social scientists to be able to make much-needed collective decisions about the future of generative AI that we want; we can’t leave it to business, markets or technologists. We need to turn to these experts to understand our social or collective problems and the challenges we want generative AI to address. We then need to work out whether – not simply how – artificial intelligence can contribute to finding viable solutions, and then getting AI companies to focus on producing those solutions.
(My emphasis.)
Are you amazed to learn that “the heart of the problem” is that generative AI is being developed by business, markets, and technologists instead of. . . social scientists like Kean Birch?
Business, consumers, and tech specialists just don’t know enough to know what to develop, or for whom, or with what constraints—not without destroying the planet and ending life as we know it. No, such sophisticated decisions can only be made by the anointed. By people like Kean Birch.
“AI is not really designed to address actual social problems,” he writes.
That’s correct.
But the list of things “not designed to address social problems” is a long one, and includes many things most of us wouldn’t want to (and in some cases could not) live without.
Think about what he’s really saying, the gestalt message of his whole column: Generative AI doesn’t produce anything useful, it will cause massive social problems, it will be ruinous for the climate, it’s eating up money that could be spent on more useful things, and you stupid people won’t give experts like me control over it!
Generative AI is going to screw a lot of things up. It’s going to complicate a lot of things and have a lot of unforeseen consequences.
But it’s here, it’s already producing value, and it’s only going to get better, faster, and cheaper—and improve all our lives in the long run.
On this date in 1913, the 17th Amendment of the U.S. Constitution was ratified, allowing the direct election of senators.
Buddha (Siddhartha Gautama) has many birth dates attributed to him, and there’s as much confusion about the year (and even century) of his birth as there is about the particular date. Today, April 8, is one such date.
Today is the birthday of Patricia Arquette (1968), Kofi Annan (1938), and Omar Bradley (1893).
It’s Eid al-Fitr in much of the Arab world, Chakri Day in Thailand, Ramazan Bayramı in Turkey, and Path of Totality Day in the United States.
© 2024, The Moron’s Almanac