An open copyright casebook, featuring AI, Warhol and more

A series of comic-book panels symbolizing 'ideas,' including Mr Peabody, 'e=mc2,' a lightbulb and more.  Image: Jenkins and Boyle https://web.law.duke.edu/musiccomic/  CC BY-NC-SA 4.0 https://creativecommons.org/licenses/by-nc-sa/4.0/

I’m coming to DEFCON! On Aug 9, I’m emceeing the EFF POKER TOURNAMENT (noon at the Horseshoe Poker Room), and appearing on the BRICKED AND ABANDONED panel (5PM, LVCC - L1 - HW1–11–01). On Aug 10, I’m giving a keynote called “DISENSHITTIFY OR DIE! How hackers can seize the means of computation and build a new, good internet that is hardened against our asshole bosses’ insatiable horniness for enshittification” (noon, LVCC - L1 - HW1–11–01).


Few debates invite more uninformed commentary than “IP” – a loosely defined grab bag that regulates an ever-expanding sphere of our daily activities, despite the fact that almost no one, including senior executives in the entertainment industry, understands how it works.

Take reading a book. If the book arrives between two covers in the form of ink sprayed on compressed vegetable pulp, you don’t need to understand the first thing about copyright to read it. But if that book arrives as a stream of bits in an app, those bits are just the thinnest scrim of scum atop a terminally polluted ocean of legalese.

At the bottom layer: the license “agreement” for your device itself – thousands of words of nonsense that bind you not to replace its software with another vendor’s code, to use only the company’s own service depots, and so on. This garbage novella of legalese implicates trademark law, copyright, patent, and “paracopyrights” like the anticircumvention rule defined by Section 1201 of the DMCA:

https://www.eff.org/press/releases/eff-lawsuit-takes-dmca-section-1201-research-and-technology-restrictions-violate

Then there’s the store that sold you the ebook: it has its own soporific, cod-legalese nonsense that you must parse; this can be longer than the book itself, and it has been exquisitely designed by the world’s best-paid, best-trained lawyers to liquefy the brains of anyone who attempts to read it. Nothing will save you once your brains start leaking out of the corners of your eyes, your nostrils and your ears – not even converting the text to a brilliant graphic novel:

https://memex.craphound.com/2017/03/03/terms-and-conditions-the-bloviating-cruft-of-the-itunes-eula-combined-with-extraordinary-comic-book-mashups/

Even having Bob Dylan sing these terms will not help you grasp them:

https://pluralistic.net/2020/10/25/musical-chairs/#subterranean-termsick-blues

The copyright nonsense that accompanies an ebook transcends mere Newtonian physics – it exists in a state of quantum superposition. For you, the buyer, the copyright nonsense appears as a license, which allows the seller to add terms and conditions that would be invalidated if the transaction were a conventional sale. But for the author who wrote that book, the copyright nonsense insists that what has taken place is a sale (which pays a 25% royalty) and not a license (a 50% revenue-share). Truly, only a being capable of surviving after being smeared across the multiverse can hope to embody these two states of being simultaneously:

https://pluralistic.net/2022/06/21/early-adopters/#heads-i-win

But the challenge isn’t over yet. Once you have grasped the permissions and restrictions placed upon you by your device and the app that sold you the ebook, you still must brave the publisher’s license terms for the ebook – the final boss that you must overcome with your last hit point and after you’ve burned all your magical items.

This is by no means unique to reading a book. This bites us on the job, too, at every level. The McDonald’s employee who uses a third-party tool to diagnose the problems with the McFlurry machine is using a gadget whose mere existence constitutes a jailable felony:

https://pluralistic.net/2021/04/20/euthanize-rentier-enablers/#cold-war

Meanwhile, every single biotech researcher is secretly violating the patents that cover the entire suite of basic biotech procedures and techniques. Biotechnicians have a folk-belief in “patent fair use,” a thing that doesn’t exist, because they can’t imagine that patent law would be so obnoxious as to make basic science into a legal minefield.

IP is a perfect storm: it touches everything we do, and no one understands it.


AI’s productivity theater

A medieval tapestry depicting an overseer gesturing imperiously with his stick at three bent peasants who are grubbing in a field. The image has been altered. Contrasts and colors have been pushed into psychedelic pinks, greens and blues. Part of the tapestry fades into a 'code waterfall' effect as seen in the credit sequences of the Wachowskis' 'Matrix' movies. The overseer's head has been replaced with the hostile red eye of HAL 9000 from Kubrick's '2001: A Space Odyssey.'  Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg  CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en

Support me this summer on the Clarion Write-A-Thon and help raise money for the Clarion Science Fiction and Fantasy Writers’ Workshop!


When I took my kid to New Zealand with me on a book-tour, I was delighted to learn that grocery stores had special aisles where all the kids’-eye-level candy had been removed, to minimize nagging. What a great idea!

Related: countries around the world limit advertising to children, for two reasons:

1) Kids may not be stupid, but they are inexperienced, and that makes them gullible; and

2) Kids don’t have money of their own, so their path to getting the stuff they see in ads is nagging their parents, which creates a natural constituency to support limits on kids’ advertising (nagged parents).

There’s something especially annoying about ads targeted at getting credulous people to coerce or torment other people on behalf of the advertiser. For example, AI companies spent millions targeting your boss in an effort to convince them that you can be replaced with a chatbot that absolutely, positively cannot do your job.

Your boss has no idea what your job entails, and is (not so) secretly convinced that you’re a featherbedding parasite who only shows up for work because you fear the breadline, and not because your job is a) challenging, or b) rewarding:

https://pluralistic.net/2024/04/19/make-them-afraid/#fear-is-their-mind-killer

That makes them prime marks for chatbot-peddling AI pitchmen. Your boss would love to fire you and replace you with a chatbot. Chatbots don’t unionize, they don’t backtalk about stupid orders, and they don’t experience any inconvenient moral injury when ordered to enshittify the product:

https://pluralistic.net/2023/11/25/moral-injury/#enshittification

Bosses are Bizarro-world Marxists. Like a Marxist’s, your boss’s worldview is organized around the principle that every dollar you take home in wages is a dollar that isn’t available for executive bonuses, stock buybacks or dividends. That’s why your boss is insatiably horny for firing you and replacing you with software. Software is cheaper, and it doesn’t advocate for higher wages.

That makes your boss such an easy mark for AI pitchmen, which explains the vast gap between the valuation of AI companies and the utility of AI to the customers that buy those companies’ products. As an investor, buying shares in AI might represent a bet on the usefulness of AI – but for many of those investors, backing an AI company is actually a bet on your boss’s credulity and contempt for you and your job.

But bosses’ resemblance to toddlers doesn’t end with their credulity. A toddler’s path to getting that eye-height candy-bar goes through their exhausted parents. Your boss’s path to realizing the productivity gains promised by an AI salesman runs through you.

A new research report from the Upwork Research Institute offers a look into the bizarre situation unfolding in workplaces where bosses have been conned into buying AI and now face the challenge of getting it to work as advertised:

https://www.upwork.com/research/ai-enhanced-work-models

The headline findings tell the whole story:

  • 96% of bosses expect that AI will make their workers more productive;
  • 85% of companies are either requiring or strongly encouraging workers to use AI;
  • 49% of workers have no idea how AI is supposed to increase their productivity;
  • 77% of workers say using AI decreases their productivity.

Working at an AI-equipped workplace is like being the parent of a furious toddler who has bought a million Sea Monkey farms off the back page of a comic book, and is now destroying your life with demands that you figure out how to get the brine shrimp he ordered from a notorious Holocaust denier to wear little crowns like they do in the ad:

https://www.splcenter.org/fighting-hate/intelligence-report/2004/hitler-and-sea-monkeys

Bosses spend a lot of time thinking about your productivity. The “productivity paradox” describes a rapid, persistent decline in American productivity growth, starting in the 1970s and continuing to this day:

https://en.wikipedia.org/wiki/Productivity_paradox


Buy Art From People Who Are Alive

A sign on an NYC utility pole. It reads: Buy Art From People Who Are Alive

Tags: ai

Cyberspace is everting

A sign taped to a utility pole in NYC's Lower East Side. It reads: Make $15-35/hour Rating AI Content whenever where-ever

Tags: nyc ai

Neither the devil you know nor the devil you don’t

A pair of bethroned demons, identical save for different colored robes, face each other. The left demon's throne arm is emblazoned with the OpenAI logo. The right demon's throne bears the Universal Music Group logo. The two thrones are joined by a hellmouth - an anthropomorphic nightmare maw. The eyes of the demons and the hellmouth have been replaced with the glaring red machine-eye of HAL9000 from Kubrick's '2001: A Space Odyssey.' Between the demons toil two medieval serfs, bearing threshing implements.   Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg  CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en

TONIGHT (June 21) I’m doing an ONLINE READING for the LOCUS AWARDS at 16hPT. On SATURDAY (June 22) I’ll be in OAKLAND, CA for a panel (13hPT) and a keynote (18hPT) at the LOCUS AWARDS.


Spotify’s relationship to artists can be kind of confusing. On the one hand, they pay a laughably low per-stream rate, as in homeopathic residues of a penny. On the other hand, the Big Three labels get a fortune from Spotify. And on the other other hand, it makes sense that the rate for a stream heard by one person should be less than the rate for a song broadcast to thousands or millions of listeners.

But the whole thing makes sense once you understand the corporate history of Spotify. There’s a whole chapter about this in Rebecca Giblin’s and my 2022 book, Chokepoint Capitalism; we even made the audio for it a “Spotify exclusive” (it’s the only part of the audiobook you can hear on Spotify, natch):

https://pluralistic.net/2022/09/12/streaming-doesnt-pay/#stunt-publishing

Unlike online music predecessors like Napster, Spotify sought licenses from the labels for the music it made available. This gave those labels a lot of power over Spotify, but not all the labels, just three of them. Universal, Warner and Sony, the Big Three, control more than 70% of all music recordings, and more than 60% of all music compositions. These three companies are remarkably inbred. Their execs routinely hop from one to the other, and they regularly cross-license samples and other rights to each other.

The Big Three told Spotify that the price of licensing their catalogs would be high. First of all, Spotify had to give significant ownership stakes to all three labels. This put the labels in an unresolvable conflict of interest: as owners of Spotify, it was in their interests for licensing payments for music to be as low as possible. But as labels representing creative workers – musicians – it was in their interests for these payments to be as high as possible.

As it turns out, it wasn’t hard to resolve that conflict after all. You see, the money the Big Three got in the form of dividends, stock sales, etc was theirs to spend as they saw fit. They could share some, all, or none of it with musicians. But the Big Three’s contracts with musicians only gave those workers a guaranteed share of Spotify’s licensing payments.

Accordingly, the Big Three demanded those rock-bottom per-stream rates that Spotify is notorious for. Yeah, it’s true that a streaming per-listener payment should be lower than a radio per-play payment (which reaches thousands or millions of listeners), but even accounting for that, the math doesn’t add up. Multiply the per-listener stream rate by the number of listeners for, say, a typical satellite radio cast, and Spotify is clearly getting a massive discount relative to other services that didn’t make the Big Three into co-owners when they were kicking off.

But there’s still something awry: the Big Three take in gigantic fortunes from Spotify in licensing payments. How can the per-stream rate be so low but the licensing payments be so large? And why are artists seeing so little?

Again, it’s not hard to understand once you see the structure of Spotify’s deal with the Big Three. The Big Three are each guaranteed a monthly minimum payment, irrespective of the number of Spotify streams from their catalog that month. So Sony might be guaranteed, say, $30m a month from Spotify, but the ultra-low per-stream rate Sony insisted on means that all the Sony streams in a typical month add up to $10m. That means that Sony still gets $30m from Spotify, but only $10m is “attributable” to a specific recording artist who can make a claim on it. The rest of the money is Sony’s to play with: they can spread it around all their artists, some of their artists, or none of their artists. They can spend it on “artist development” (which might mean sending top execs on luxury junkets to big music festivals). It’s theirs. The lower the per-stream rate is, the more of that minimum monthly payment is unattributable, meaning that Sony can line its pockets with it.
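To make the accounting concrete, here’s that deal as a toy model. The $30m minimum and $10m in attributable streams are the hypotheticals from the paragraph above; the per-stream rate is an invented figure for illustration, not Sony’s actual terms:

# Toy model of a guaranteed-minimum streaming deal (all numbers hypothetical).
PER_STREAM_RATE = 0.003       # dollars per stream (illustrative assumption)
MONTHLY_MINIMUM = 30_000_000  # guaranteed monthly payment to the label

def label_month(streams: int) -> dict:
    """Split one month's payout into attributable and unattributable money."""
    attributable = streams * PER_STREAM_RATE     # tied to specific recordings
    payout = max(attributable, MONTHLY_MINIMUM)  # label never gets less than the minimum
    return {
        "payout": payout,
        "attributable": attributable,             # artists can claim a share of this
        "unattributable": payout - attributable,  # the label's to spend as it pleases
    }

# ~3.33bn streams at $0.003 each is $10m attributable; the other $20m is not.
print(label_month(3_333_333_333))

The lower the per-stream rate, the bigger the unattributable remainder – which is the whole point.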

But these monthly minimums are just part of the goodies that the Big Three negotiated for themselves when they were designing Spotify. They also get free promo, advertising, and inclusion on Spotify’s top playlists. Best (worst!) of all, the Big Three have “most favored nation” status, which means that every other label – the indies that rep the 30% of music not controlled by the Big Three – have to eat shit and take the ultra-low per-stream rate. Only those indies don’t get billions in stock, they don’t get monthly minimum guarantees, and they have to pay for promo, advertising, and inclusion on hot playlists.

When you understand the business mechanics of Spotify, all the contradictions resolve themselves. It is simultaneously true that Spotify pays a very low per-stream rate, that it pays the Big Three labels gigantic sums every month, and that artists are grotesquely underpaid by this system.


Microsoft pinky swears that THIS TIME they’ll make security a priority

A frame from a Peanuts animation, depicting Lucy yanking the football away from Charlie Brown, who is somersaulting through the sky. It has been altered. Lucy's head has been replaced with Microsoft's Clippy. Charlie Brown's head has been replaced with a 19th century caricature of a grinning Uncle Sam. The sky has been replaced with a 'code waterfall' effect as seen in the Wachowskis' 'Matrix' movies.

On June 20, I’m live onstage in LOS ANGELES for a recording of the GO FACT YOURSELF podcast. On June 21, I’m doing an ONLINE READING for the LOCUS AWARDS at 16hPT. On June 22, I’ll be in OAKLAND, CA for a panel and a keynote at the LOCUS AWARDS.


As the old saying goes, “When someone tells you who they are and you get fooled again, shame on you.” That goes double for Microsoft, especially when it comes to security promises.

Microsoft is, was, always has been, and always will be a rotten company. At every turn, throughout their history, they have learned the wrong lessons, over and over again.

That starts from the very earliest days, when the company was still called “Micro-Soft.” Young Bill Gates was given a sweetheart deal to supply the operating system for IBM’s PC, thanks to his mother’s connection. The nepo-baby enlisted his pal, Paul Allen (whom he’d later rip off for billions) and together, they bought someone else’s OS (and took credit for creating it – AKA, the “Musk gambit”).

Microsoft then proceeded to make a fortune by monopolizing the OS market through illegal, collusive arrangements with the PC clone industry – an industry that only existed because they could source third-party PC ROMs from Phoenix:

https://www.eff.org/deeplinks/2019/08/ibm-pc-compatible-how-adversarial-interoperability-saved-pcs-monopolization

Bill Gates didn’t become one of the richest people on earth simply by emerging from a lucky orifice; he also owed his success to vigorous antitrust enforcement. The IBM PC was the company’s first major initiative after it was targeted by the DOJ for a 12-year antitrust enforcement action. IBM tapped its vast monopoly profits to fight the DOJ, spending more on outside counsel to fight the DOJ antitrust division than the DOJ spent on all its antitrust lawyers, every year, for 12 years.

IBM’s delaying tactic paid off. When Reagan took the White House, he let IBM off the hook. But the company was still seriously scarred by its ordeal, and when the PC project kicked off, the company kept the OS separate from the hardware (one of the DOJ’s major issues with IBM’s previous behavior was its vertical monopoly on hardware and software). IBM didn’t hire Gates and Allen to provide it with DOS because it was incapable of writing a PC operating system: they did it to keep the DOJ from kicking down their door again.

The post-antitrust, gunshy IBM kept delivering dividends for Microsoft. When IBM turned a blind eye to the cloned PC-ROM and allowed companies like Compaq, Dell and Gateway to compete directly with Big Blue, this produced a whole cohort of customers for Microsoft – customers Microsoft could play off against each other, ensuring that every PC sold generated income for Microsoft, creating a wide moat around the OS business that kept other OS vendors out of the market. Why invest in making an OS when every hardware company already had an exclusive arrangement with Microsoft?

The IBM PC story teaches us two things: stronger antitrust enforcement spurs innovation and opens markets for scrappy startups to grow to big, important firms; as do weaker IP protections.

Microsoft learned the opposite: monopolies are wildly profitable; expansive IP protects monopolies; you can violate antitrust laws so long as you have enough monopoly profits rolling in to outspend the government until a Republican bootlicker takes the White House (Microsoft’s antitrust ordeal ended after GW Bush stole the 2000 election and dropped the charges against them). Microsoft embodies the idea that you either die a rebel hero or live long enough to become the evil emperor you dethroned.

From the first, Microsoft has pursued three goals:

  1. Get too big to fail;
  2. Get too big to jail;
  3. Get too big to care.

It has succeeded on all three counts. Much of Microsoft’s enduring power comes from succeeding IBM as the company that mediocre IT managers can safely buy from without being blamed for the poor quality of Microsoft’s products: “Nobody ever got fired for buying Microsoft” is 2024’s answer to “Nobody ever got fired for buying IBM.”

Microsoft’s secret sauce is impunity. The PC companies that bundle Windows with their hardware are held blameless for the glaring defects in Windows. The IT managers who buy company-wide Windows licenses are likewise insulated from the rage of the workers who have to use Windows and other Microsoft products.

Microsoft doesn’t have to care if you hate it because, for the most part, it’s not selling to you. It’s selling to a few decision-makers who can be wined and dined and flattered. And since we all have to use its products, developers have to target its platform if they want to sell us their software.

This rarified position has afforded Microsoft enormous freedom to roll out harebrained “features” that made things briefly attractive for some group of developers it was hoping to tempt into its sticky-trap. Remember when it put a Turing-complete scripting environment into Microsoft Office and unleashed a plague of macro viruses that wiped out years’ worth of work for entire businesses?

https://web.archive.org/web/20060325224147/http://www3.ca.com/securityadvisor/newsinfo/collateral.aspx?cid=33338

It wasn’t just Office; Microsoft’s operating systems have harbored festering swamps of godawful defects that were weaponized by trolls, script kiddies, and nation-states:

https://en.wikipedia.org/wiki/EternalBlue

Microsoft blamed everyone but itself for these defects, claiming that its code quality was no worse than anyone else’s, and insisting that the bulging arsenal of Windows-specific malware was the result of Windows being the juiciest target and thus the subject of the most malicious attention.


Real innovation vs Silicon Valley nonsense

A suite of old, mainframe-style computers. Looming over them is a carny barker, waving a cane. Perched atop one of the mainframes is a skeleton in a robe, drinking a goblet of blood. On the right, a credulous, gap-toothed child in old-timey clothes grins in marvel at the barker's pitch. The background is an out-of-focus filing room filled with ranked filing cabinets and busy clerks.

This is the LAST DAY to get my bestselling solarpunk utopian novel THE LOST CAUSE (2023) as a $2.99, DRM-free ebook!


If there’s any area where we need a lot of “innovation,” it’s climate tech. We’ve already blown through numerous points-of-no-return for a habitable Earth, and the pace is accelerating.

Silicon Valley claims to be the epicenter of American innovation, but what passes for innovation in Silicon Valley is some combination of nonsense, climate-wrecking tech, and climate-wrecking nonsense tech. Forget Jeff Hammerbacher’s lament about “the best minds of my generation thinking about how to make people click ads.” Today’s best-paid, best-trained technologists are enlisted to make boobytrapped IoT gadgets:

https://pluralistic.net/2024/05/24/record-scratch/#autoenshittification

Planet-destroying cryptocurrency scams:

https://pluralistic.net/2024/02/15/your-new-first-name/#that-dagger-tho

NFT frauds:

https://pluralistic.net/2022/02/06/crypto-copyright-%f0%9f%a4%a1%f0%9f%92%a9/

Or planet-destroying AI frauds:

https://pluralistic.net/2024/01/29/pay-no-attention/#to-the-little-man-behind-the-curtain

If that was the best “innovation” the human race had to offer, we’d be fucking doomed.

But – as Ryan Cooper writes for The American Prospect – there’s a far more dynamic, consequential, useful and exciting innovation revolution underway, thanks to muscular public spending on climate tech:

https://prospect.org/environment/2024-05-30-green-energy-revolution-real-innovation/

The green energy revolution – funded by the Bipartisan Infrastructure Law, the Inflation Reduction Act, and the CHIPS and Science Act – is accomplishing amazing feats, which are barely registering amid the clamor of AI nonsense and other hype. I did an interview a while ago about my climate novel The Lost Cause and the interviewer wanted to know what role AI would play in resolving the climate emergency. I was momentarily speechless, then I said, “Well, I guess maybe all the energy used to train and operate models could make it much worse? What role do you think it could play?” The interviewer had no answer.

Here’s a brief tour of the revolution:

  • 2023 saw 32GW of new solar energy come online in the USA (up 50% from 2022);
  • Wind increased from 118GW to 141GW;
  • Grid-scale batteries doubled in 2023 and will double again in 2024;
  • EV sales increased from 20,000 to 90,000/month.

https://www.whitehouse.gov/briefing-room/blog/2023/12/19/building-a-thriving-clean-energy-economy-in-2023-and-beyond/

The cost of clean energy is plummeting, and that’s triggering other areas of innovation, like using “hot rocks” to replace fossil fuel heat (25% of overall US energy consumption):

https://rondo.com/products

Increasing our access to cheap, clean energy will require a lot of materials, and material production is very carbon intensive. Luckily, the existing supply of cheap, clean energy is fueling “green steel” production experiments:

https://www.wdam.com/2024/03/25/americas-1st-green-steel-plant-coming-perry-county-1b-federal-investment/

Cheap, clean energy also makes it possible to recover valuable minerals from aluminum production tailings, a process that doubles as site-remediation:

https://interestingengineering.com/innovation/toxic-red-mud-co2-free-iron

And while all this electrification is going to require grid upgrades, there’s lots we can do with our existing grid, like power-line automation that increases capacity by 40%:

https://www.npr.org/2023/08/13/1187620367/power-grid-enhancing-technologies-climate-change

It’s also going to require a lot of storage, which is why it’s so exciting that we’re figuring out how to turn decommissioned mines into giant batteries. During the day, excess renewable energy is used to winch rock-laden platforms to the top of the mine-shafts; at night, the platforms are lowered back down and the winches act as generators, feeding energy into the high-availability power-lines that are already present at every mine-site:

https://www.euronews.com/green/2024/02/06/this-disused-mine-in-finland-is-being-turned-into-a-gravity-battery-to-store-renewable-ene
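The physics here is just gravitational potential energy, E = mgh. Here’s a minimal back-of-the-envelope sketch – the mass, shaft depth and round-trip efficiency are made-up figures, not the Finnish project’s actual specs:

# Energy stored by raising rock up a mine shaft: E = m*g*h.
G = 9.81  # gravitational acceleration, m/s^2

def stored_energy_kwh(mass_kg: float, height_m: float, efficiency: float = 0.8) -> float:
    """Recoverable energy (kWh) from lowering a mass back down a shaft.

    The 80% round-trip efficiency is an assumption, not a measured figure.
    """
    joules = mass_kg * G * height_m * efficiency
    return joules / 3.6e6  # 1 kWh = 3.6 MJ

# e.g. 5,000 tonnes of rock raised through a 500m shaft:
print(f"{stored_energy_kwh(5_000_000, 500):,.0f} kWh")  # ~5,450 kWh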

Why are we paying so much attention to Silicon Valley pump-and-dumps and ignoring all this incredible, potentially planet-saving, real innovation? Cooper cites a plausible explanation from the Apperceptive newsletter:

https://buttondown.email/apperceptive/archive/destructive-investing-and-the-siren-song-of/

Silicon Valley is the land of low-capital, low-labor growth. Software development requires fewer people than infrastructure and hard goods manufacturing, both to get started and to run as an ongoing operation. Silicon Valley is the place where you get rich without creating jobs. It’s run by investors who hate the idea of paying people. That’s why AI is so exciting for Silicon Valley types: it lets them fantasize about making humans obsolete. A company without employees is a company without labor issues, without messy co-determination fights, without any moral consideration for others. It’s the natural progression for an industry that started by misclassifying the workers in its buildings as “contractors,” and then graduated to pretending that millions of workers were actually “independent small businesses.”


You were promised a jetpack by liars

A cartoon image of a jetpack-flying man waves hello at a gap-toothed, awed young boy. Beneath them in the corner, a sinister figure with huge, hypnotic-spiral eyes works the switches on an imposing control panel. On his desk is a copy of Amazing Stories with the same rocketeer. In the image background is a faded, halftoned image of the NYC 1964 World's Fair.

TONIGHT (May 17), I’m at the INTERNET ARCHIVE in SAN FRANCISCO to keynote the 10th anniversary of the AUTHORS ALLIANCE.


As a science fiction writer, I find it weird that some sf tropes – like space colonization – have become culture-war touchstones. You know, that whole “we were promised jetpacks” thing.

I confess, I never looked too hard at the practicalities of jetpacks, because they are so obviously either used as a visual shorthand (as in the Jetsons) or as a metaphor. Even a brief moment’s serious consideration should make it clear why we wouldn’t want the distracted, stoned, drunk, suicidal, homicidal maniacs who pilot their two-ton killbots through our residential streets at 75mph to be flying over our heads with a reservoir of high explosives strapped to their backs.

Jetpacks can make for interesting sf eyeball kicks or literary symbols, but I don’t actually want to live in a world of jetpacks. I just want to read about them, and, of course, write about them:

https://reactormag.com/chicken-little/

I had blithely assumed that this was the principal reason we never got the jetpacks we were “promised.” I mean, there kind of was a promise, right? I grew up seeing videos of rocketeers flying their jetpacks high above the heads of amazed crowds, at World’s Fairs and Disneyland and big public spectacles. There was that scene in Thunderball where James Bond (the canonical Connery Bond, no less) makes an escape by jetpack. There was even a Gilligan’s Island episode where the castaways find a jetpack and scheme to fly it all the way back to Hawai'i:

https://www.imdb.com/title/tt0588084/

Clearly, jetpacks were possible, but they didn’t make any sense, so we decided not to use them, right?

Well, I was wrong. In a terrific new 99 Percent Invisible episode, Chris Berube tracks the history of all those jetpacks we saw on TV for decades, and reveals that they were all the same jetpack, flown by just one guy, who risked his life every time he went up in it:

https://99percentinvisible.org/episode/rocket-man/

The jetpack in question – technically a “rocket belt” – was built in the 1960s by Wendell Moore at the Bell Aircraft Corporation, with funding from the DoD. The Bell rocket belt used concentrated hydrogen peroxide as fuel, which burned at temperatures in excess of 1,000°. The rocket belt had a maximum flight time of just 21 seconds.

It was these limitations that disqualified the rocket belt from being used by anyone except stunt pilots with extremely high tolerances for danger. Any tactical advantage conferred on an infantryman by the power to soar over a battlefield for a whopping 21 seconds was totally obliterated by the fact that he would be encumbered by an extremely heavy, unwieldy and extremely explosive backpack – to say nothing of the high likelihood that rocketeers would plummet out of the sky after mistiming a fuel supply measured in seconds.

And of course, the rocket belt wasn’t going to be a civilian commuting option. If your commute can be accomplished in just 21 seconds of flight time, you should probably just walk, rather than strapping an inferno to your back and risking a lethal fall if you exceed a margin of error measured in just seconds.
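The arithmetic is unforgiving. Even granting a generous cruising speed – an assumption on my part; the Bell belt flew slower – 21 seconds of flight buys you a few hundred meters:

# Range of a 21-second flight (the cruising speed is an assumed figure).
FLIGHT_TIME_S = 21
ASSUMED_SPEED_KMH = 100  # generous; the Bell rocket belt was slower

distance_m = ASSUMED_SPEED_KMH / 3.6 * FLIGHT_TIME_S  # km/h -> m/s, times seconds
walk_min = distance_m / 80                            # ~80 m/min is a brisk walk
print(f"{distance_m:.0f} m of range, about a {walk_min:.0f}-minute walk")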

Once you know about the jetpack’s technical limitations, it’s obvious why we never got jetpacks. So why did we expect them? Because we were promised them, and the promise was a lie.


Even if you think AI search could be good, it won’t be good

A cane-waving carny barker in a loud checked suit and straw boater. His mouth has been replaced with the staring red eye of HAL9000 from Kubrick's '2001: A Space Odyssey.' He stands on a backdrop composed of many knobs, switches and jacks. The knobs have all been replaced with HAL's eye, too. Above his head hovers a search-box and two buttons reading 'Google Search' and 'I'm feeling lucky.' The countertop he leans on has been replaced with a code waterfall effect as seen in the credit sequences of the Wachowskis' 'Matrix' movies. Standing to his right on the countertop is a cartoon mascot with white gloves and booties and the head of a grinning poop emoji. He is striped with the four colors of the Google logo. To his left is a cluster of old mainframe equipment in miniature.    Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg  CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en  --  djhughman https://commons.wikimedia.org/wiki/File:Modular_synthe

TONIGHT (May 15), I’m in NORTH HOLLYWOOD for a screening of STEPHANIE KELTON’S FINDING THE MONEY; FRIDAY (May 17), I’m at the INTERNET ARCHIVE in SAN FRANCISCO to keynote the 10th anniversary of the AUTHORS ALLIANCE.


The big news in search this week is that Google is continuing its transition to “AI search” – instead of typing in search terms and getting links to websites, you’ll ask Google a question and an AI will compose an answer based on things it finds on the web:

https://blog.google/products/search/generative-ai-google-search-may-2024/

Google bills this as “let Google do the googling for you.” Rather than searching the web yourself, you’ll delegate this task to Google. Hidden in this pitch is a tacit admission that Google is no longer a convenient or reliable way to retrieve information, drowning as it is in AI-generated spam, poorly labeled ads, and SEO garbage:

https://pluralistic.net/2024/05/03/keyword-swarming/#site-reputation-abuse

Googling used to be easy: type in a query, get back a screen of highly relevant results. Today, clicking the top links will take you to sites that paid for placement at the top of the screen (rather than the sites that best match your query). Clicking further down will get you scams, AI slop, or bulk-produced SEO nonsense.

AI-powered search promises to fix this, not by making Google search results better, but by having a bot sort through the search results and discard the nonsense that Google will continue to serve up, and summarize the high quality results.

Now, there are plenty of obvious objections to this plan. For starters, why wouldn’t Google just make its search results better? Rather than building an LLM for the sole purpose of sorting through the garbage Google is either paid or tricked into serving up, why not just stop serving up garbage? We know that’s possible, because other search engines serve really good results by paying for access to Google’s back-end and then filtering the results:

https://pluralistic.net/2024/04/04/teach-me-how-to-shruggie/#kagi

Another obvious objection: why would anyone write the web if the only purpose for doing so is to feed a bot that will summarize what you’ve written without sending anyone to your webpage? Whether you’re a commercial publisher hoping to make money from advertising or subscriptions, or – like me – an open access publisher hoping to change people’s minds, why would you invite Google to summarize your work without ever showing it to internet users? Nevermind how unfair that is, think about how implausible it is: if this is the way Google will work in the future, why wouldn’t every publisher just block Google’s crawler?

A third obvious objection: AI is bad. Not morally bad (though maybe morally bad, too!), but technically bad. It “hallucinates” nonsense answers, including dangerous nonsense. It’s a supremely confident liar that can get you killed:

https://www.theguardian.com/technology/2023/sep/01/mushroom-pickers-urged-to-avoid-foraging-books-on-amazon-that-appear-to-be-written-by-ai

The promises of AI are grossly oversold, including the promises Google makes, like its claim that its AI had discovered millions of useful new materials. In reality, the number of useful new materials Deepmind had discovered was zero:

https://pluralistic.net/2024/04/23/maximal-plausibility/#reverse-centaurs

This is true of all of AI’s most impressive demos. Often, “AI” turns out to be low-waged human workers in a distant call-center pretending to be robots:

https://pluralistic.net/2024/01/31/neural-interface-beta-tester/#tailfins

Sometimes, the AI robot dancing on stage turns out to literally be just a person in a robot suit pretending to be a robot:

https://pluralistic.net/2024/01/29/pay-no-attention/#to-the-little-man-behind-the-curtain

The AI video demos that represent “an existential threat to Hollywood filmmaking” turn out to be so cumbersome as to be practically useless (and vastly inferior to existing production techniques):

https://www.wheresyoured.at/expectations-versus-reality/

But let’s take Google at its word. Let’s stipulate that:

a) It can’t fix search, only add a slop-filtering AI layer on top of it; and

b) The rest of the world will continue to let Google index its pages even if they derive no benefit from doing so; and

c) Google will shortly fix its AI, and all the lies about AI capabilities will be revealed to be premature truths that are finally realized.

AI search is still a bad idea. Because beyond all the obvious reasons that AI search is a terrible idea, there’s a subtle – and incurable – defect in this plan: AI search – even excellent AI search – makes it far too easy for Google to cheat us, and Google can’t stop cheating us.


AI “art” and uncanniness

An old woodcut of a disembodied man's hand operating a Ouija board planchette. It has been modified to add an extra finger and thumb. It has been tinted green. It has been placed on a 'code waterfall' backdrop as seen in the credit sequences of the Wachowskis' 'Matrix' movies.

TOMORROW (May 14), I’m on a livecast about AI AND ENSHITTIFICATION with TIM O'REILLY; on WEDNESDAY (May 15), I’m in NORTH HOLLYWOOD for a screening of STEPHANIE KELTON’S FINDING THE MONEY; FRIDAY (May 17), I’m at the INTERNET ARCHIVE in SAN FRANCISCO to keynote the 10th anniversary of the AUTHORS ALLIANCE.


When it comes to AI art (or “art”), it’s hard to find a nuanced position that respects creative workers’ labor rights, free expression, copyright law’s vital exceptions and limitations, and aesthetics.

I am, on balance, opposed to AI art, but there are some important caveats to that position. For starters, I think it’s unequivocally wrong – as a matter of law – to say that scraping works and training a model with them infringes copyright. This isn’t a moral position (I’ll get to that in a second), but rather a technical one.

Break down the steps of training a model and it quickly becomes apparent why it’s technically wrong to call this a copyright infringement. First, the act of making transient copies of works – even billions of works – is unequivocally fair use. Unless you think search engines and the Internet Archive shouldn’t exist, then you should support scraping at scale:

https://pluralistic.net/2023/09/17/how-to-think-about-scraping/

And unless you think that Facebook should be allowed to use the law to block projects like Ad Observer, which gathers samples of paid political disinformation, then you should support scraping at scale, even when the site being scraped objects (at least sometimes):

https://pluralistic.net/2021/08/06/get-you-coming-and-going/#potemkin-research-program

After making transient copies of lots of works, the next step in AI training is to subject them to mathematical analysis. Again, this isn’t a copyright violation.

Making quantitative observations about works is a longstanding, respected and important tool for criticism, analysis, archiving and new acts of creation. Measuring the steady contraction of the vocabulary in successive Agatha Christie novels turns out to offer a fascinating window into her dementia:

https://www.theguardian.com/books/2009/apr/03/agatha-christie-alzheimers-research

Programmatic analysis of scraped online speech is also critical to the burgeoning formal analyses of the language spoken by minorities, producing a vibrant account of the rigorous grammar of dialects that have long been dismissed as “slang”:

https://www.researchgate.net/publication/373950278_Lexicogrammatical_Analysis_on_African-American_Vernacular_English_Spoken_by_African-Amecian_You-Tubers

Since 1988, UCL’s Survey of English Usage has maintained its “International Corpus of English,” and scholars have plumbed its depths to draw important conclusions about the wide variety of Englishes spoken around the world, especially in postcolonial English-speaking countries:

https://www.ucl.ac.uk/english-usage/projects/ice.htm

The final step in training a model is publishing the conclusions of the quantitative analysis of the temporarily copied documents as software code. Code itself is a form of expressive speech – and that expressivity is key to the fight for privacy, because the fact that code is speech limits how governments can censor software:

https://www.eff.org/deeplinks/2015/04/remembering-case-established-code-speech/

Are models infringing? Well, they certainly can be. In some cases, it’s clear that models “memorized” some of the data in their training set, making the fair use, transient copy into an infringing, permanent one. That’s generally considered to be the result of a programming error, and it could certainly be prevented (say, by comparing the model to the training data and removing any memorizations that appear).

Not every seeming act of memorization is a memorization, though. While specific models vary widely, the amount of data from each training item retained by the model is very small. For example, Midjourney retains about one byte of information from each image in its training data. If we’re talking about a typical low-resolution web image of, say, 300kb, that would be one three-hundred-thousandth (about 0.00033%) of the original image.
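Spelled out, taking the one-byte figure and the 300kb image as given:

# One byte retained from a 300kb (300,000 byte) training image, as a fraction.
retained_bytes = 1
image_bytes = 300_000

fraction = retained_bytes / image_bytes
print(fraction)           # 3.33e-06: one three-hundred-thousandth
print(f"{fraction:.5%}")  # 0.00033% of the original image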

Typically in copyright discussions, when one work contains 0.00033% of another work, we don’t even raise the question of fair use. Rather, we dismiss the use as de minimis (short for de minimis non curat lex or “The law does not concern itself with trifles”):

https://en.wikipedia.org/wiki/De_minimis

Busting someone who takes 0.00033% of your work for copyright infringement is like swearing out a trespassing complaint against someone because the edge of their shoe touched one blade of grass on your lawn.

But some works or elements of work appear many times online. For example, the Getty Images watermark appears on millions of similar images of people standing on red carpets and runways, so a model that takes even an infinitesimal sample of each one of those works might still end up being able to produce a whole, recognizable Getty Images watermark.

The same is true for wire-service articles or other widely syndicated texts: there might be dozens or even hundreds of copies of these works in training data, resulting in the memorization of long passages from them.

This might be infringing (we’re getting into some gnarly, unprecedented territory here), but again, even if it is, it wouldn’t be a big hardship for model makers to post-process their models by comparing them to the training set, deleting any inadvertent memorizations. Even if the resulting model had zero memorizations, this would do nothing to alleviate the (legitimate) concerns of creative workers about the creation and use of these models.
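What would that post-processing look like? No model-maker’s actual pipeline is public here, but the idea is simple enough to sketch: sample the model’s outputs and flag any generation that reproduces a long verbatim run from the training corpus. (At real scale you’d want suffix arrays or Bloom filters; this is the concept, not anyone’s production system.)

# Conceptual sketch: detect verbatim regurgitation of training data.
def ngrams(text: str, n: int) -> set:
    """All n-word runs in a text, as a set of tuples."""
    words = text.split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def looks_memorized(output: str, corpus: list[str], n: int = 12) -> bool:
    """True if `output` shares any n-word run with a training document.

    n=12 is an arbitrary illustrative threshold, not an industry standard.
    """
    out_grams = ngrams(output, n)
    return any(out_grams & ngrams(doc, n) for doc in corpus)

# Usage: sample many generations, then filter or retrain on the flagged ones.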

So here’s the first nuance in the AI art debate: as a technical matter, training a model isn’t a copyright infringement. Creative workers who hope that they can use copyright law to prevent AI from changing the creative labor market are likely to be very disappointed in court:

https://www.hollywoodreporter.com/business/business-news/sarah-silverman-lawsuit-ai-meta-1235669403/

But copyright law isn’t a fixed, eternal entity. We write new copyright laws all the time. If current copyright law doesn’t prevent the creation of models, what about a future copyright law?


AI is a WMD

A lonely mud-brick well in a brown desert. It has been modified to add a 'caganer' - a traditional Spanish figure of a man crouching down and defecating - perched on the edge of the well. The caganer's head has been replaced with the menacing red eye of HAL9000 from Kubrick's '2001: A Space Odyssey.' The sky behind this scene has been blended with a 'code waterfall' effect as seen in the credit sequences of the Wachowskis' 'Matrix' movies.  Image: Cryteria (modified) https://commons.wikimedia.org/wiki/File:HAL9000.svg  CC BY 3.0 https://creativecommons.org/licenses/by/3.0/deed.en  --  Catherine Poh Huay Tan (modified) https://www.flickr.com/photos/68166820@N08/49729911222/  Laia Balagueró (modified) https://www.flickr.com/photos/lbalaguero/6551235503/  CC BY 2.0 https://creativecommons.org/licenses/by/2.0/

I’m in TARTU, ESTONIA! AI, copyright and creative workers’ labor rights (TOMORROW, May 10, 8AM: Science Fiction Research Association talk, Institute of Foreign Languages and Cultures building, Lossi 3, lobby). A talk for hackers on seizing the means of computation (TOMORROW, May 10, 3PM, University of Tartu Delta Centre, Narva 18, room 1037).


Fun fact: “The Tragedy Of the Commons” is a hoax created by the white nationalist Garrett Hardin to justify stealing land from colonized people and moving it from collective ownership, “rescuing” it from the inevitable tragedy by putting it in the hands of a private owner, who will care for it properly, thanks to “rational self-interest”:

https://pluralistic.net/2023/05/04/analytical-democratic-theory/#epistocratic-delusions

Get that? If control over a key resource is diffused among the people who rely on it, then (Hardin claims) those people will all behave like selfish assholes, overusing and undermaintaining the commons. It’s only when we let someone own that commons and charge rent for its use that (Hardin says) we will get sound management.

By that logic, Google should be the internet’s most competent and reliable manager. After all, the company used its access to the capital markets to buy control over the internet, spending billions every year to make sure that you never try a search-engine other than its own, thus guaranteeing it a 90% market share:

https://pluralistic.net/2024/02/21/im-feeling-unlucky/#not-up-to-the-task

Google seems to think it’s got the problem of deciding what we see on the internet licked. Otherwise, why would the company flush $80b down the toilet with a giant stock-buyback, and then do multiple waves of mass layoffs, from last year’s 12,000 person bloodbath to this year’s deep cuts to the company’s “core teams”?

https://qz.com/google-is-laying-off-hundreds-as-it-moves-core-jobs-abr-1851449528

And yet, Google is overrun with scams and spam, which find their way to the very top of the first page of its search results:

https://pluralistic.net/2023/02/24/passive-income/#swiss-cheese-security

The entire internet is shaped by Google’s decisions about what shows up on that first page of listings. When Google decided to prioritize shopping site results over informative discussions and other possible matches, the entire internet shifted its focus to producing affiliate-link-strewn “reviews” that would show up on Google’s front door:

https://pluralistic.net/2024/04/24/naming-names/#prabhakar-raghavan

This was catnip to the kind of sociopath who a) owns a hedge-fund and b) hates journalists for being pain-in-the-ass, stick-in-the-mud sticklers for “truth” and “facts” and other impediments to the care and maintenance of a functional reality-distortion field. These dickheads started buying up beloved news sites and converting them to spam-farms, filled with garbage “reviews” and other Google-pleasing, affiliate-fee-generating nonsense.

(These news-sites were vulnerable to acquisition in large part thanks to Google, whose dominance of ad-tech lets it cream 51 cents off every ad dollar and whose mobile OS monopoly lets it steal 30 cents off every in-app subscriber dollar):

https://www.eff.org/deeplinks/2023/04/saving-news-big-tech


Cigna’s nopeinator

An existential plane extending to an abstract background. Scattered through the scene are mainframes and control panels, being worked by faceless figures. In the center stands a downcast MD in old-fashioned scrubs. In the foreground to the right is an impatient older man in a business suit, staring at his watch and brandishing a sheaf of papers. In the background left is a grim reaper figure raising a glass of blood in a toast, the blood spattering his robes. In the center background in large magnetic 'computer font' lettering is the word 'NO.'

I’m touring my new, nationally bestselling novel The Bezzle! Catch me THURSDAY (May 2) in WINNIPEG, then Calgary (May 3), Vancouver (May 4), Tartu, Estonia, and beyond!

A yellow rectangle. On the left, in blue, are the words 'Cory Doctorow.' On the right, in black, is 'The Bezzle.' Between them is the motif from the cover of *The Bezzle*: an escheresque impossible triangle. The center of the triangle is a barred, smaller triangle that imprisons a silhouetted male figure in a suit. Two other male silhouettes in suits run alongside the top edges of the triangle.

Cigna – like all private health insurers – has two contradictory imperatives:

  1. To keep its customers healthy; and
  2. To make as much money for its shareholders as is possible.

Now, there’s a hypothetical way to resolve these contradictions, a story much beloved by advocates of America’s wasteful, cruel, inefficient private health industry: if health is a “market,” then a health insurer that fails to keep its customers healthy will lose those customers and thus make less for its shareholders. In this thought-experiment, Cigna will “find an equilibrium” between spending money to keep its customers healthy, thus retaining their business, and also “seeking efficiencies” to create a standard of care that’s cost-effective.

But health care isn’t a market. Most of us get our health-care through our employers, who offer a small handful of options that nevertheless manage to be so complex in their particulars that they’re impossible to directly compare, and somehow all end up not covering the things we need them for. Oh, and you can only change insurers once or twice per year, and doing so incurs savage switching costs, like losing access to your family doctor and specialist providers.

Cigna – like other health insurers – is “too big to care.” It doesn’t have to worry about losing your business, so it grows progressively less interested in even pretending to keep you healthy.

The most important way for an insurer to protect its profits at the expense of your health is to deny care that your doctor believes you need. Cigna has transformed itself into a care-denying assembly line.

Dr Debby Day is a Cigna whistleblower. Dr Day was a Cigna medical director, charged with reviewing denied cases, a job she held for 20 years. In 2022, she was forced out by Cigna. Writing for ProPublica and The Capitol Forum, Patrick Rucker and David Armstrong tell her story, revealing the true “equilibrium” that Cigna has found:

https://www.propublica.org/article/cigna-medical-director-doctor-patient-preapproval-denials-insurance

Dr Day took her job seriously. Early in her career, she discovered a pattern of claims from doctors for an expensive therapy called intravenous immunoglobulin in cases where this made no medical sense. Dr Day reviewed the scientific literature on IVIG and developed a Cigna-wide policy for its use that saved the company millions of dollars.

This is how it’s supposed to work: insurers (whether private or public) should permit all the medically necessary interventions and deny interventions that aren’t supported by evidence, and they should determine the difference through internal reviewers who are treated as independent experts.

But as the competitive landscape for US healthcare dwindled – and as Cigna bought out more parts of its supply chain and merged with more of its major rivals – the company became uniquely focused on denying claims, irrespective of their medical merit.

In Dr Day’s story, the turning point came when Cigna outsourced pre-approvals to registered nurses in the Philippines. Legally, a nurse can approve a claim, but only an MD can deny a claim. So Dr Day and her colleagues would have to sign off when a nurse deemed a procedure, therapy or drug to be medically unnecessary.

This is a complex determination to make, even under ideal circumstances, but Cigna’s Filipino outsource partners were far from ideal. Dr Day found that nurses were “sloppy” – they’d confuse a mother with her newborn baby and deny care on those grounds, or confuse an injured hip with an injured neck and deny permission for an ultrasound. Dr Day reviewed a claim for a test that was denied because STI tests weren’t “medically necessary” – but the patient’s doctor had applied for a test to diagnose a toenail fungus, not an STI.

Even if the nurses’ evaluations had been careful, Dr Day wanted to conduct her own, thorough investigation before overriding another doctor’s judgment about the care that doctor’s patient warranted. When a nurse recommended denying care “for a cancer patient or a sick baby,” Dr Day would research medical guidelines, read studies and review the patient’s record before signing off on the recommendation.

This is how the claims denial process is supposed to work, but it’s not how it actually worked at Cigna. Dr Day was markedly slower than her peers, who would “click and close” claims by pasting the nurses’ own rationale for denying the claim into the relevant form, acting as rubber-stamps rather than skilled reviewers.


“Humans in the loop” must detect the hardest-to-spot errors, at superhuman speed

I’m touring my new, nationally bestselling novel The Bezzle! Catch me SATURDAY (Apr 27) in MARIN COUNTY, then Winnipeg (May 2), Calgary (May 3), Vancouver (May 4), and beyond!

A yellow rectangle. On the left, in blue, are the words 'Cory Doctorow.' On the right, in black, is 'The Bezzle.' Between them is the motif from the cover of *The Bezzle*: an escheresque impossible triangle. The center of the triangle is a barred, smaller triangle that imprisons a silhouetted male figure in a suit. Two other male silhouettes in suits run alongside the top edges of the triangle.

If AI has a future (a big if), it will have to be economically viable. An industry can’t spend 1,700% more on Nvidia chips than it earns indefinitely – not even with Nvidia being a principal investor in its largest customers:

https://news.ycombinator.com/item?id=39883571

A company that pays 0.36-1 cents/query for electricity and (scarce, fresh) water can’t indefinitely give those queries away by the millions to people who are expected to revise those queries dozens of times before eliciting the perfect botshit rendition of “instructions for removing a grilled cheese sandwich from a VCR in the style of the King James Bible”:

https://www.semianalysis.com/p/the-inference-cost-of-search-disruption
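Multiply those per-query costs out and the problem is plain. The cent-range comes from the paragraph above; the user counts and revision rates are invented for illustration:

# Rough monthly compute burn for free chatbot queries (illustrative numbers).
COST_PER_QUERY = (0.0036, 0.01)  # dollars: the 0.36-1 cent range cited above

def monthly_burn(users: int, sessions_per_month: int, revisions_per_session: int):
    """Low and high monthly cost if every session takes several tries."""
    queries = users * sessions_per_month * revisions_per_session
    return tuple(queries * cost for cost in COST_PER_QUERY)

# 10m free users, 20 sessions a month, a dozen revisions per session:
lo, hi = monthly_burn(10_000_000, 20, 12)
print(f"${lo / 1e6:.0f}m-${hi / 1e6:.0f}m per month, before salaries or training runs")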

Eventually, the industry will have to uncover some mix of applications that will cover its operating costs, if only to keep the lights on in the face of investor disillusionment (this isn’t optional – investor disillusionment is an inevitable part of every bubble).

Now, there are lots of low-stakes applications for AI that can run just fine on the current AI technology, despite its many – and seemingly inescapable – errors (“hallucinations”). People who use AI to generate illustrations of their D&D characters engaged in epic adventures from their previous gaming session don’t care about the odd extra finger. If the chatbot powering a tourist’s automatic text-to-translation-to-speech phone tool gets a few words wrong, it’s still much better than the alternative of speaking slowly and loudly in your own language while making emphatic hand-gestures.

There are lots of these applications, and many of the people who benefit from them would doubtless pay something for them. The problem – from an AI company’s perspective – is that these aren’t just low-stakes, they’re also low-value. Their users would pay something for them, but not very much.

For AI to keep its servers on through the coming trough of disillusionment, it will have to locate high-value applications, too. Economically speaking, the function of low-value applications is to soak up excess capacity and produce value at the margins after the high-value applications pay the bills. Low-value applications are a side-dish, like the coach seats on an airplane whose total operating expenses are paid by the business class passengers up front. Without the principal income from high-value applications, the servers shut down, and the low-value applications disappear:

https://locusmag.com/2023/12/commentary-cory-doctorow-what-kind-of-bubble-is-ai/

Now, there are lots of high-value applications the AI industry has identified for its products. Broadly speaking, these high-value applications share the same problem: they are all high-stakes, which means they are very sensitive to errors. Mistakes made by apps that produce code, drive cars, or identify cancerous masses on chest X-rays are extremely consequential.

Some businesses may be insensitive to those consequences. Air Canada replaced its human customer service staff with chatbots that just lied to passengers, stealing hundreds of dollars from them in the process. But the process for getting your money back after you are defrauded by Air Canada’s chatbot is so onerous that only one passenger has bothered to go through it, spending ten weeks exhausting all of Air Canada’s internal review mechanisms before fighting his case for weeks more at the regulator:

https://bc.ctvnews.ca/air-canada-s-chatbot-gave-a-b-c-man-the-wrong-information-now-the-airline-has-to-pay-for-the-mistake-1.6769454

There’s never just one ant. If this guy was defrauded by an AC chatbot, so were hundreds or thousands of other fliers. Air Canada doesn’t have to pay them back. Air Canada is tacitly asserting that, as the country’s flagship carrier and near-monopolist, it is too big to fail and too big to jail, which means it’s too big to care.

Air Canada shows that for some business customers, AI doesn’t need to be able to do a worker’s job in order to be a smart purchase: a chatbot can replace a worker, fail to do that worker’s job, and still save the company money on balance.


“But the reality is that you can’t build a hundred-billion-dollar industry around a technology that’s kind of useful, mostly in mundane ways, and that boasts perhaps small increases in productivity if and only if the people who use it fully understand its limitations. And you certainly can’t justify the kind of exploitation, extraction, and environmental cost that the industry has been mostly getting away with, in part because people have believed their lofty promises of someday changing the world.”

-Molly White

“I find my feelings about AI are actually pretty similar to my feelings about blockchains: they do a poor job of much of what people try to do with them, they can’t do the things their creators claim they one day might, and many of the things they are well suited to do may not be altogether that beneficial. And while I do think that AI tools are more broadly useful than blockchains, they also come with similarly monstrous costs.”

-Molly White