Friendly Google and Enemy Remedies

Railroads were, in theory, an attractive business: while they cost a lot to build, once built, the marginal cost of carrying additional goods was extremely low. Sure, you needed to actually run a train, which needed fuel and workers and depreciated the equipment involved, but those costs were minimal compared to the revenue that could be earned from carrying goods for customers who had to pay to gain access to the national market unlocked by said railroads.

The problem for railroad entrepreneurs in the 19th century was that they were legion: as locomotion technology advanced and became standardized, and steel production became a massive industry in its own right, fierce competition arose to tie the sprawling United States together. This was a problem for those seemingly attractive railroad economics: sure, marginal costs were low, but that meant a race to the bottom on pricing, because the market-clearing price only needs to cover marginal costs, and low marginal costs mean a low price floor. It was the investors in railroads — the ones who paid the upfront cost of building the tracks in the hope of large profits on low marginal cost service, and who were often competing for (and ultimately with) government land grants and subsidies, further fueling the boom — who were left holding the bag.
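The dynamic can be made concrete with a toy model — all numbers here are hypothetical, purely to illustrate the race to the bottom: under price competition, each railroad undercuts the other until price falls to marginal cost, leaving the fixed cost of the track unrecovered.

```python
# Hypothetical illustration of low-marginal-cost price competition.
FIXED_COST = 1_000_000   # upfront cost of laying track (hypothetical)
MARGINAL_COST = 2        # cost to carry one additional ton of freight (hypothetical)

def equilibrium_price(marginal_cost: float, start_price: float, step: float = 1.0) -> float:
    """Rivals undercut one step at a time; undercutting stops only when the
    next cut would mean selling below marginal cost."""
    price = start_price
    while price - step >= marginal_cost:
        price -= step  # a rival can still profitably undercut
    return price

price = equilibrium_price(MARGINAL_COST, start_price=10.0)
tons_carried = 300_000
profit = (price - MARGINAL_COST) * tons_carried - FIXED_COST
print(price)   # driven all the way down to marginal cost
print(profit)  # the fixed cost is never recovered: investors hold the bag
```

With these made-up numbers the price is competed down to the $2 marginal cost, so the $1,000,000 of track is a pure loss — the investors' predicament in miniature.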

This story, like so many technological revolutions, culminated in a crash, in this case the Panic of 1873. The triggering factor for the Panic of 1873 was actually currency, as the U.S. responded to a German decision to no longer mint silver coins by changing its policy of backing the U.S. dollar with both silver and gold to gold only, which dramatically tightened the money supply, leading to a steep rise in interest rates. This was a big problem for railway financiers who could no longer service their debts; their bankruptcies led to a slew of failed banks and even the temporary closure of the New York Stock Exchange, which rippled through the economy, leading to a multi-year depression and the failure of over 100 railroads within the following year.

Meanwhile, oil, then primarily refined into kerosene for lighting, was discovered in Titusville, Pennsylvania in 1859, Bradford, Pennsylvania in 1871, and Lima, Ohio in 1885. The most efficient refineries in the world, thanks to both vertical integration and innovative product creation using waste products from the kerosene refinement process, including the novel use of gasoline as a power source, were run by John D. Rockefeller in Cleveland. Rockefeller’s efficiencies led to a massive boom in demand for kerosene lighting that Rockefeller was determined to meet; his price advantage — driven by innovation — allowed him to undercut competitors, forcing them to sell to Rockefeller, who would then increase their efficiency, furthering the supply of cheap kerosene, which drove even more demand.

This entire process entailed moving a lot of products around in bulk: oil needed to be shipped to refineries, and kerosene to burgeoning cities across the Midwest and East Coast. This was a godsend to the struggling railroad industry: instead of scrambling to fill trains with freight from small-scale and sporadic shippers, they signed long-term contracts with Standard Oil; guaranteed oil transportation covered marginal costs, freeing the railroads to charge higher profit-making rates to those small-scale and sporadic shippers. Those contracts, in turn, gave Standard Oil a durable price advantage in terms of kerosene: having bought up the entire Ohio refining industry through a price advantage earned through innovation and efficiency, Standard Oil was now in a position to do the same to the entire country through a price advantage gained through contracts with railroad companies.

The Sherman Antitrust Act

There were, to be clear, massive consumer benefits to Rockefeller’s actions: Standard Oil, more than any other entity, brought literal light to the masses, even if the masses didn’t fully understand the ways in which they benefited from Rockefeller’s machinations; it was the people who understood the costs — particularly the small businesses and farmers of the Midwest generally, and Ohio in particular — who raised a ruckus. They were the “small-scale and sporadic shippers” I referenced above, and the fact that they had to pay far more for railroad transportation in a Standard Oil world than they had in the previous period of speculation and over-investment caught the attention of politicians, particularly Senator John Sherman of Ohio.

Senator Sherman had not previously shown a huge amount of interest in the issue of monopoly and trusts, but he did have oft-defeated presidential aspirations, and seized on the discontent with Standard Oil and the railroads to revive a bill originally authored by a Vermont Senator named George Edmunds; the relevant sections of the Sherman Antitrust Act were short and sweet and targeted squarely at Standard Oil’s contractual machinations:

Sec. 1. Every contract, combination in the form of trust or otherwise, or conspiracy, in restraint of trade or commerce among the several States, or with foreign nations, is hereby declared to be illegal. Every person who shall make any such contract or engage in any such combination or conspiracy, shall be deemed guilty of a misdemeanor, and, on conviction thereof, shall be punished by fine not exceeding five thousand dollars, or by imprisonment not exceeding one year, or by both said punishments, at the discretion of the court.

Sec. 2. Every person who shall monopolize, or attempt to monopolize, or combine or conspire with any other person or persons, to monopolize any part of the trade or commerce among the several States, or with foreign nations, shall be deemed guilty of a misdemeanor, and, on conviction thereof, shall be punished by fine not exceeding five thousand dollars, or by imprisonment not exceeding one year, or by both said punishments, in the discretion of the court.

And so we arrive at Google.

The Google Case

Yesterday, from the Wall Street Journal:

A federal judge ruled that Google engaged in illegal practices to preserve its search engine monopoly, delivering a major antitrust victory to the Justice Department in its effort to rein in Silicon Valley technology giants. Google, which performs about 90% of the world’s internet searches, exploited its market dominance to stomp out competitors, U.S. District Judge Amit P. Mehta in Washington, D.C. said in the long-awaited ruling.

“Google is a monopolist, and it has acted as one to maintain its monopoly,” Mehta wrote in his 276-page decision released Monday, in which he also faulted the company for destroying internal messages that could have been useful in the case. Mehta agreed with the central argument made by the Justice Department and 38 states and territories that Google suppressed competition by paying billions of dollars to operators of web browsers and phone manufacturers to be their default search engine. That allowed the company to maintain a dominant position in the sponsored text advertising that accompanies search results, Mehta said.

While there have been a number of antitrust laws passed by Congress, most notably the Clayton Antitrust Act of 1914 and Federal Trade Commission Act of 1914, the Google case is directly downstream of the Sherman Act, specifically Section 2, and its associated jurisprudence. Judge Mehta wrote in his 286-page opinion:

“Section 2 of the Sherman Act makes it unlawful for a firm to ‘monopolize.’” United States v. Microsoft, 253 F.3d 34, 50 (D.C. Cir. 2001) (citing 15 U.S.C. § 2). The offense of monopolization requires proof of two elements: “(1) the possession of monopoly power in the relevant market and (2) the willful acquisition or maintenance of that power as distinguished from growth or development as a consequence of a superior product, business acumen, or historic accident.” United States v. Grinnell Corp., 384 U.S. 563, 570–71 (1966).

Note that Microsoft reference: the 1990s antitrust case provides the analytical framework Mehta used in this case.

The court structures its conclusions of law consistent with Microsoft’s analytical framework. After first summarizing the principles governing market definition, infra Section II.A, the court in Section II.B addresses whether general search services is a relevant product market, and finding that it is, then evaluates in Section II.C whether Google has monopoly power in that market. In Part III, the court considers the three proposed advertiser-side markets. The court finds that Plaintiffs have established two relevant markets — search advertising and general search text advertising — but that Google possesses monopoly power only in the narrower market for general search text advertising. All parties agree that the relevant geographic market is the United States.

The court then determines whether Google has engaged in exclusionary conduct in the relevant product markets. Plaintiffs’ primary theory centers on Google’s distribution agreements with browser developers, OEMs, and carriers. The court first addresses in Part IV whether the distribution agreements are exclusive under Microsoft. Finding that they are, the court then analyzes in Parts V and VI whether the contracts have anticompetitive effects and procompetitive justifications in each market. For reasons that will become evident, the court does not reach the balancing of anticompetitive effects and procompetitive justifications. Ultimately, the court concludes that Google’s exclusive distribution agreements have contributed to Google’s maintenance of its monopoly power in two relevant markets: general search services and general search text advertising.

I find Mehta’s opinion well-written and exhaustive, but the decision is ultimately as simple as the Sherman Act: Google acquired a monopoly in search through innovation, but having achieved a monopoly, it is forbidden from extending that monopoly through the use of contractual arrangements like the default search deals it has with browser developers, device makers, and carriers. That’s it!

Aggregators and Contracts

To me this simplicity is the key to the case, and why I argued from the get-go that the Department of Justice was taking a far more rational approach to prosecuting a big tech monopoly than the FTC or European Commission had been. From a 2020 Stratechery Article entitled United States v. Google:

The problem with the vast majority of antitrust complaints about big tech generally, and online services specifically, is that Page is right [about competition only being a click away]. You may only have one choice of cable company or phone service or any number of physical goods and real-world services, but on the Internet everything is just zero marginal bits.

That, though, means there is an abundance of data, and Google helps consumers manage that abundance better than anyone. This, in turn, leads Google’s suppliers to work to make Google better — what is SEO but a collective effort by basically the entire Internet to ensure that Google’s search engine is as good as possible? — which attracts more consumers, which drives suppliers to work even harder in a virtuous cycle. Meanwhile, Google is collecting information from all of those consumers, particularly what results they click on for which searches, to continuously hone its accuracy and relevance, making the product that much better, attracting that many more end users, in another virtuous cycle:

Google benefits from two virtuous cycles

One of the central ongoing projects of this site has been to argue that this phenomenon, which I call Aggregation Theory, is endemic to digital markets…In short, increased digitization leads to increased centralization (the opposite of what many originally assumed about the Internet). It also provides a lot of consumer benefit — again, Aggregators win by building ever better products for consumers — which is why Aggregators are broadly popular in a way that traditional monopolists are not…

The solution, to be clear, is not simply throwing one’s hands up in the air and despairing that nothing can be done. It is nearly impossible to break up an Aggregator’s virtuous cycle once it is spinning, both because there isn’t a good legal case to do so (again, consumers are benefitting!), and because the cycle itself is so strong. What regulators can do, though, is prevent Aggregators from artificially enhancing their natural advantages…

That is exactly what this case was about:

This is exactly why I am so pleased to see how narrowly focused the Justice Department’s lawsuit is: instead of trying to argue that Google should not make search results better, the Justice Department is arguing that Google, given its inherent advantages as a monopoly, should have to win on the merits of its product, not the inevitably larger size of its revenue share agreements. In other words, Google can enjoy the natural fruits of being an Aggregator, it just can’t use artificial means — in this case contracts — to extend that inherent advantage.

I laid out these principles in 2019’s A Framework for Regulating Competition on the Internet, and it was this framework that led me to support the DOJ’s case initially, and applaud Judge Mehta’s decision today.

Mehta’s decision, though, is only about liability: now comes the question of remedies, and the truly difficult questions for me and my frameworks.

Friendly Google

The reason to start this Article with railroads and Rockefeller and the history of the Sherman Antitrust Act is not simply to provide context for this case; rather, it’s important to understand that antitrust is inherently political, which is another way of saying it’s not some sort of morality play with clearly distinguishable heroes and villains. In the case of Standard Oil, the ultimate dispute was between the small business owners and farmers of the Midwest and city dwellers who could light their homes thanks to cheap kerosene. To assume that Rockefeller was nothing but a villain is to deny the ways in which his drive for efficiency created entirely new markets that resulted in large amounts of consumer welfare; moreover, there is an argument that Standard Oil actually benefited its political enemies as well, by stabilizing and standardizing the railroad industry that they ultimately resented paying for.

Indeed, there are some who argue, even today, that all of antitrust law is misguided, because like all centralized interference in markets, it fails to properly balance the costs and benefits of interference with those markets. To go back to the Standard Oil example, those who benefited from cheap kerosene were not politically motivated to defend Rockefeller, but their welfare was in fact properly weighed by the market forces that resulted in Rockefeller’s dominance. Ultimately, though, this is a theoretical argument, because politics do matter, and Sherman tapped into a deep and longstanding discomfort in American society with dominant entities like Standard Oil then, and Google today; that’s why the Sherman Antitrust Act passed by a vote of 51-1 in the Senate, and 242-0 in the House.

Then something funny happened: Standard Oil was indeed prosecuted under the Sherman Antitrust Act, and ordered to be broken up into 34 distinct companies; Rockefeller had long since retired from active management at that point, but still owned 25% of the company, and thus 25% of the post-breakup companies. Those companies, once listed, were soon worth double what they had been as Standard Oil; Rockefeller ended up richer than ever. Moreover, it was those companies, like Exxon, that ended up driving a massive increase in oil production by expanding from refining to exploration all over the world.

The drivers of that paradox are evident in the consideration of remedies for Google. One possibility is a European Commission-style search engine choice screen for consumers setting up new browsers or devices: is there any doubt that the vast majority of people will choose Google, meaning Google keeps its share and gets to keep the money it gives Apple and everyone else? Another is that Google is barred from bidding for default placement, but other search engines can: that will put entities like Apple in the uncomfortable position of either setting what it considers the best search engine as the default, and making no money for doing so, or prioritizing a revenue share from an also-ran like Bing — and potentially seeing customers go to Google anyways. The iPhone maker could even go so far as to build its own search engine, and seek to profit directly from the search results driven by its distribution advantage, but that entails tremendous risk and expense, and potentially favors Android.

That, though, was the point: the cleverness of Google’s strategy was its focus on making friends instead of enemies, thus motivating Apple in particular to not even try. I told Michael Nathanson and Craig Moffett as much in a recent interview, when asked why Apple doesn’t build a search engine:

Apple already has a partnership with Google, the Google-Apple partnership is really just a beautiful manifestation of how, don’t-call-them-monopolies, can really scratch each other’s back in a favorable way, such that Google search makes up something like 17% or 20% of Apple’s profit, it’s basically pure profit for Apple and people always talk about, “When’s Apple going to make a search engine?” — the answer is never. Why would they? They get the best search engine, they get profits from that search engine without having to drop a dime of investment, they get to maintain their privacy brand and say bad things about data-driven advertising, while basically getting paid by what they claim to hate, because Google is just laundering it for them. Google meanwhile gets the scale, there’s no points of entry for potential competition, it makes total sense.

This wasn’t always Google’s approach; in the early years of the smartphone era the company had designs on Android surpassing the iPhone, and it was a whole-company effort. That, mistakenly in my view, at least from a business perspective, meant using Google’s services — specifically Google Maps — to differentiate Android, including shipping turn-by-turn directions on Android only, and demanding huge amounts of user data from Apple to maintain an inferior product for the iPhone.

Apple’s response was shocking at the time: the company would build its own Maps product, even though that meant short term embarrassment. It was also effective, as evidenced by testimony in this case. From Bloomberg last fall:

Two years after Apple Inc. dropped Google Maps as its default service on iPhones in favor of its own app, Google had regained only 40% of the mobile traffic it used to have on its mapping service, a Google executive testified in the antitrust trial against the Alphabet Inc. company. Michael Roszak, Google’s vice president for finance, said Tuesday that the company used the Apple Maps switch as “a data point” when modeling what might happen if the iPhone maker replaced Google’s search engine as the default on Apple’s Safari browser.

The lesson Google learned was that Apple’s distribution advantages mattered a lot, which by extension meant it was better to be Apple’s friend than its enemy. I wrote in an Update after that revelation:

This does raise a question I get frequently: how can I argue that Google wins by being better when it is willing to pay for default status? I articulated the answer on a recent episode of Sharp Tech, but briefly, nothing exists in a vacuum: defaults do matter, and that absolutely impacts how much better you have to be to force a switch. In this case Google took the possibility off of the table completely, and it was a pretty rational decision in my mind.

It also, without question, reduced competition in the space, which is why I always thought this was a case worth taking to court. This is in fact a case where I think even a loss is worthwhile, because I find contracts between Aggregators to be particularly problematic. Ultimately, though, my objection to this arrangement is just as much, if not more, about Apple and its power. They are the ones with the power to set the defaults, and they are the ones taking the money instead of competing; it’s hard to fault Google for being willing to pay up.

Tech companies, particularly advertising-based ones, famously generate huge amounts of consumer surplus. Yes, Google makes a lot of money showing you ads, but even at a $300+ billion run rate, the company is surely generating far more value for consumers than it is capturing. That is in itself some degree of defense for the company, I should note, much as Standard Oil brought light to every level of society; what is notable about these contractual agreements, though, is how Google has been generating surplus for everyone else in the tech industry.

Maybe this is a good thing; it’s certainly good for Mozilla, which gets around 80% of its revenue from its Google deal. It has been good for device makers, commoditized by Android, who have an opportunity for scraps of profit. It has certainly been profitable for Apple, which has seen its high-margin services revenue skyrocket, thanks in part to the ~$20 billion per year of pure profit it gets from Google without needing to make any level of commensurate investment.

Enemy Remedies

However, has it been good for Google, not just in terms of the traffic acquisition costs it pays out, but also in terms of the company’s maintenance of the drive that gave it its dominant position in the first place? It’s a lot easier to pay off your would-be competitors than it is to innovate. I’m hesitant to say that antitrust is good for its subjects, but Google does make you wonder.

Most importantly, has it been good for consumers? This is where the Apple Maps example looms large: Apple has shown it can compete with Google if it puts resources behind a project it considers core to the iPhone experience. By extension, the entire reason why Google favored Google Maps in the first place, leaving Apple no choice but to compete, is that it was seeking to advantage Android relative to the iPhone. Both competitions drove large amounts of consumer benefit that persist to this day.

I would also note that the behavior I am calling for — more innovation and competition, not just from Google’s competitors, but Google itself — is the exact opposite of what the European Union is pushing for, which is product stasis. I think the E.U. is mistaken for the exact same reasons I think Judge Mehta is right.

There’s also the political point: I am an American, and I share the societal sense of discomfort with dominant entities that made the Sherman Antitrust Act law in the first place; yes, it’s possible this decision doesn’t mean much in the end, but it’s pushing in a direction that is worth leaning into.

This is why, ultimately, I am comfortable with the implications of my framework, and why I think the answer to the remedy question is an injunction against Google making any sort of payments or revenue share for search; if you’re a monopoly you don’t get to extend your advantage with contracts, period (now do most-favored nation clauses). More broadly, we tend to think of monopolies as being mean; the problem with Aggregators is they have the temptation to be too nice. It has been very profitable to be Google’s friend; I think consumers — and Google — are better off if the company has a few more enemies.

I wrote a follow-up to this Article in this Daily Update.

Crashes and Competition

This Article is available as a video essay on YouTube


I’ve long maintained that if the powers-that-be understood what the Internet’s impact would be, they would have never allowed it to be created. It’s hard to accuse said shadowy figures of negligence, however, given how clueless technologists were as well; look no further than an operating system like Windows.

Windows was, from the beginning, well and truly open: 3rd-party developers could do anything, including “patching the kernel”; to briefly summarize:

  • The “kernel” of an operating system is the core of the operating system, the function of which is to manage the actual hardware of a computer. All software running in the kernel is fully privileged, which is to say it operates without any restrictions. If software crashes in the kernel, the entire computer crashes.
  • Everything else on a computer runs in “user space”; user space effectively sits on top of the kernel, and depends on APIs provided by the operating system maker to ask software in kernel space to actually interface with the hardware on its behalf. If software crashes in user space, the rest of the computer is fine.

This is a drastically simplified explanation; in some operating systems there are levels of access between kernel space and user space for things like drivers (which need direct access to the hardware they are controlling, but not necessarily hardware access to the entire computer), and on the other side of things significant limitations on software in user space (apps, for example, might be “sandboxed” and unable to access other information on the computer, even if it is in user space).
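A loose user-space analogy for this isolation, sketched in Python: a crash in a subprocess does not bring down the parent process, just as a user-space crash leaves the rest of the machine running — whereas kernel-space code has no such safety net. (This is OS-enforced process isolation rather than literal kernel vs. user space, and the exit code is arbitrary; it is purely illustrative.)

```python
import subprocess
import sys

# Run a "program" that dies immediately, in its own isolated process;
# the exit code 139 is a hypothetical stand-in for a hard crash.
crashing_code = "import os; os._exit(139)"
result = subprocess.run([sys.executable, "-c", crashing_code])

print(result.returncode)       # the child process died...
print("parent still running")  # ...but the parent is unaffected
```

When a component instead runs inside the kernel, there is no boundary like this to contain the failure: the crash takes the whole machine with it.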

The key point for purposes of this Article, though, is that Windows allowed access to both kernel space and user space; yes, Microsoft certainly preferred that developers only operate in user space, and it never officially supported applications that patched the kernel, but the reality is that operating in kernel space is far more powerful, and so a lot of developers did just that.

Security Companies and Kernel Access

Security companies are an example of developers with a legitimate argument for access to kernel space. Windows’ openness extended beyond easy access to kernel space, and not all developers were good actors, which is why sandboxing became a core security feature of newer operating systems like iOS: virus and malware makers on Windows in particular would leverage easy access to other programs to infiltrate computers and render them nigh-on unusable at best, and exfiltrate data or conscript the computers they took over to attack others at worst.

The goal of security software like antivirus programs or malware scanners was to catch these bad actors and eliminate them; the best way to do so was to patch the kernel and so operate at the lowest, most powerful layer of Windows, with full visibility and access to every other program running on the computer. And, to be clear, in the 2000s, when viruses and malware were at their peak, this was very much necessary — and necessary is another way of saying this was a clear business opportunity.

Two of the companies seizing this opportunity in the 2000s were Symantec and McAfee; both reacted with outrage in 2005 and 2006 when Microsoft, in the run-up to the release of Windows Vista, introduced PatchGuard. PatchGuard was aptly named: it guarded the kernel from being patched by 3rd-parties, with the goal of increasing security. This, though, was a threat to Symantec and McAfee; George Samenuk, CEO of the latter, released an open letter that stated:

Over the years, the most reliable defenders against the many, many vulnerabilities in the Microsoft operating systems have been the independent security companies such as McAfee. Yet, if Microsoft succeeds in its latest effort to hamstring these competitors, computers everywhere could be less secure. Computers are more secure today, thanks to relentless innovations by the security providers. Microsoft also has helped by allowing these companies’ products full access to system resources — this has enabled the security products to better “see” threats and deploy defenses against viruses and other attacks.

With its upcoming Vista operating system, Microsoft is embracing the flawed logic that computers will be more secure if it stops cooperating with the independent security firms. For the first time, Microsoft shut off security providers’ access to the core of its operating system – what is known as the “kernel”.

At the same time, Microsoft has firmly embedded in Vista its own Windows Security Center — a product that cannot be disabled even when the user purchases an alternative security solution. This approach results in confusion for customers and prevents genuine freedom of choice. Microsoft seems to envision a world in which one giant company not only controls the systems that drive most computers around the world but also the security that protects those computers from viruses and other online threats. With only one approach protecting us all, when it fails, it fails for 97% of the world’s desktops.

Symantec, meanwhile, went straight to E.U. regulators, making the case that Microsoft, already in trouble over its inclusion of Internet Explorer in the 90s, and Windows Media Player in the early 2000s, was unfairly limiting competition for security offerings. The E.U. agreed and Microsoft soon backed down; from Silicon.com in 2006:

Microsoft has announced it will give security software makers technology to access the kernel of 64-bit versions of Vista for security-monitoring purposes. But its security rivals remain as yet unconvinced. Redmond also said it will make it possible for security companies to disable certain parts of the Windows Security Center in Vista when a third-party security console is installed. Microsoft made both changes in response to antitrust concerns from the European Commission. Led by Symantec, the world’s largest antivirus software maker, security companies had publicly criticised Microsoft over both Vista features and also talked to European competition officials about their gripes.

Fast forward nearly two decades: Symantec and McAfee are still around, but a new wave of cloud-based security companies now dominates the space, including CrowdStrike. Windows is much more secure than it used to be, but after the disastrous 2000s a wave of regulations was imposed on companies, requiring them to adhere to a host of requirements that are best met by subscribing to an all-in-one solution that checks all of the relevant boxes; CrowdStrike fits the bill. What is the same is kernel-level access, and that brings us to last week’s disaster.

The CrowdStrike Crash

On Friday, from The Verge:

Thousands of Windows machines are experiencing a Blue Screen of Death (BSOD) issue at boot today, impacting banks, airlines, TV broadcasters, supermarkets, and many more businesses worldwide. A faulty update from cybersecurity provider CrowdStrike is knocking affected PCs and servers offline, forcing them into a recovery boot loop so machines can’t start properly. The issue is not being caused by Microsoft but by third-party CrowdStrike software that’s widely used by many businesses worldwide for managing the security of Windows PCs and servers.

On Saturday, from the CrowdStrike blog:

On July 19, 2024 at 04:09 UTC, as part of ongoing operations, CrowdStrike released a sensor configuration update to Windows systems. Sensor configuration updates are an ongoing part of the protection mechanisms of the Falcon platform. This configuration update triggered a logic error resulting in a system crash and blue screen (BSOD) on impacted systems. The sensor configuration update that caused the system crash was remediated on Friday, July 19, 2024 05:27 UTC. This issue is not the result of or related to a cyberattack.

In any massive failure there are a host of smaller errors that compound; in this case, CrowdStrike created a faulty file, failed to test it properly, and deployed it to its entire customer base in one shot, instead of rolling it out in batches. Doing something different at each one of these steps would have prevented the widespread failures that are still roiling the world (and will for some time to come, given that the fix requires individual action on every affected computer, since the computer can’t stay running long enough to run a remotely delivered fix).
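The last of those errors — deploying to the entire customer base in one shot — is the one with a well-understood antidote: a staged, or “canary”, rollout that pushes an update to a small batch first and halts the moment a batch fails. The sketch below is hypothetical (the function names, batch fractions, and health-check interface are all invented for illustration), not a description of CrowdStrike’s actual tooling:

```python
from typing import Callable, List, Sequence

def staged_rollout(hosts: List[str],
                   apply_update: Callable[[str], bool],
                   batch_fractions: Sequence[float] = (0.01, 0.10, 0.50, 1.0)) -> List[str]:
    """Deploy in expanding waves; return the hosts successfully updated.
    apply_update returns False if the host fails its post-update health check."""
    updated: List[str] = []
    for fraction in batch_fractions:
        cutoff = int(len(hosts) * fraction)
        batch = [h for h in hosts[:cutoff] if h not in updated]
        for host in batch:
            if not apply_update(host):  # host crashed / failed health check
                return updated          # halt: the blast radius stays small
            updated.append(host)
    return updated

# A faulty update that crashes every host it touches...
hosts = [f"host-{i}" for i in range(1000)]
survivors = staged_rollout(hosts, apply_update=lambda h: False)
print(len(survivors))  # the rollout halts at the very first failed host
```

With a scheme like this, a file as broken as CrowdStrike’s would have disabled a handful of canary machines rather than millions; the cost is merely that healthy updates take longer to reach everyone.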

The real issue, though, is more fundamental: an erroneous configuration file in userspace crashes a program, but it doesn’t crash the computer; CrowdStrike, though, doesn’t run in userspace: it runs in kernel space, which means its bugs crash the entire computer — 8.5 million of them, according to Microsoft. Macs and Linux machines were not impacted, for a very obvious reason: Apple has long since locked 3rd-party software out of kernel space, and Linux security tools increasingly rely on safer in-kernel interfaces like eBPF rather than arbitrary kernel modules.

Microsoft, though, despite having tried to do just that in the 2000s, has its hands tied; from the Wall Street Journal:

A Microsoft spokesman said it cannot legally wall off its operating system in the same way Apple does because of an understanding it reached with the European Commission following a complaint. In 2009, Microsoft agreed it would give makers of security software the same level of access to Windows that Microsoft gets.

I wasn’t able to find the specifics of the agreement Microsoft made with the European Commission; the company did agree to implement a browser choice screen in December 2009, along with a commitment to interoperability for its “high-share software products”, including Windows. What I do know is that a complaint about kernel-level access was filed by Symantec, that Microsoft was under widespread antitrust pressure from regulators, and, well, that a mistake by CrowdStrike rendered millions of computers inoperable because CrowdStrike has kernel access.

Microsoft’s Handicap

On Friday afternoon, FTC Chair Lina Khan tweeted:

This is wrong on a couple of levels, but the ways in which it is wrong are worth examining because of what they mean for security specifically and tech regulation broadly.

First, this outage was the system working as regulators intended: 99% of Windows computers were not affected, just those secured by CrowdStrike; to go back to that 2006 open letter from the McAfee CEO:

We think customers large and small are right to rely on the innovation arising from the intense competition between diverse and independent security companies. Companies like McAfee have none of the conflicts of interest deriving from ownership of the operating system. We focus purely on security. Independent security developers have proven to be the most powerful weapon in the struggle against those who prey on weak computers. Computer users around the globe recognize that the most serious threats to security exist because of inherent weaknesses in the Microsoft operating system. We believe they should demand better of Microsoft.

For starters, customers should recognize that Microsoft is being completely unrealistic if, by locking security companies out of the kernel, it thinks hackers won’t crack Vista’s kernel. In fact, they already have. What’s more, few threats actually target the kernel – they target programs or applications. Yet the unfettered access previously enjoyed by security providers has been a key part of keeping those programs and applications safe from hackers and malicious software. Total access for developers has meant better protection for customers.

That argument may be correct; the question this episode raises, though, is what the appropriate level of abstraction is for evaluating risk. The McAfee CEO’s argument is that most threats target userspace, which is why security developers deserve access to kernel space to root them out; again, I think this argument is probably correct in a narrow sense — it was definitely correct in the malware-infested 2000s — but which is the bigger systemic problem: malware and viruses on functioning computers, or computers that can’t even turn on?

Second, while Khan’s tweets didn’t mention Microsoft specifically, it seems obvious that is the company she was referring to; after all, CrowdStrike, who was actually to blame, is apparently only on 1% of Windows PCs, which even by the FTC’s standards surely doesn’t count as “concentration.” In this Khan was hardly alone: the company that is taking the biggest public relations hit is Microsoft, and how could they not:

People around the world encountered images like these, both in person and on social media:

This tweet was a joke, but from Microsoft’s position, apt: if prison is the restriction of freedom by the authorities, well, then that is exactly what happened here, as regulators restricted Microsoft’s long-sought freedom to lock down kernel space.

To be clear, restricting access to kernel space would not have made an issue like this impossible: after all, Microsoft, by definition, will always have access to kernel space, and they could very well issue an update that crashes not just 1% of the world’s Windows computers, but all of them. This, though, raises the question of incentives: is there any company both more motivated and better equipped than Microsoft to not make this sort of mistake, given the price they are paying today for a mistake that wasn’t even their fault?

Regulating Progress

Cloudflare CEO Matthew Prince already anticipated the potential solution I am driving at, and wrote a retort on X:

Here’s the scary thing that’s likely to happen based on the facts of the day if we don’t pay attention. Microsoft, who competes with @CrowdStrike, will argue that they should lock all third-party security vendors out of their OS. “It’s the only way we can be safe,” they’ll testify before Congress.

But lest we forget, Microsoft themselves had their own eternal screw up where they potentially let a foreign actor read every customer’s email because they failed to adequately secure their session signing keys. We still have no idea how bad the implications of #EternalBlue are.

So pick your poison. Today CrowdStrike messed up and some systems got locked out. That sucks a measurable amount. On the other hand, if Microsoft runs the app and security then they mess up and you’ll probably still be able to check your email — because their incentive is to fail open — but you’ll never know who else could too. Not to mention your docs, apps, files, and everything else.

Today sucked, but better security isn’t consolidated security. It isn’t your application provider picking who your security vendor must be. It’s open competition across many providers. Because CrowdStrike had a bad day, but the solution isn’t to standardize on Microsoft.

And, if we do, then when they have a bad day it’ll make today look like a walk in the park.

Prince’s argument is ultimately an updated version of the one made by the McAfee CEO, and while I agree in theory, in this specific instance I disagree in practice: Microsoft opened up kernel access in Windows because the company didn’t know any better, but just because the company won in its market doesn’t mean decisions made decades ago must remain the norm forever.

This is a mistake that I think that regulators make regularly, particularly in Europe. Last week I wrote in the context of the European Commission’s investigation of X and blue checkmarks:

One of the critiques of European economies is how difficult it is to fire people; while the first-order intentions are obviously understandable, the critique is that companies underinvest in growth because there is so much risk attached to hiring: if you get the wrong person, or if expected growth doesn’t materialize, you are stuck. What is notable is how Europe seems to have decided on the same approach to product development: Google is expected to have 10 blue links forever, Microsoft can’t include a browser or shift the center of gravity of its business to Teams, Apple can’t use user data for Apple Intelligence, and, in this case, X is forever bound to the European Commission’s interpretation of what a blue check meant under previous ownership. Everything, once successful, must be forever frozen in time; ultimately, though, the E.U. only governs a portion of Europe, and the only ones stuck in the rapidly receding past — for better or worse! — will be the E.U.’s own citizens.

In this case, people all over the world suffered because Microsoft was never allowed to implement a shift in security that it knew was necessary two decades ago.

More broadly, regulators need to understand that everything is a trade-off. Apple is under fire for its App Store policies — which I too have been relentlessly critical of — but as I wrote in The E.U. Goes Too Far earlier this month:

Apple didn’t just create the iPhone, they also created the App Store, which, after the malware and virus muddled mess of the 2000s, rebuilt user confidence and willingness to download 3rd-party apps. This was a massive boon to developers, and shouldn’t be forgotten; more broadly, the App Store specifically and Apple’s iOS security model generally really do address real threats that can not only hurt users but, by extension, chill the market for 3rd-party developers.

I went on to explain how Apple has gone too far with this model, particularly with its policy choices in the App Store that seem to be motivated more by protecting App Store revenue than security (and why the European Commission was right to go after anti-steering policies in particular), but I included the excerpted paragraph as a reminder that these are hard questions.

What does seem clear to me is that the way to answer hard questions is not to seek to freeze technology in time, but rather to consider how many regulatory obsessions — including Windows dominance — are ultimately addressed by technology getting better, not by regulators treating mistaken assumptions (like operating system openness being an unalloyed good) as unchangeable grounds for competition.


The E.U. Goes Too Far

This Article is available as a video essay on YouTube


Apple’s increasingly fraught adventures with European regulators are a bit of a morality tale, which goes something like this:

The erstwhile underdog, arguably kept alive by its biggest rival to maintain a figment of competition in the PC market, rides unexpected success in music players to create the iPhone, the device that changed the world while making Apple one of the richest companies in history. Apple, though, seemingly unaware of its relative change in status and power, holds to its pattern — decried in that also-ran PC-era as the cause of their struggles — of total control, not just of the operating system its devices run, but also of 3rd-party applications that helped make the iPhone a behemoth. Ultimately, because of its refusal to compromise, regulators stepped in to force open the iPhone platform, endangering Apple’s core differentiation — the integration of hardware and software — along the way. If only Apple could have seen how the world — and its place in it! — had changed.

This morality tale is one I have chronicled, warned about, and perhaps even helped author over the last decade; like many morality tales, it is mostly true, but also like many morality tales, reality is a bit more nuanced, and filled with alternative morality tales that argue for different conclusions.

Europe’s Data Obsession

During the recent Stratechery break I was in Europe, and, as usual, was pretty annoyed by the terrible Internet experience endemic to the continent: every website has a bunch of regulatory-required pop-ups asking for permission to simply operate as normal websites, which means collecting the data necessary to provide whatever experience you are trying to access. This obviously isn’t a new complaint — I feel the same annoyance every time I visit.

What was different this time is that, for the first time in a while, I was traveling as a tourist with my family, and thus visiting things like museums, making restaurant reservations, etc.; what stood out to me was just how much information all of these entities wanted: seemingly every entity required me to make an account, share my mailing address, often my passport information, etc., just to buy a ticket or secure a table. It felt bizarrely old-fashioned, as if services like OpenTable or Resy didn’t exist, or even niceties like “Sign In With Google”; what exactly is a museum or individual restaurant going to do with so much of my personal information? I just want to see a famous painting or grab a meal!

Your first thought — and mine as well — might be that this is why all of those pop-ups exist: random entities are asking for a lot of my personal data, and I ought to have control of that. I certainly agree with the sentiment — if I lived in Europe and were assaulted with data requests from random entities with such frequency, I would feel similarly motivated — but obviously the implementation is completely broken: hardly anyone, no matter how disciplined about their data, has the time and motivation to read through every privacy policy or data declaration and jump through the hoops necessary to buy a ticket or reserve a table while figuring out the precise set of options necessary to do so without losing control of said data; you just hit “Accept” and buy the damn ticket.

My second thought — and this is certainly influenced by my more business-oriented trips to the continent — is that Europeans sure are obsessed with data generally. On another trip, I was in a forum about AI and was struck by the extent to which European business-people themselves were focused on data to the point where it seemed likely some number of their companies would miss out on potential productivity gains for fear of losing control of what they viewed as some sort of gold mine; the reality is that data is not the new oil: yes, it is valuable at scale and when processed in a data factory, but the entities capable of building such factories are on the scale of a Meta or a Google, not a museum or a restaurant or even a regional bank. I don’t think that AI has changed this equation: the goal of a business ought to be to leverage its data to deliver better business outcomes — which AI should make easier — not obsessively collect and hoard data as if it were a differentiator in its own right.

The third takeaway, though, is the most profound: the Internet experience in America is better because the market was allowed to work. Instead of a regulator mandating sites show pop-ups to provide some sort of false assurance about excess data collection, the vast majority of sites have long since figured out that (1) most of the data they might collect isn’t really that usable, particularly in light of the security risks in holding it, and (2) using third-party services is better both for the customer and themselves. Do you want a reservation? Just click a button or two, using the same service you use everywhere else; want to buy tickets? Just have a credit card, or even better, Apple Pay or Google Wallet.

Moreover, this realization extends to the data obsessives’ bugaboo: advertising. Yes, Internet advertising was a data disaster 15 years ago (i.e. the era where the European Internet seems stuck); the entities that did more than anyone to clean the situation up were in fact Meta and Google: sites and apps realized they could get better business results and reduce risk by essentially outsourcing all data collection and targeting to Meta and Google, completely cutting out the ecosystem of data brokers and unaccountable middlemen that defined the first 15 years of the Internet.

Google Shopping

It would be fine — annoying, to be sure, but ultimately fine — if this were where the story ended: the E.U. identifies an issue (excessive data collection) and reaches for a regulatory solution, locking in a terrible status quo, while the U.S.’s more market-oriented approach results in a better experience for users and better business outcomes for businesses. The E.U. is gonna E.U., amirite? After all, this is the regulatory body that somehow thought a browser choice screen would fundamentally alter the balance of power in technology (and to the extent that that old Microsoft decree did, it was to aid Google in locking in Chrome’s dominance).

The first hints that there may be more nefarious motivations behind E.U. regulation, though, came in a 2017 European Commission decision about Google Shopping, which I wrote about in Ends, Means, and Antitrust. The title referenced the high-level takeaway that while I understood the motivation and arguable necessity of regulating Aggregators (the ends), getting the details right mattered as well (the means), and I thought that case fell short for three reasons:

  • First, the Google Shopping decision assumed that Google ought never improve the user experience of Google search. Specifically, just because a search for a specific item once returned a shopping comparison site, it was hardly a crime that search improved such that it surfaced the specific item instead of a middleman. I argued that this was bad for users.
  • Second, the Google Shopping decision took an artificially narrow view of competition: those random shopping comparison sites that effectively arbitraged insufficiently precise search results in the 1990s were not the sort of competitors that actually delivered the market outcomes that regulation should try to enable; Google’s real competitor in shopping was Amazon and other similarly integrated retailers who were taking users away from search entirely.
  • Third, and most problematically, the Google Shopping product in question was actually an advertising unit: the European Commission was getting into the business of dictating how companies could or could not make money.

I wrote in that Article:

You can certainly argue that the tiny “Sponsored” label is bordering on dishonesty, but the fact remains that Google is being explicit about the fact that Google Shopping is a glorified ad unit. Does the European Commission honestly have a problem with that? The entire point of search advertising is to have the opportunity to put a result that might not otherwise rank in front of a user who has demonstrated intent.

The implications of saying this is monopolistic behavior go to the very heart of Google’s business model: should Google not be allowed to sell advertising against search results for fear that it is ruining competition? Take travel sites: why shouldn’t Priceline sue Google for featuring ads for hotel booking sites above its own results? Why should Google be able to make any money at all?

This is the aspect of the European Commission’s decision that I have the biggest problem with. I agree that Google has a monopoly in search, but as the Commission itself notes that is not a crime; the reality of this ruling, though, is that making any money off that monopoly apparently is. And, by extension, those that blindly support this decision are agreeing that products that succeed by being better for users ought not be able to make money.

Again, there is plenty of room to disagree about what regulations are and are not appropriate, or debate what is the best way to spur competition; the reason I reacted so negatively to this decision, though, was because this specific point struck me as being fundamentally anti-free market: Google was obligated to deliver a search experience on the European Commission’s terms or else. It was a bit of a subtle point, to be sure — the stronger argument was about the validity of product evolution in a way that makes for a better user experience — but recent events suggest I was right to be concerned.

Apple and the Core Technology Fee

Every year June provides a flurry of antitrust news — I guess I am not the only one that goes on vacation in July — and this year was no exception. The most unsurprising was about Apple; from a European Commission press release:

Today, the European Commission has informed Apple of its preliminary view that its App Store rules are in breach of the Digital Markets Act (DMA), as they prevent app developers from freely steering consumers to alternative channels for offers and content. In addition, the Commission opened a new non-compliance procedure against Apple over concerns that its new contractual requirements for third-party app developers and app stores, including Apple’s new “Core Technology Fee”, fall short of ensuring effective compliance with Apple’s obligations under the DMA.

The first item, about Apple’s anti-steering provisions, fits perfectly into the morality tale I opened this Article with. Apple didn’t just create the iPhone, they also created the App Store, which, after the malware and virus muddled mess of the 2000s, rebuilt user confidence and willingness to download 3rd-party apps. This was a massive boon to developers, and shouldn’t be forgotten; more broadly, the App Store specifically and Apple’s iOS security model generally do address real threats that can not only hurt users but, by extension, chill the market for 3rd-party developers.

At the same time, the implications of the App Store model and iOS’s locked-down nature mean that Apple’s power over the app ecosystem is absolute; this not only means that the company can extract whatever fees it likes from developers, it also hinders the development of apps, and especially business models, that don’t slot into Apple’s rules. I think it would have been prudent of the company to provide more of a release valve than web apps in Safari: I have long advocated that the company allow non-gaming apps to have webviews that provide alternative payment options of the developers’ choice; Apple instead went the other way, arguing ever more strenuously that developers can’t even talk about or link to their own websites if those websites provide alternative payment methods, and to the extent the company gave ground, it was in the most begrudging and clunky way possible.

These anti-steering rules are the first part of the European Commission’s case, and while I might quibble with some of the particulars, I mostly blame Apple for not self-regulating in this regard. Note, though, that the anti-steering case isn’t just about allowing links or pricing information; this is the third charge:

Whilst Apple can receive a fee for facilitating via the AppStore [sic] the initial acquisition of a new customer by developers, the fees charged by Apple go beyond what is strictly necessary for such remuneration. For example, Apple charges developers a fee for every purchase of digital goods or services a user makes within seven days after a link-out from the app.

As I explained in an Update last month, the charges the European Commission seems to be referring to are Apple’s new Core Technology Fee for apps delivered outside of the App Store, a capability which is required by the DMA. Apple’s longstanding argument is that the fees it charges in the App Store are — beyond the 3% that goes to payment providers — compensation for the intellectual property leveraged by developers to make apps; if it can’t collect those fees via a commission on purchases then the company plans to charge developers €0.50 per app install per year.

Now this is where the discussion gets a bit messy and uncomfortable. On one hand, I think that Apple’s policies — and, more irritatingly, its rhetoric — come across as arrogant and unfairly dismissive of the massive contribution that 3rd-party developers have made to the iPhone in particular; there’s a reason why Apple’s early iPhone marketing emphasized that There’s an App for That:

At the same time, to quote an Update I wrote about the Core Technology Fee:

There simply is no question that iOS is Apple’s intellectual property, and if they wish to charge for it, they can: for it to be otherwise would require taking their property and basically nationalizing it (while, I assume, demanding they continue to take care of it). It is frustrating that Apple has insisted on driving us to this fundamental reality, but reality it is, and the DMA isn’t going to change that.

It sure seems that this is the exact scenario that the European Commission is headed towards: demanding that Apple make its intellectual property available to third party developers on an ongoing basis without charge; again, while I think that Apple should probably do that anyways, particularly for apps that eschew the App Store entirely, I am fundamentally opposed to compelling a company to provide its services for free.

Meta and Pay or Consent

An even better example of the European Commission’s apparent dismissal of private property rights is Meta; from the Financial Times:

The European Commission, the EU’s executive body, is exercising new powers granted by the Digital Markets Act — legislation aimed at improving consumer choice and opening up markets for European start-ups to flourish. The tech giants had to comply from March this year. In preliminary findings issued on Monday, Brussels regulators said they were worried about Meta’s “pay or consent” model. Facebook and Instagram users can currently opt to use the social networks for free while consenting to data collection, or pay not to have their data shared.

The regulators said that the choice presented by Meta’s model risks giving consumers a false alternative, with the financial barrier potentially forcing them to consent to their personal data being tracked for advertising purposes.

From the European Commission’s press release:

The Commission takes the preliminary view that Meta’s “pay or consent” advertising model is not compliant with the DMA as it does not meet the necessary requirements set out under Article 5(2). In particular, Meta’s model:

  1. Does not allow users to opt for a service that uses less of their personal data but is otherwise equivalent to the “personalised ads” based service.
  2. Does not allow users to exercise their right to freely consent to the combination of their personal data.
    To ensure compliance with the DMA, users who do not consent should still get access to an equivalent service which uses less of their personal data, in this case for the personalisation of advertising.

Here is the problem with this characterization: there is no universe where a non-personalized version of Meta’s products is “equivalent” to a personalized version from a business perspective. Personalized ads are both far more valuable to advertisers, who only want to advertise to potential customers, not the entire Meta user base, and a better experience for users, who get more relevant ads instead of random nonsense that isn’t pertinent to them. Indeed, personalized ads are so valuable that Eric Seufert has estimated that charging a subscription in lieu of personalized ads would cost Meta 60% of its E.U. revenue; being forced to offer completely un-personalized ads would be far more injurious.

Clearly, though, the European Commission doesn’t care about Meta or its rights to offer its products on terms it chooses: demanding a specific business model that is far less profitable (and again, a worse user experience!) is once again a de facto nationalization (continentalization?) of private property. And, as a variation on an earlier point, while I don’t agree with the demonization of personalized ads, I do recognize the European Union’s prerogative and authority to insist that Meta offer an alternative; what is problematic here is seeking to ban the fairest alternative — direct payment by consumers — and thus effectively taking Meta’s property.

Nvidia and CUDA Integration

The final example is Nvidia; from Reuters:

Nvidia is set to be charged by the French antitrust regulator for allegedly anti-competitive practices, people with direct knowledge of the matter said, making it the first enforcer to act against the computer chip maker. The French so-called statement of objections or charge sheet would follow dawn raids in the graphics cards sector in September last year, which sources said targeted Nvidia. The raids were the result of a broader inquiry into cloud computing.

The world’s largest maker of chips used both for artificial intelligence and for computer graphics has seen demand for its chips jump following the release of the generative AI application ChatGPT, triggering regulatory scrutiny on both sides of the Atlantic. The French authority, which publishes some but not all its statements of objections to companies, and Nvidia declined comment. The company in a regulatory filing last year said regulators in the European Union, China and France had asked for information on its graphic cards. The European Commission is unlikely to expand its preliminary review for now, since the French authority is looking into Nvidia, other people with direct knowledge of the matter said.

The French watchdog in a report issued last Friday on competition in generative AI cited the risk of abuse by chip providers. It voiced concerns regarding the sector’s dependence on Nvidia’s CUDA chip programming software, the only system that is 100% compatible with the GPUs that have become essential for accelerated computing. It also cited unease about Nvidia’s recent investments in AI-focused cloud service providers such as CoreWeave. Companies risk fines of as much as 10% of their global annual turnover for breaching French antitrust rules, although they can also provide concessions to stave off penalties.

I have been writing for years about Nvidia’s integrated strategy, which entails spending huge amounts of resources on the freely available CUDA software ecosystem that only runs on Nvidia chips; it is an investment that is paying off in a major way today, as CUDA is the standard for creating AI applications, which provides a major moat for Nvidia’s chips (a moat which, I would note, has counter-intuitively become less meaningful even as Nvidia has dramatically risen in value). The existence of this moat — and the correspondingly high prices that Nvidia can charge — is a feature, not a bug: the chipmaker spent years and years grinding away on GPU-accelerated computing (and was frequently punished by the market for it), and the fact that the company is profiting so handsomely from an AI revolution it made possible is exactly the sort of reward anyone interested in innovation should be happy to see.

Once again, though, European regulators don’t seem to care about incentivizing innovation, and are walking down a path that seems likely to lead to de facto continentalization of private property: the logical penalty for Nvidia’s crime of investing in CUDA could very well be the forced separation of CUDA from Nvidia chips, which is to say simply taking away Nvidia’s property; the more “moderate” punishment could be ten percent of Nvidia’s worldwide revenue, despite the fact that France — and almost certainly the E.U. as a whole — provides nowhere close to ten percent of Nvidia’s revenue.

The Worldwide Regulator

That ten percent of worldwide revenue number may sound familiar: it is the same punishment allowed under the DMA, and it’s worth examining in its own right. Specifically, it’s bizarrely high: while Nvidia doesn’t break out revenue by geography, Meta has said that ten percent of its revenue comes from the E.U.; for Apple it’s only seven percent. In other words, the European Union is threatening to fine these U.S. tech giants more money than they make in the E.U. market in a year!
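The arithmetic is worth making explicit: because the fine is assessed on worldwide revenue, it can match or exceed a full year of E.U. revenue. A small illustration (the numbers are purely illustrative, not actual financials):

```python
def fine_to_eu_revenue_ratio(worldwide_revenue, eu_share, fine_rate=0.10):
    """Ratio of a worldwide-revenue fine to annual E.U. revenue.

    A result of 1.0 means the fine equals a full year of E.U. revenue;
    anything above 1.0 means the fine exceeds it.
    """
    fine = fine_rate * worldwide_revenue
    eu_revenue = eu_share * worldwide_revenue
    return fine / eu_revenue

# Meta: ~10% of revenue from the E.U. -> the fine equals E.U. revenue
print(fine_to_eu_revenue_ratio(100.0, 0.10))  # 1.0
# Apple: ~7% -> the fine is ~1.43x a full year of E.U. revenue
print(round(fine_to_eu_revenue_ratio(100.0, 0.07), 2))  # 1.43
```

Note that the worldwide revenue figure cancels out entirely: the ratio depends only on the E.U.’s share, which is exactly why a worldwide-revenue fine is so disproportionate for companies that earn most of their money elsewhere.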

The first thing to note is that the very existence of these threats should be considered outrageous by the U.S. government: an international entity is threatening not just to regulate U.S. companies within its borders (reasonable!), but to actually take revenue earned elsewhere in the world. It is very disappointing that the current administration is not only not standing up for U.S. companies, but actually appears to view the European Commission as an ally.

Second, just as Apple seems to have started to believe its own rhetoric about how developers need Apple, instead of acknowledging that it needs developers as well, the European Commission seems to have bought into its own spin that it is the world’s tech regulator; the fact that everyone encounters cookie permission banners is, admittedly, evidence that this is the case!

The truth is that the E.U.’s assumed power comes from the same dynamics that make U.S. tech companies so dominant. The fundamental structure of technology is that it is astronomically expensive to both develop and maintain, but those costs are more akin to capital costs like building a ship or factory; the marginal cost of serving users is zero (i.e. it costs a lot of money to serve users in the aggregate, but every additional user is zero marginal cost). It follows, then, that there is almost always benefit in figuring out how to serve more users, even if those users come with lower revenue opportunities.

A useful analogy is to the pharmaceutical industry; Tyler Cowen wrote in a provocative post on Marginal Revolution entitled What is the gravest outright mistake out there?:

I am not referring to disagreements, I mean outright mistakes held by smart, intelligent people. Let me turn over the microphone to Ariel Pakes, who may someday win a Nobel Prize:

Our calculations indicate that currently proposed U.S. policies to reduce pharmaceutical prices, though particularly beneficial for low-income and elderly populations, could dramatically reduce firms’ investment in highly welfare-improving R&D. The U.S. subsidizes the worldwide pharmaceutical market. One reason is U.S. prices are higher than elsewhere.

That is from his new NBER working paper. That is supply-side progressivism at work, but shorn of the anti-corporate mood affiliation.

I do not believe we should cancel those who want to regulate down prices on pharmaceuticals, even though likely they will kill millions over time, at least to the extent they succeed. (Supply is elastic!) But if we can like them, tolerate them, indeed welcome them into the intellectual community, we should be nice to others as well. Because the faults of the others probably are less bad than those who wish to regulate down the prices of U.S. pharmaceuticals.

Cowen’s point is that while many countries aggressively regulate the price of pharmaceuticals, you ultimately need a profit motive to justify the massive up-front cost of developing new drugs; that profit comes from the U.S. market. The reason this all works is that the actual production of drugs is similar to technology: once a drug is approved, every marginal pill is effectively free to produce; this means that it is worth selling drugs at highly regulated prices, but you still need a reason to develop new drugs as well.
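Cowen’s argument can be sketched with toy numbers (every figure below is invented purely to illustrate the structure of the argument): selling at regulated prices is still worth doing, because each pill carries a positive contribution margin, but only the unregulated market is large enough to cover the fixed R&D cost.

```python
# Toy numbers, purely illustrative of the structure of the argument.
RND_COST = 1_000_000_000        # fixed cost to develop and approve a drug
MARGINAL_COST_PER_PILL = 0.10   # near-zero cost per additional pill

def contribution(price: float, pills: int) -> float:
    """Profit contribution from one market, ignoring the fixed R&D cost."""
    return (price - MARGINAL_COST_PER_PILL) * pills

us = contribution(price=5.00, pills=300_000_000)          # unregulated prices
regulated = contribution(price=0.50, pills=500_000_000)   # price-controlled markets

# Selling in regulated markets is still worth it (positive contribution)...
assert regulated > 0
# ...but only the U.S. contribution is large enough to cover the R&D cost.
assert us > RND_COST > regulated
```

This is why regulating every market down to marginal-cost-plus pricing would be so destructive: each individual sale still looks rational, but the fixed cost of the next drug never gets funded.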

So it is with technology; to take the Meta example above, the company may very well be brought to heel with regard to offering a non-personalized-ads business model: Facebook and Instagram and WhatsApp are already developed, operations teams already exist, data centers are already built, so maybe a fraction of potential revenue will be worth continuing to offer those services in the E.U. Moreover, there is the matter of network effects: should Meta leave the E.U. it would make its services worse for non-E.U. users by reducing the size of the network on its services.

This is the point, though: the E.U.’s worldwide regulatory power is ultimately derived from the structure of technology, and structures can change. This is where that ten-percent-of-worldwide-revenue figure looms large: it fundamentally changes the calculus in terms of costs. Fines from a regional regulator are not like engineering and server costs that you’re already paying anyway (costs which mean you might as well capture pennies of extra revenue from said region); they are incurred specifically because you serve that region. In other words, they are more like marginal costs: the marginal cost of serving the E.U. is the expected value of the chance you will get fined more than you earned in any given year, and for big tech that price is going up.
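A hypothetical sketch of that calculus, with invented numbers: once fines scale with worldwide revenue and region-specific engineering costs become real, the expected value of serving the region can flip negative.

```python
# Hypothetical figures illustrating the decision structure, not any
# company's actual numbers.
def expected_value_of_serving(eu_revenue: float,
                              incremental_cost: float,
                              p_fine: float,
                              fine: float) -> float:
    """Expected annual value of continuing to serve the region."""
    return eu_revenue - incremental_cost - p_fine * fine

# When serving the region was near-zero marginal cost and fines were small,
# staying was trivially positive:
assert expected_value_of_serving(10.0, 0.1, 0.05, 1.0) > 0

# A fine keyed to *worldwide* revenue (here, larger than E.U. revenue itself)
# plus real compliance and engineering costs can flip the sign:
assert expected_value_of_serving(10.0, 3.0, 0.5, 15.0) < 0
```

The point of the sketch is that the fine term is proportional to a number the region does not generate, which is exactly why it functions as a region-specific marginal cost rather than a shared fixed cost.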

That’s not the only cost that is going up for Apple in particular: part of the implication of the “Core Technology Fee” model is that Apple has put forth a tremendous amount of engineering effort to adapt its platform to the DMA specifically. Or, to put it another way, Apple has already forked iOS: there is one version for the E.U., and one version for the rest of the world. This too dramatically changes the calculus: yes, every E.U. user comes in at zero marginal cost, but not the E.U. as a whole: Apple isn’t just paying the expected value of future fines, but actual real costs in terms of engineering time and overall complexity.

In short, the E.U. has either crossed or is about to cross a critical line in terms of overplaying its hand: yes, most of tech may have been annoyed by its regulations, but the economic value of having one code base for the entire world meant that everyone put up with them (including users outside of the E.U.); once that code base splits, though — as it recently did for Apple — the calculation of whether or not to even serve E.U. users becomes that much closer; dramatically increasing potential fines far beyond what the region is worth only exacerbates the issue.

E.U. Losses

I don’t, for the record, think that either Meta or Apple or any of the other big tech companies — with the exception of Nvidia above — are going to leave Europe. What will happen more often, though, are things like this; from Bloomberg:

Apple Inc. is withholding a raft of new technologies from hundreds of millions of consumers in the European Union, citing concerns posed by the bloc’s regulatory attempts to rein in Big Tech. The company announced Friday that it would block the release of Apple Intelligence, iPhone Mirroring and SharePlay Screen Sharing from users in the EU this year, because the Digital Markets Act allegedly forces it to downgrade the security of its products and services.

“We are concerned that the interoperability requirements of the DMA could force us to compromise the integrity of our products in ways that risk user privacy and data security,” Apple said in a statement.

The EU’s DMA forces dominant technology platforms to abide by a long list of do’s and don’ts. Tech services are prohibited from favoring their own offerings over those of rivals. They’re barred from combining personal data across their different services; blocked from using information they collect from third-party merchants to compete against them; and have to allow users to download apps from rival platforms. As part of the rules, the EU has designated six of the biggest tech companies as “gatekeepers” — powerful platforms that require stricter scrutiny. In addition to Apple, that list includes Microsoft Corp., Google parent Alphabet Inc. and Facebook owner Meta Platforms Inc.

I explained in this Daily Update why this move from Apple was entirely rational, including specific provisions in the DMA that seem to prohibit the features Apple is withholding, in some cases due to interoperability requirements (does iPhone screen-sharing have to work with Windows?), and in others due to restrictions on data sharing (the DMA forbids sharing any user data held by the core platform with a new service built by the core platform); in truth, though, I don’t think Apple needs a good reason: given the calculations above it seems foolish to make any additional investments in E.U.-specific code bases.

European Commission Vice-President Margrethe Vestager was not happy, saying at Forum Europa:

I find that very interesting that they say we will now deploy AI where we’re not obliged to enable competition. I think that is that is the most sort of stunning open declaration that they know 100% that this is another way of disabling competition where they have a stronghold already.

This makes absolutely no sense: Apple not deploying AI in fact opens them up to competition, because their phones will be less fully featured than they might be otherwise! Vestager should be taking a victory lap, if her position made a shred of sense. In fact, though, she is playing the thieving fool like other Europeans before her; from 2014’s Economic Power in the Age of Abundance:

At first, or second, or even third glance, it’s hard to not shake your head at European publishers’ dysfunctional relationship with Google. Just this week a group of German publishers started legal action against the search giant, demanding 11 percent of all revenue stemming from pages that include listings from their sites. From Danny Sullivan’s excellent Search Engine Land:

German news publishers are picking up where the Belgians left off, a now not-so-proud tradition of suing Google for being included in its listings rather than choosing to opt-out. This time, the publishers want an 11% cut of Google’s revenue related to them being listed.

As Sullivan notes, Google offers clear guidelines for publishers who do not want to be listed, or simply do not want content cached. The problem, though, as a group of Belgian newspapers found out, is that not being in Google means a dramatic drop in traffic:

Back in 2006, Belgian news publishers sued Google over their inclusion in Google News, demanding that Google remove them. They never had to sue; there were mechanisms in place where they could opt-out.

After winning the initial suit, Google dropped them as demanded. Then the publications, watching their traffic drop dramatically, scrambled to get back in. When they returned, they made use of the exact opt-out mechanisms (mainly just to block page caching) that were in place before their suit, which they could have used at any time.

In the case of the Belgian publishers in particular, it was difficult to understand what they were trying to accomplish. After all, isn’t the goal more page views (it certainly was in the end!)? The German publishers in this case are being a little more creative: like the Belgians before them they are alleging that Google benefits from their content, but instead of risking their traffic by leaving Google, they’re instead demanding Google give them a cut of the revenue they feel they deserve.

Vestager’s comments about Apple Intelligence are foolish like the Belgian publishers: she is mad that Apple isn’t giving her what she claims to want, which is a weaker Apple more susceptible to competition; expect this to be the case for lots of new features going forward. More importantly, expect this to be the case for lots of new companies: Apple and Meta will probably stay in the E.U. because they’re already there; it seems increasingly foolish for newer companies to ever even bother entering. That, more than anything, is why Apple and Meta and the other big tech companies won’t face competition even as they are forced to weaken their product offerings.

It is the cases above, though, that are thieving like the German publishers: it is one thing to regulate a market; it is another to straight up take a product or service on your terms, enabled by a company’s loyalty to its existing userbase. I might disagree with a lot of E.U. regulations, but I respect them and their right to make them; dictating business models or forcing a company to provide services for free, though, crosses the line from regulation to theft.

The E.U. Morality Tale

And so we arrive at another morality tale, this time about the E.U. Thierry Breton, the European Commissioner for Internal Market, celebrated the passage of the AI Act late last year with a tweet hailing it as a launchpad for E.U. startups and researchers to lead the world in AI.

Here’s the problem with leading the world in regulation: you can only regulate what is built, and the E.U. doesn’t build anything pertinent to technology.1 It’s easy enough to imagine this tale being told in a few years’ time:

The erstwhile center of civilization, long-since surpassed by the United States, leverages the fundamental nature of technology to regulate the Internet for the entire world. The powers that be, though, seemingly unaware that their power rested on the zero marginal cost nature of serving their citizens, made such extreme demands on U.S. tech companies that they artificially raised the cost of serving their region beyond the expected payoff. While existing services remained in the region due to loyalty to their existing customers, the region received fewer new features and new companies never bothered to enter, raising the question: if a regulation is passed but no entity exists that is covered by the regulation, does the regulation even exist? If only the E.U. could have seen how the world — and its place in it! — had changed.

Or, to use Breton’s description, a launchpad without a rocket is just a burned out piece of concrete.

I wrote a follow-up to this Article in this Daily Update.



  1. French startup Mistral is building compelling large language models; the Nvidia action in particular could be fatal to its future.

Summer Break: Week of July 1st

Stratechery is on summer break the week of July 1. There will be no Weekly Article or Updates. The next Update will be on Monday, July 8.

In addition, the next episode of Dithering will be on Tuesday, July 9 and the next episode of Sharp Tech will be on Thursday, July 11. Sharp China will also return the week of July 8.

The full Stratechery posting schedule is here.

Apple Intelligence is Right On Time

This Article is available as a video essay on YouTube


Apple’s annual Worldwide Developer Conference keynote kicks off in a few hours, and Mark Gurman has extensive details of what will be announced in Bloomberg, including the name: “Apple Intelligence”. As John Gruber noted on Daring Fireball:

His report reads as though he’s gotten the notes from someone who’s already watched Monday’s keynote. I sort of think that’s what happened, given how much of this no one had reported before today. Bloomberg’s headline even boldly asserts “Here’s Everything Apple Plans to Show at Its AI-Focused WWDC Event”. I’m only aware of one feature for one platform that isn’t in his report, but it’s not a jaw-dropper, so I wouldn’t be surprised if it was simply beneath his threshold for newsworthiness. Look, I’m in the Apple media racket, so I know my inside-baseball obsessions are unusual, but despite all the intriguing nuggets Gurman drops in this piece, the thing I’m most insatiably curious about is how he got all this. Who spilled? By what means? It’s extraordinary. And don’t think for a second it’s a deliberate leak. Folks inside Apple are, I assure you, furious about this, and incredulous that one of their own colleagues would leak it to Gurman.

The irony of the leak being so huge is that nothing is particularly surprising: Apple is announcing and incorporating generative AI features throughout its operating systems and making them available to developers. Finally, the commentariat exclaims! Apple is in danger of falling dangerously behind! The fact they are partnering with OpenAI is evidence of how desperate they are! In fact, I would argue the opposite: Apple is not too late, they are taking the correct approach up and down the stack, and are well-positioned to be one of AI’s big winners.

Apple’s Business Model

Start with the most basic analysis of Apple’s business: despite all of the (legitimate) talk about Services revenue, Apple remains a hardware company at its core. From its inception the company has sold personal computers: the primary evolution has been that the devices have become ever more personal, from desktops to laptops to phones, even as the market as a whole has shifted from being enterprise-centric to consumer-centric, which plays to Apple’s strengths in design and the user experience benefits that come from integration.

Here’s the thing about an AI-mediated future: we will need devices! Take the classic example of the Spike Jonze movie “Her”:

A scene from "Her" showing the protagonist wearing an earpiece to access AI

Jonze’s depiction of hardware is completely unrealistic: there is not a single battery charger in the entire movie (the protagonist removes the device to sleep, and simply places it on his bedside table), or any consideration given to connectivity and the constraints that might put on the size and capability of the device in the protagonist’s ear; and yet, even then, there is a device in the protagonist’s ear, and, when the protagonist wants the AI to be able to see the outside world, he puts an iPhone-esque camera device in his pocket:

A scene from "Her" showing the protagonist with a smartphone-like camera in his pocket

Now a Hollywood movie from 2013 is hardly dispositive about the future, but the laws of physics are; in this case the suspension of disbelief necessary to imagine a future of smarter-than-human AIs must grant that we need some sort of device for a long time to come, and as long as that is the case there is an obvious opportunity for the preeminent device maker of all time. Moreover, to the extent there is progress to be made in miniaturization, power management, and connectivity, it seems reasonable to assume that Apple will be on the forefront of bringing those advancements to market, and will be courageous enough to do so.

In other words, any analysis of Apple’s prospects in an AI world should start with the assumption that AI is a complement to Apple’s business, not disruptive. That doesn’t mean that Apple is guaranteed to succeed, of course: AI is the only foreseeable technological advancement that could provide sufficient differentiation to actually drive switching, but even there, the number of potential competitors is limited — there may only be one (more on this in a moment).

In the meantime, AI makes high-performance hardware more relevant, not less; Gurman notes that “Apple Intelligence” will only be available on Apple’s latest devices:

The new capabilities will be opt-in, meaning Apple won’t make users adopt them if they don’t want to. The company will also position them as a beta version. The processing requirements of AI will mean that users need an iPhone 15 Pro or one of the models coming out this year. If they’re using iPads or Macs, they’ll need models with an M1 chip at least.

I’m actually surprised at the M1 baseline (I thought it would be M2), but the iPhone 15 Pro limitation is probably the more meaningful one from a financial perspective, and speaks to the importance of RAM (the iPhone 15 Pro was the first iPhone to ship with 8GB of RAM, which is also the baseline for the M1). In short, this isn’t a case of Apple driving arbitrary differentiation; you really do need better hardware to run AI, which means there is the possibility of a meaningful upgrade cycle for the iPhone in particular (and higher ARPUs as well — might Apple actually start advertising RAM differences in iPhone models, since more RAM will always be better?).
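The reported requirements amount to a simple device gate; here is an illustrative sketch of the rule, where the 8GB common denominator is the observation above and the function itself is my own invention, not Apple’s actual check:

```python
# Inferred from Gurman's reported requirements: iPhone 15 Pro or newer,
# or any Mac/iPad with at least an M1. The common thread across all
# eligible devices is a minimum of 8GB of RAM.
def supports_apple_intelligence(ram_gb: int) -> bool:
    """Illustrative eligibility rule: the reported cutoff tracks RAM."""
    return ram_gb >= 8

assert supports_apple_intelligence(8)       # iPhone 15 Pro, M1 Macs and iPads
assert not supports_apple_intelligence(6)   # earlier iPhones with less RAM
```

If RAM really is the binding constraint, that supports the upgrade-cycle thesis: the differentiation is driven by a genuine hardware requirement, not market segmentation.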

The App Store and AI

One of the ironies of that phone-like device in Her being a camera is that such a device will probably not be how an AI “sees”; Humane has already shipped a camera-based device that simply clips on to your clothing, but the most compelling device to date is Meta’s Ray-Ban smart glasses:

Meta's Ray-Ban smartglasses

Meta certainly has designs on AR glasses replacing your phone; shortly after acquiring Oculus, CEO Mark Zuckerberg predicted that devices mounted on your head would replace smartphones for most interactions within 10 years’ time. That prediction, though, was made nine years ago; no one today, including Meta, predicts that smartphones will not be the most essential computing device in 2025, even though the Ray-Ban glasses are interesting.

The fact of the matter is that smartphones are nearly perfect devices for the jobs we ask them to do: they are small enough to be portable, and yet large enough to have a screen to interact with, sufficient battery life to make it through the day, and good enough connectivity; the smartphone, alongside cloud computing, represents the end of the beginning, i.e. the platform on which the future happens, as opposed to a transitory phase to a new class of disruptive devices.

In this view the app ecosystem isn’t so much a matter of lock-in as it is a natural state of affairs: of course the app interfaces to the physical world, from smart homes to transportation to media consumption, are located on the device that is with people everywhere. And, by extension, of course those devices are controlled by an oligopoly: the network effects of platforms are unrivaled; indeed, the real surprise of mobile — at least if you asked anyone in 2013, when Stratechery started — is that there are two platforms, instead of just one.

That, by extension, is why the Ray-Ban glasses come with an app, and thus have a chance of succeeding; one of Humane’s fatal flaws was their insistence that they could stand alone. Moreover, the longer that the smartphone is a prerequisite for new experiences, the more likely it is to endure; there is an analogy here to the continued relevance of music labels, which depend on the importance of back catalogs, which just so happen to expand with every release of new music. Every new experience that is built with the assumption of a smartphone extends the smartphone’s relevance that much further into the future.

There is, to be fair, a scenario where AI makes all applications obsolete with one fell swoop, but for now AI fits in the smartphone-enhancing pattern. First, to the extent that AI can be done locally, it will depend on the performance and battery life of something that is smartphone-sized at a minimum. Second, to the extent that AI is done in the cloud, it will depend on the connectivity and again battery life of something that is smartphone-sized as well. The latter, meanwhile, will come with usage costs, which is a potential tailwind for Apple’s (and Google’s) App Stores: those usage costs will be paid via credits or subscriptions which both platforms will mandate go through their in-app purchase systems, of which they will take a cut.

The third alternative is that most AI utilization happens via platform-provided APIs, which is exactly what Apple is expected to announce later today. From Gurman’s report:

Siri will be a part of the new AI push as well, with Apple planning a revamp to its voice-control service based on large language models — a core technology behind generative AI. For the first time, Siri users will be able to have precise control over individual features and actions within apps. For instance, people will be able to tell Siri to delete an email, edit a photo or summarize a news article. Over time, Apple will expand this to third-party apps and allow users to string multiple commands together into a single request. These features are unlikely to arrive until next year, however.

Platform-provided AI capabilities will not only be the easiest way for developers to incorporate these features, they will also likely be the best way, at least in terms of the overall user experience. Users will understand how to use them, because they will be “trained” by Apple’s own apps; they will likely be cheaper and more efficient, because they are leveraging Apple’s overall investment in capabilities; most importantly, at least in terms of Apple’s competitive position, they will further lock-in the underlying platform, increasing the hurdle for any alternative.

AI Infrastructure

There are two infrastructure concerns when it comes to the current state of AI. The first, and easiest to manage for Apple (at least in the short term), are so-called chatbots. On one hand, Apple is massively “behind” in terms of both building a ChatGPT-level chatbot, and also in terms of building out the necessary infrastructure to support that level of capability for its massive userbase. The reason I put “behind” in scare-quotes, though, is that Apple can easily solve its shortcoming in this area by partnering with a chatbot that already exists, which is exactly what they are doing. Again from Gurman:

The company’s new AI system will be called Apple Intelligence, and it will come to new versions of the iPhone, iPad and Mac operating systems, according to people familiar with the plans. There also will be a partnership with OpenAI that powers a ChatGPT-like chatbot.

The analogy here is to Search, another service that requires astronomical investments in both technology and infrastructure; Apple has never built and will never need to build a competitive search engine, because it owns the devices on which search happens, and thus can charge Google for the privilege of making the best search engine the default on Apple devices. This is the advantage of owning the device layer, and it is such an advantageous position that Apple can derive billions of dollars of profit at essentially zero cost.

A similar type of partnership with OpenAI will probably not be as profitable as search was; my guess is that Apple will be paying OpenAI, instead of the other way around, [UPDATE: I no longer believe this, and explain why in this post-WWDC Update] but the most important takeaway in terms of Apple’s competitive position is that they will, once again, have what is regarded as the best chatbot on their devices without having to make astronomical investments in technology and infrastructure. Moreover, this dampens the threat of OpenAI building their own device that usurps the iPhone: why would you want to buy a device that lacks the iPhone ecosystem when you can get the same level of capability on the iPhone you already have, along with all of the other aspects of the iPhone platform I noted above?

The second infrastructure concern is those API-level AI capabilities that Apple is set to extend to 3rd-party developers. Here the story is a bit fuzzier; from another Gurman report last month:

Apple Inc. will deliver some of its upcoming artificial intelligence features this year via data centers equipped with its own in-house processors, part of a sweeping effort to infuse its devices with AI capabilities. The company is placing high-end chips — similar to ones it designed for the Mac — in cloud-computing servers designed to process the most advanced AI tasks coming to Apple devices, according to people familiar with the matter. Simpler AI-related features will be processed directly on iPhones, iPads and Macs, said the people, who asked not to be identified because the plan is still under wraps.

I am intrigued to learn more about how these data centers are architected. Apple’s chips are engineered first-and-foremost for smartphones, and extended to Macs; that means they incorporate a CPU, GPU, NPU and memory into a single package. This has obvious benefits in terms of the iPhone, but there are limitations in terms of the Mac; for example, the highest end Mac Pro only has 192 GB of memory, a significant step-down from the company’s Intel Xeon-based Mac Pros, which topped out at 1.5 TB of memory. Similarly, while that top-of-the-line M2 Ultra has a 72-core GPU, it is married to a 24-core CPU; a system designed for AI processing would want far greater GPU capability without paying a “CPU tax” along the way.

In short, I don’t currently understand why Apple would build datacenters around its own chips, instead of using chips better-suited to the tasks being asked of them. Perhaps the company will announce that it has designed a new server chip, or perhaps its chips are being used in conjunction with purpose-built chips from other companies; regardless, building out the infrastructure for API-level AI features is one of the biggest challenges Apple faces, but it is a challenge that is eminently solvable, particularly since Apple controls the interface through which those capabilities will be leveraged — and when. To go back to the first Gurman article referenced above:

Apple’s AI features will be powered by its own technology and tools from OpenAI. The services will either rely on on-device processing or cloud-based computing, depending on the sophistication of the task at hand. The new operating systems will include an algorithm to determine which approach should be taken for any particular task.

Once again, we see how Apple (along with Google/Android and Microsoft/Windows) is located at the point of maximum leverage in terms of incorporating AI into consumer-facing applications: figuring out what AI applications should be run where and when is going to be a very difficult problem as long as AI performance is not “good enough”, which is likely to be the case for the foreseeable future; that means that the entity that can integrate on-device and cloud processing is going to be the best positioned to provide a platform for future applications, which is to say that the current operating system providers are the best-placed to be the platforms of the future, not just today.
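Gurman’s description suggests a routing layer sitting between apps and the available models; here is a sketch of what such a decision function might look like. The task attributes, thresholds, and tier names are all invented for illustration, not Apple’s actual algorithm:

```python
# A sketch of the routing decision Gurman describes; every attribute and
# threshold here is hypothetical, invented purely for illustration.
from dataclasses import dataclass

@dataclass
class AITask:
    complexity: float        # 0.0 (trivial) to 1.0 (frontier-model territory)
    privacy_sensitive: bool  # involves personal data the user expects to stay local
    needs_low_latency: bool  # e.g. interactive UI features

ON_DEVICE_CAPABILITY = 0.4   # what a phone-sized local model handles well

def route(task: AITask) -> str:
    """Decide where to run a task: on-device, first-party cloud, or partner."""
    if task.complexity <= ON_DEVICE_CAPABILITY or task.needs_low_latency:
        return "on-device"
    if task.privacy_sensitive:
        return "first-party cloud"   # the Apple-silicon data centers Gurman reports
    return "partner model"           # e.g. the reported OpenAI integration

print(route(AITask(0.2, False, False)))  # simple task stays on-device
print(route(AITask(0.8, True, False)))   # complex + private goes to first-party cloud
print(route(AITask(0.9, False, False)))  # complex, non-private goes to a partner
```

The strategic point is in the structure, not the thresholds: whoever owns this function owns the point of maximum leverage, because they decide which model, and which vendor, ever sees the task at all.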

Competitive Threats

Outlining Apple’s competitive position illustrates what a threat to their business must look like. In the very long run, it is certainly possible that there is an AGI that obsoletes the smartphone entirely, just as the iPhone obsoleted entire categories of consumer electronics. Yes, we will still need devices, which works in Apple’s favor, but if those devices do not depend on an app ecosystem then Apple runs the risk of being reduced to a commoditized hardware manufacturer. This, by extension, is the biggest reason to question Apple’s decision to partner with OpenAI for chatbot functionality instead of building out their own capability.

I’m skeptical, though, that this sort of wholesale transition will happen anytime soon, or ever; the reality of technology is that most new epochs layer on top of what came before, as opposed to replacing it wholesale. The Internet, for example, has been largely experienced on top of existing operating systems like Windows or iOS. Again, the most fervent AI believers may argue that I am dismissing AI’s long-term capabilities, but I think that Apple is making a reasonable bet.

It follows, then, that if I am right about the continued importance of the smartphone, the only entity that can truly threaten Apple is Google, precisely because they have a smartphone platform and attendant ecosystem. The theory here is that Google could develop truly differentiated AI capabilities that make Android highly differentiated from the iPhone, even as Android has all of the apps and capabilities that are the price of entry to a user’s pocket in the first place.

I don’t, for the record, think that this possibility is purely theoretical; I wrote last December about Google’s True Moonshot:

What, though, if the mission statement were the moonshot all along? What if “I’m Feeling Lucky” were not a whimsical button on a spartan home page, but the default way of interacting with all of the world’s information? What if an AI Assistant were so good, and so natural, that anyone with seamless access to it simply used it all the time, without thought?

That, needless to say, is probably the only thing that truly scares Apple. Yes, Android has its advantages over iOS, but they aren’t particularly meaningful to most people, and even for those who care — like me — they are not large enough to give up on iOS’s overall superior user experience. The only things that drive meaningful shifts in platform marketshare are paradigm shifts, and while I doubt the v1 version of Pixie would be good enough to drive iPhone users to switch, there is at least a path to where it does exactly that.

I wrote more about this possibility two weeks ago, so I don’t want to belabor the point, but this may be the biggest reason why Apple is partnering with OpenAI, and not Google: Apple might not want to build a dependency on a company that might be incentivized to degrade Apple’s relative experience (a la Google Maps a decade ago), and Google might not want to give access to its potential source of long-term differentiation to the company whose business model is the clearest solution to the search company’s threat of disruption.

The disruptive potential of AI for Google is straightforward: yes, Google has massive infrastructure advantages and years of research undergirding its AI efforts, but delivering an answer instead of a set of choices is problematic both for Google’s business model, which depends on users’ choosing the winner of an auction, and for its position as an Aggregator, which depends on serving everyone in the world, regardless of their culture and beliefs.

The past few weeks have surfaced a third risk as well: Google has aggressively pushed AI results into search in response to the competitive threat from chatbots; OpenAI and Perplexity, though, aren’t upsetting user expectations when they deliver hallucinatory responses, because users already know what they are getting into when they choose to use chatbots to ask questions. Google, though, has a reputation for delivering “correct” results, which means leveraging its search distribution advantage to push AI entails significant risk to that reputation. Indeed, Google has already started to deprioritize AI results in search, moving them further down the page; that, though, at least in my personal experience, has made them significantly less useful and pushed me back towards using chatbots.

A meaningful strategic shift towards a vertical model centered around highly differentiated devices, though, solves a lot of these problems: the devices would make money in their own right (and could be high-priced because they are the best way to access Google’s differentiated AI experiences), could deliver a superior AI experience (not just via the phone, but accessories like integrated glasses, ear buds, etc), and would serve an audience that has self-selected into the experience. I remain dubious that Google will have the gumption to fully go in this direction, but it is the one possibility that should make Apple nervous.

AI Prudence

It is the other operating system provider, Microsoft, who gives further credence to Apple’s deliberative approach. Windows is not a threat to the iPhone for all of the app ecosystem reasons noted above, but Microsoft clearly sees an opportunity to leverage AI to compete with the Mac. After last month’s Copilot+ PC event I wrote in Windows Returns:

The end result — assuming that reviewed performance measures up to Microsoft’s claims — is an array of hardware from both Microsoft and its OEM partners that is MacBook Air-esque, but, unlike Apple’s offering, actually meaningfully integrated with AI in a way that not only seems useful today, but also creates the foundation to be dramatically more useful as developers leverage Microsoft’s AI capabilities going forward. I’m not going to switch (yet), but it’s the first time I’ve been tempted; at a minimum the company set a bar for Apple to clear at next month’s WWDC.

One of the new Windows features that Microsoft touted at that event was Recall, which leverages AI to help users access everything they have seen or done on their computer in recent history. The implementation, though, turned out to be quite crude: Windows will regularly take screenshots and use local processing to index everything so that it is easily searchable. The problem is that while Microsoft stridently assured customers (and analysts!) that none of your information would be sent to the cloud, they didn’t take any measures to ensure that said data was secured locally, instead taking a dependency on Windows’ overall security. Over the intervening weeks security researchers have demonstrated why that wasn’t good enough, leading to a Microsoft announcement last week of several significant changes; from The Verge:

Microsoft says it’s making its new Recall feature in Windows 11 that screenshots everything you do on your PC an opt-in feature…Microsoft will also require Windows Hello to enable Recall, so you’ll either authenticate with your face, fingerprint, or using a PIN…This authentication will also apply to the data protection around the snapshots that Recall creates.

There are a few interesting implications in these changes:

  • First, by making Recall opt-in, Microsoft is losing the opportunity to provide users with a surprise-and-delight moment when their computer finds what they were looking for; Microsoft is going to need to sell the feature to even make that experience possible.
  • Second, while requiring OS-level user authentication to access Recall data is almost certainly the right choice, it’s worth pointing out that this removes the potential for 3rd-party developers to build innovative new applications on top of Recall data.

These two factors explain how this screw-up happened: Microsoft wanted to push AI as a differentiator, but the company is still at its core a developer-focused platform provider. What they announced initially solved for both, but the expectations around user data and security are such that the only entity that has sufficient trust to deliver these sorts of intimate experiences is the OS provider itself.

This is good news for Apple in two respects. First, with regards to the title of this Article, the fact that it is possible to be too early with AI features, as Microsoft seemed to be in this case, implies that not having AI features does not mean you are too late. Yes, AI features could differentiate an existing platform, but they could also diminish it. Second, Apple’s orientation towards prioritizing users over developers aligns nicely with its brand promise of privacy and security: Apple would prefer to deliver new features in an integrated fashion as a matter of course; making AI not just compelling but societally acceptable may require exactly that, which means that Apple is arriving on the AI scene just in time.

I wrote a follow-up to this Article in this Daily Update.


AI Integration and Modularization

This Article is available as a video essay on YouTube


Satya Nadella, in last week’s Stratechery Interview, said in response to a question about Google and AI:

I look at it and say, look, I think there’s room always for somebody to vertically integrate. I always go back, there’s what is the Gates/Grove model, and then let’s call it the Apple or maybe the new Google model, which is the vertical integration model. I think both of them have plays.

One of the earliest economists to explore the question of integration versus modularization was Ronald Coase in his seminal paper The Nature of the Firm; Coase concluded:

When we are considering how large a firm will be the principle of marginalism works smoothly. The question always is, will it pay to bring an extra exchange transaction under the organising authority? At the margin, the costs of organising within the firm will be equal either to the costs of organising in another firm or to the costs involved in leaving the transaction to be “organised” by the price mechanism.

It was Professor Clayton Christensen who extended the analysis of integration versus modularization beyond the economists’ domain of measurable costs to the more ineffable realm of innovation. From The Innovator’s Solution:

The improvement of integrated versus modular systems over time according to Professor Christensen

The left side of figure 5-1 indicates that when there is a performance gap — when product functionality and reliability are not yet good enough to address the needs of customers in a given tier of the market — companies must compete by making the best possible products. In the race to do this, firms that build their products around proprietary, interdependent architectures enjoy an important competitive advantage against competitors whose product architectures are modular, because the standardization inherent in modularity takes too many degrees of design freedom away from engineers, and they cannot optimize performance.

To close the performance gap with each new product generation, competitive forces compel engineers to fit the pieces of their systems together in ever-more-efficient ways in order to wring the most performance possible out of the technology that is available. When firms must compete by making the best possible products, they cannot simply assemble standardized components, because from an engineering point of view, standardization of interfaces (meaning fewer degrees of design freedom) would force them to back away from the frontier of what is technologically possible. When the product is not good enough, backing off from the best that can be done means that you’ll fall behind.

Companies that compete with proprietary, interdependent architectures must be integrated: They must control the design and manufacture of every critical component of the system in order to make any piece of the system. As an illustration, during the early days of the mainframe computer industry, when functionality and reliability were not yet good enough to satisfy the needs of mainstream customers, you could not have existed as an independent contract manufacturer of mainframe computers because the way the machines were designed depended on the art that would be used in manufacturing, and vice versa. There was no clean interface between design and manufacturing. Similarly, you could not have existed as an independent supplier of operating systems, core memory, or logic circuitry to the mainframe industry because these key subsystems had to be interdependently and iteratively designed, too.

I made my own contribution to this literature in 2013’s What Clayton Christensen Got Wrong. My dispute wasn’t with the above excerpt, but rather the follow-on argument that integrated solutions would eventually overshoot customers and be disrupted by modular alternatives; it was on this basis that Christensen regularly predicted that Apple would lose its lead in smartphones, but I didn’t think that would happen in a consumer market where there were costs to modularization beyond those measured by economists:

The issue I have with this analysis of vertical integration — and this is exactly what I was taught at business school — is that the only considered costs are financial. But there are other, more difficult to quantify costs. Modularization incurs costs in the design and experience of using products that cannot be overcome, yet cannot be measured. Business buyers — and the analysts who study them — simply ignore them, but consumers don’t. Some consumers inherently know and value quality, look-and-feel, and attention to detail, and are willing to pay a premium that far exceeds the financial costs of being vertically integrated.

This ended up being correct as far as smartphones are concerned, and even computers: yes, Windows-based modular computers dominated the first 30 years of computing, but today the Mac is dominant amongst consumers, something Microsoft implicitly admitted in their framing of Copilot+ PCs. Both smartphones and PCs, though, are physical devices you hold in your hands; does the assumption that integration wins in the beginning — and sometimes even the end — hold in AI?

Integrated Versus Modular AI

The integrated versus modular dichotomy in PCs looked like this:

Integrated versus modular in PCs

Apple briefly experimented with modularization in the 1990s, and it nearly bankrupted them; eventually the company went in the opposite direction and integrated all the way down to the processor, following the path set by the iPhone:

Integrated versus modular in smartphones

The similarities between these two images should be striking; Mark Zuckerberg is counting on the same pattern repeating itself for headset computers, with Meta as the open alternative. When it comes to AI, though, Google is, as Nadella noted, the integrated player:

Google's integrated AI stack

Google trains and runs its Gemini family of models on its own TPU processors, which are only available on Google’s cloud infrastructure. Developers can access Gemini through Vertex AI, Google’s fully-managed AI development platform; and, to the extent Vertex AI is similar to Google’s internal development environment, that is the platform on which Google is building its own consumer-facing AI apps. It’s all Google, from top-to-bottom, and there is evidence that this integration is paying off: Gemini 1.5’s industry leading 2 million token context window almost certainly required joint innovation between Google’s infrastructure team and its model-building team.

On the other extreme is AWS which, while it has its own Titan family of models, appears to be primarily focused on its Bedrock managed development platform, which lets you use any model. Amazon’s other focus has been on developing its own chips, although the vast majority of its AI business runs on Nvidia GPUs.

AWS's modular AI stack

Microsoft is in the middle, thanks to its close ties to OpenAI and its models. The company added Azure Models-as-a-Service last year, but its primary focus for both external customers and its own internal apps has been building on top of OpenAI’s GPT family of models; Microsoft has also launched its own chip for inference, but the vast majority of its workloads run on Nvidia.

Microsoft's somewhat integrated AI stack

Finally there is Meta, which only builds for itself; that means the most important point of integration is between the apps and the model; that’s why Llama 3, for example, was optimized for low inference costs, even at the expense of higher training costs. This also means that Meta can skip the managed service layer completely.

Meta's mostly integrated AI stack

One other company to highlight is Databricks (whose CEO I spoke to earlier this month). Databricks, thanks to its acquisition of MosaicML, helps customers train their own LLMs on their own data, which is, of course, housed on Databricks, which itself sits on top of the hyperscalers:

Databricks' customized model AI stack

Databricks is worth highlighting because of the primacy its approach places on data; data and model are the points of integration.

Big Tech Implications

Google

The first takeaway from this analysis is that Google’s strategy truly is unique: they are, as Nadella noted, the Apple of AI. The bigger question is if this matters: as I noted above, integration has proven to be a sustainable differentiation (1) in the consumer market, where the buyer is the user and thus values the user experience benefits that come from integration, and (2) when those user experience benefits are manifested in devices.

Google is certainly building products for the consumer market, but those products are not devices; they are Internet services. And, as you might have noticed, the historical discussion didn’t really mention the Internet. Both Google and Meta, the two biggest winners of the Internet epoch, built their services on commodity hardware. Granted, those services scaled thanks to the deep infrastructure work undertaken by both companies, but even there Google’s more customized approach has been at least rivaled by Meta’s more open approach. What is notable is that both companies are integrating their models and their apps, as is OpenAI with ChatGPT.

The second question for Google is whether they are even good at making products anymore; part of what makes Apple so remarkable is not only that the company is integrated, but also that it maintained its standard for excellence for so long even as it continued to release groundbreaking new products beyond the iPhone, like the Apple Watch and AirPods. It may be the case that selling hardware, which has to be perfect every year to justify a significant outlay of money by consumers, provides a much better incentive structure for maintaining excellence and execution than does being an Aggregator that users access for free.

What this analysis also highlights is the potential for Google’s True Moonshot: actually putting weight behind the company’s Pixel phones as a vertically-integrated iPhone rival. From that Article:

Google’s collection of moonshots — from Waymo to Google Fiber to Nest to Project Wing to Verily to Project Loon (and the list goes on) — have mostly been science projects that have, for the most part, served to divert profits from Google Search away from shareholders. Waymo is probably the most interesting, but even if it succeeds, it is ultimately a car service rather far afield from Google’s mission statement “to organize the world’s information and make it universally accessible and useful.”

What, though, if the mission statement were the moonshot all along? What if “I’m Feeling Lucky” were not a whimsical button on a spartan home page, but the default way of interacting with all of the world’s information? What if an AI Assistant were so good, and so natural, that anyone with seamless access to it simply used it all the time, without thought?

That, needless to say, is probably the only thing that truly scares Apple. Yes, Android has its advantages over iOS, but they aren’t particularly meaningful to most people, and even for those that care — like me — they are not large enough to give up on iOS’s overall superior user experience. The only thing that drives meaningful shifts in platform marketshare are paradigm shifts, and while I doubt the v1 version of Pixie [Google’s rumored Pixel-only AI assistant] would be good enough to drive switching from iPhone users, there is at least a path to where it does exactly that.

Of course Pixel would need to win in the Android space first, and that would mean massively more investment by Google in go-to-market activities in particular, from opening stores to subsidizing carriers to ramping up production capacity. It would not be cheap, which is why it’s no surprise that Google hasn’t truly invested to make Pixel a meaningful player in the smartphone space.

The potential payoff, though, is astronomical: a world with Pixie everywhere means a world where Google makes real money from selling hardware, in addition to services for enterprises and schools, and cloud services that leverage Google’s infrastructure to provide the same capabilities to businesses. Moreover, it’s a world where Google is truly integrated: the company already makes the chips, in both its phones and its data centers, it makes the models, and it does it all with the largest collection of data in the world.

As I noted in an Update last month, Google’s recent reorg points in this direction, although Google I/O didn’t provide any hints that this shift in strategy might be coming; instead, the big focus was a new AI-driven search experience, which, needless to say, has seen mixed results. Indeed, the fact that Google is being mocked mercilessly for messed-up AI answers gets at why consumer-facing AI may be disruptive for the company: the reason why incumbents find it hard to respond to disruptive technologies is because they are, at least at the beginning, not good enough for the incumbent’s core offering. Time will tell if this gives more fuel to a shift in smartphone strategies, or makes the company more reticent.

The enterprise space is a different question: while I was very impressed with Google’s enterprise pitch, which benefits from its integration with Google’s infrastructure without all of the overhead of potentially disrupting the company’s existing products, it’s going to be a heavy lift to overcome data gravity, i.e. the fact that many enterprise customers will simply find it easier to use AI services on the same clouds where they already store their data (Google does, of course, also support non-Gemini models and Nvidia GPUs for enterprise customers). To the extent Google wins in enterprise it may be by capturing the next generation of startups that are AI first and, by definition, data light; a new company has the freedom to base its decision on infrastructure and integration.

AWS

Amazon is certainly hoping that argument is correct: the company is operating as if everything in the AI value chain is modular and ultimately a commodity, which implies that it believes that data gravity will matter most. What is difficult to separate is to what extent this is the correct interpretation of the strategic landscape versus a convenient interpretation of the facts that happens to perfectly align with Amazon’s strengths and weaknesses, including infrastructure that is heavily optimized for commodity workloads.

Microsoft

Microsoft, meanwhile, is, as I noted above, in the middle, but not entirely by choice. Last October on the company’s earnings call Nadella talked extensively about how the company was optimizing its infrastructure around OpenAI:

It is true that the approach we have taken is a full stack approach all the way from whether it’s ChatGPT or Bing Chat or all our Copilots, all share the same model. So in some sense, one of the things that we do have is very, very high leverage of the one model that we used, which we trained, and then the one model that we are doing inferencing at scale. And that advantage sort of trickles down all the way to both utilization internally, utilization of third parties, and also over time, you can see the sort of stack optimization all the way to the silicon, because the abstraction layer to which the developers are riding is much higher up than low-level kernels, if you will.

So, therefore, I think there is a fundamental approach we took, which was a technical approach of saying we’ll have Copilots and Copilot stack all available. That doesn’t mean we don’t have people doing training for open source models or proprietary models. We also have a bunch of open source models. We have a bunch of fine-tuning happening, a bunch of RLHF happening. So there’s all kinds of ways people use it. But the thing is, we have scale leverage of one large model that was trained and one large model that’s being used for inference across all our first-party SaaS apps, as well as our API in our Azure AI service…

The lesson learned from the cloud side is — we’re not running a conglomerate of different businesses, it’s all one tech stack up and down Microsoft’s portfolio, and that, I think, is going to be very important because that discipline, given what the spend like — it will look like for this AI transition any business that’s not disciplined about their capital spend accruing across all their businesses could run into trouble.

Then, one month later, OpenAI nearly imploded and Microsoft had to face the reality that it is exceptionally risky to pin your strategy on integrating with a partner you don’t control; much of the company’s rhetoric — including the Nadella quote I opened this Article with — and actions since then have been focused on abstracting models away, particularly through the company’s own managed AI development platform, in an approach that looks more similar to Amazon’s. I suspect the company would actually like to lean more into integration, and perhaps still is (including acqui-hiring its own model and model-building team), but it has to hedge its bets.

Nvidia

All of this is, I think, good news for Nvidia. One underdiscussed implication of the rise of LLMs is that Nvidia’s CUDA moat has been diminished; the vast majority of development in AI is no longer happening with CUDA libraries, but rather on top of LLMs. That does, in theory, make it more likely that alternative GPU providers, whether that be AMD or hyperscalers’ internal efforts, put a dent in Nvidia’s dominance and margins.

Nvidia, though, is hardly resting on its moat: the company is making its GPUs more flexible over time, promising that its next generation of chips will ship in double the configurations of the current generation, including a renewed emphasis on Ethernet networking. This approach will maximize Nvidia’s addressable market, driving more revenue, which the company is funneling back into a one-year iteration cycle that promises to keep the chip-maker ahead of the alternatives.

I suspect that the only way to overcome this performance advantage, at least in the near term, will be through true vertical integration a la Google; to put it another way, while Google’s TPUs will remain a strong alternative, I am skeptical that hyperscaler internal chip efforts will be a major threat for the foreseeable future. Absent full stack integration those efforts are basically reduced to trying to make better chips than Nvidia, and good luck with that! Even AMD is discovering that a good portion of its GPU sales are a function of Nvidia scarcity.

Meta

This also explains Meta’s open source approach to Llama: the company is focused on products, which do benefit from integration, but there are also benefits that come from widespread usage, particularly in terms of optimization and complementary software. Open source accrues those benefits without imposing any incentives that detract from Meta’s product efforts (and don’t forget that Meta is receiving some portion of revenue from hyperscalers serving Llama models).

AI or AGI

The one company that I have not mentioned so far — at least in the context of AI — is Apple. The iPhone maker, like Amazon, appears to be betting that AI will be a feature or an app; like Amazon, it’s not clear to what extent this is strategic foresight versus motivated reasoning.

It does, though, get at the biggest question of all: LLMs are already incredible, and there are years of work to be done to fully productize the capabilities that exist today; are even better LLMs, though, capable of disrupting not just search but all of computing? To the extent that the answer is yes, the greater I think the advantage of Google’s integrated approach will be, for the reasons Christensen laid out: achieving something approaching AGI, whatever that means, will require maximizing every efficiency and optimization, which rewards the integrated approach.

I am skeptical: I think that models will certainly differ, but not in a large enough way to not be treated as commodities; the most value will be derived from building platforms that treat models like processors, delivering performance improvements to developers who never need to know what is going on under the hood. This will mean the biggest benefits will accrue to horizontal reach — on the API layer, the model layer, and the GPU layer — as opposed to vertical integration; it is up to Google to prove me wrong.

I wrote a follow-up to this Article in this Daily Update.


Windows Returns

Full disclosure: I didn’t have any plans to write this Article; I had various reasons to be in the U.S. this week, and Microsoft’s Build developer conference, which kicks off today, happened to fit in my schedule. It wasn’t until a couple of days ago that I even realized there was a Windows and Surface event the day before, so hey, why not attend?

Of course I knew that AI would be the focus; Microsoft made a big deal of adding an AI button to Windows PCs earlier this year, and the company’s Surface event last fall wasn’t about the Surface at all, but rather about Windows Copilot, the company’s omnipresent brand name for its various AI assistants.

Yesterday, though, a whole host of various threads that Microsoft has been working on for years suddenly came together in one of the most compelling events I’ve attended in a long time. Windows, of all things, suddenly feels revitalized, and Microsoft has both made and fostered hardware that feels meaningfully differentiated from other devices on the market. It is, in many respects, the physical manifestation of CEO Satya Nadella’s greatest triumph.

Copilot+ PCs

I should start by noting that some things never change: Microsoft’s branding veers wildly between too many brands for similar concepts and too many concepts under one brand. In this case both the company and its partners have been talking about “AI PCs” for a while; for example, from the January announcement of that aforementioned Copilot key:

Today, we are excited to take the next significant step forward and introduce a new Copilot key to Windows 11 PCs. In this new year, we will be ushering in a significant shift toward a more personal and intelligent computing future where AI will be seamlessly woven into Windows from the system, to the silicon, to the hardware. This will not only simplify people’s computing experience but also amplify it, making 2024 the year of the AI PC.

I don’t want to begrudge the effort that I’m sure went into introducing a new key onto Windows PCs, but “significant step forward” seems a bit much, particularly given the inherent challenges entailed in a licensing model: in this post-ChatGPT world even my washing machine suddenly has AI, and it seemed inevitable that the crappiest notebook that can barely hold a charge or load a webpage without its fans spooling up like a jet engine would now be christened an “AI PC.”

That is why yesterday brought a new brand: Copilot+ PCs. Yes, it’s a bit of a mouthful, but it’s a trademark Microsoft owns, and it won’t be handed out willy nilly; to qualify as a “Copilot+ PC” a computer needs distinct CPUs, GPUs, and NPUs (neural processing units) capable of >40 trillion operations per second (TOPS), and a minimum of 16 GB RAM and a 256 GB SSD. These aren’t supercomputers, but that is a pretty impressive baseline — the MacBook Air wouldn’t qualify, for example, as it only has 18 TOPS (and starts with only 8 GB of RAM).1

This guaranteed baseline lets Microsoft build some genuinely new experiences. The headline feature is Recall: Copilot+ PCs will record everything that happens on your computer locally, and make it available to Copilot-mediated queries; developers can add “breadcrumbs” to their apps so that you can return not just to a specific app but to the exact context you wanted to Recall. That last bit gets at how Recall is better than Rewind, the Mac app that provides similar functionality; by being built into the operating system Recall can both be extended to developers even as it is made fundamentally more secure and private, with lower energy usage, thanks to the way it leverages Copilot+ level hardware.

Another fun and downright whimsical feature is Cocreator, which lets you not just edit images using AI, but also create new ones using a combination of drawing and text prompts; I tried it out and it works pretty well, with the main limitation being some amount of latency in rendering the image.

That latency, frustratingly enough, doesn’t come from the actual rendering, which happens locally on that beefy hardware, but rather the fact that Cocreator validates everything with the cloud for “safety”; never mind that you can create “unsafe” images in Paint of your own volition (at least for now).

What will be most intriguing, though, is the extent to which these capabilities will be available to 3rd-party developers. The keynote included demos from Adobe’s family of apps, DaVinci Resolve Studio, CapCut, and more, and I presume there will be much broader discussions about what is possible at the actual Build conference. The trick for Microsoft will be in getting Copilot+ PCs at critical mass such that developers build capabilities that actually utilize the hardware in question.

The hardware at the keynote, meanwhile, was based on Qualcomm’s Snapdragon X Elite ARM processor. The X Elite was built by the Nuvia team, a startup Qualcomm acquired in 2021; that team is made up of some of the creators of Apple Silicon, and while we will need to wait for benchmarks to verify Microsoft’s claims of better-than-MacBook-Air performance and battery life, it appears to be the real deal.

It’s also the culmination of a 12-year journey to move Windows to ARM: the original Surface tablet was painfully slow, a weakness that was exacerbated by the fact that basically no 3rd-party software was built to run on ARM. Today the latter situation is much improved, but more importantly, Microsoft is making big promises about the Snapdragon X performance being good enough that Windows’ new Rosetta-like emulation layer should make the experience of using x86-compiled apps seamless.

The end result — assuming that reviewed performance measures up to Microsoft’s claims — is an array of hardware from both Microsoft and its OEM partners that is MacBook Air-esque, but, unlike Apple’s offering, actually meaningfully integrated with AI in a way that not only seems useful today, but also creates the foundation to be dramatically more useful as developers leverage Microsoft’s AI capabilities going forward. I’m not going to switch (yet), but it’s the first time I’ve been tempted; at a minimum the company set a clear bar for Apple to clear at next month’s WWDC.

Walmart’s E-Commerce Separation

Last month I had the opportunity to interview Walmart CEO Doug McMillon. I have occasionally written about Walmart over the years, and I intend to do more, as the company emerges as a genuine contender in e-commerce.

Walmart is succeeding exactly how you thought they might: with an omnichannel approach that leverages their stores to offer delivery, pick-up, and in-person shopping options, including a dominant position in grocery, a nut that Amazon has struggled to crack. Around this model the company is building out all of the other necessary pieces of an at-scale e-commerce operation, including a 3rd-party marketplace and a very compelling advertising business. What is fascinating — and this was a theme that emerged throughout the interview — is the circuitous path that Walmart took to get there.

Go back to 2016, when the company acquired Jet.com and I wrote Walmart and the Multichannel Trap; I hearkened back to Walmart’s past e-commerce pronouncements and pointed out how half-baked everything was:

The fulfillment program Anderson went on to describe was ridiculously complex: “fast” shipped anything online to your local store, “faster” shipped a smaller selection to your house, while “fastest” made an even smaller selection available for pickup the same day. Anderson concluded:

“Fast, faster, fastest. What a great example of a continuous channel experience that cannot easily be replicated.”

What a positively Buldakian statement! Of course such an experience “cannot easily be replicated”, because who would want to? It was, like Sears’ “socks-to-stocks” strategy, driven by solipsism: instead of starting with customer needs and working backwards to a solution, Walmart started with their own reality and created a convoluted mess. Predictably it failed.

The problem Walmart had was that every aspect of the company was oriented around the stores; that by extension meant that new initiatives, like e-commerce, had to fit in that reality, even if it resulted in a terrible experience for customers. That was why I liked the Jet.com acquisition; I wrote in an Update:

Walmart, meanwhile, finally realized a couple of months ago that while they are really good at retail logistics, that skill doesn’t translate to e-commerce in the way they hoped. I wrote in June:

All of those analysts who assumed Wal-Mart would squish Amazon in e-commerce thanks to their own mastery of logistics were like all those who assumed Microsoft would win mobile because they won PCs. It turns out that logistics for retail are to logistics for e-commerce as operating systems for a PC are to operating systems for a phone. They look similar, and even have the same name, but require fundamentally different assumptions and priorities.

Walmart promised at the time to invest $2 billion in technology and logistics, but given Amazon’s continued encroachment the company has far more money than time: paying a bit more to get technology and infrastructure already built out is a very good choice…Ideally Walmart will keep Jet.com at arms-length: that’s the prescribed response for an incumbent dealing with a disruptive competitor. There are simply too many incentives for incumbent companies to kill new initiatives that by definition threaten the core business, and while Walmart’s executives seem to have finally learned that extending a bricks-and-mortar business model online doesn’t work, it always takes even longer for that lesson to filter down to middle managers primarily motivated by their own specific responsibilities that often aren’t aligned with the future.

Fast forward to today, and Jet.com redirects to Walmart.com, and e-commerce is, as I noted, integrated with retail, but that doesn’t mean I was wrong. McMillon noted in the interview:

It was always the plan to bring things together, but just like the structure, it needed to be separate for a while for good reasons.

The reason is that tech, given its reliance on massive investments on a scalable platform, is inherently centralizing; that, though, is directly counter to how Walmart traditionally operated. McMillon explained:

Taking ownership all the way down to department manager level for toys and store number #1113 has great value in it, and when a buyer feels like they’re really responsible for their category, there’s great value in that, but we have to, on behalf of the customer and in a customer-led way, have top down decision-making to say, “No, we’re not going to just respond to what you, the buyer, want the next tech priority to be”.

We’ve actually set the tech priorities driven off what we want to build for customers and what they’re asking us to solve, and that’s how it’s going to be, and that is a cultural tension even today because we actually want some of both, we want ownership. We don’t want to diminish that ownership and our store managers, they make this company go, and they make a lot of great decisions, and they’re fantastic. You may have read recently, we increased their pay and we need a tech team and a design team, a product management team and leaders that can identify priorities and make sure they get resourced.

But take a marketplace, we can’t build a marketplace one country at a time, you build one marketplace. So there have to be people that are willing to give up authority so that that gets done in a way that’s most efficient and we’re doing that now, but I think that tension is going to be here forever.

In other words, Walmart needed to build up e-commerce independent of its stores; only then, once its e-commerce operation was a viable business in its own right, and as a new generation of leadership in retail recognized its inherent value, could the company achieve the omni-channel dreams it had harbored for so long.

The End of Windows

Back to Microsoft: the fundamental problem Nadella faced when he took over Microsoft was that every aspect of the company — including, most problematically, the culture — was built around Windows. I wrote in Microsoft’s Monopoly Hangover:

The company loved to brag about its stable of billion dollar businesses, but in truth they were all components of one business — Windows. Everything Microsoft built from servers to productivity applications was premised on the assumption that the vast majority of computing devices were running Windows, leaving the company completely out of sorts when the iPhone and Android created and captured the smartphone market.

Former CEO Steve Ballmer couldn’t accept this reality: in his disastrous last few months he reorganized the company around a One Microsoft concept that really meant that the rest of the company needed to better support Windows, and doubled down by buying Nokia in a deal that made no sense, all while Microsoft’s productivity applications were suffering from a lack of apps for iOS and Android.

What Nadella did after he took over was not particularly complicated, but it was exceptionally difficult. I wrote in 2018’s The End of Windows:

The story of how Microsoft came to accept the reality of Windows’ decline is more interesting than the fact of Windows’ decline; this is how CEO Satya Nadella convinced the company to accept the obvious.

I then documented a few seminal decisions made to demote Windows, including releasing Office on iPad as soon as he took over, explicitly re-orienting Microsoft around services instead of devices, isolating the Windows organization from the rest of the company, killing Windows Phone, and finally, in the decision that prompted that Article, splitting up Windows itself. Microsoft was finally, not just strategically but also organizationally, a services company centered on Azure and Office; yes, Windows existed, and still served a purpose, but it didn’t call the shots for the rest of Microsoft’s products.

And yet, here I am in May 2024, celebrating a Windows event! That celebration, though, is not because Windows is differentiating the rest of Microsoft, but because the rest of Microsoft is now differentiating Windows. Nadella’s focus on AI and the company’s massive investments in compute are the real drivers of the business, and, going forward, are real potential drivers of Windows.

This is where the Walmart analogy is useful: McMillon needed to let e-commerce stand on its own and drive the development of a consumer-centric approach to commerce that depended on centralized tech-based solutions; only then could Walmart integrate its stores and online services into an omnichannel solution that makes the company the only realistic long-term rival to Amazon.

Nadella, similarly, needed to break up Windows and end Ballmer’s dreams of vertical domination so that the company could build a horizontal services business that, a few years later, could actually make Windows into a differentiated operating system that might, for the first time in years, actually drive new customer acquisition.


  1. It’s also unclear how specifically TOPS are defined, as the precision of the calculations used for measurements makes a big difference. 
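The footnote’s point can be made concrete with a toy calculation — every number below is hypothetical, not an actual Qualcomm or Apple specification. Vendors typically quote peak TOPS as two operations per multiply-accumulate per cycle, and the same silicon can often issue twice as many operations at a lower precision, doubling the headline figure without any change to the hardware:

```python
# Illustrative sketch only: peak TOPS is commonly quoted as
#   2 * MAC_units * clock_hz  (two ops per multiply-accumulate),
# so a MAC array that sustains twice the throughput at INT8 as it
# does at INT16 doubles its marketed TOPS on the same silicon.

def peak_tops(mac_units: int, clock_ghz: float) -> float:
    """Headline TOPS: 2 ops per MAC per cycle, scaled to tera-ops."""
    return 2 * mac_units * clock_ghz * 1e9 / 1e12

# Hypothetical NPU: same die, measured at two precisions.
int16_tops = peak_tops(mac_units=11_250, clock_ghz=1.0)
int8_tops = peak_tops(mac_units=22_500, clock_ghz=1.0)  # 2x issue rate at INT8

print(int16_tops, int8_tops)  # -> 22.5 45.0
```

The same chip can thus honestly be marketed at either figure, which is why comparing TOPS claims across vendors is meaningless without knowing the precision each number assumes.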

The Great Flattening

Apple did what needed to be done to get that unfortunate iPad ad out of the news; you know, the one that somehow found the crushing of musical instruments and bottles of paint to be inspirational.

The ad was released as a part of the company’s iPad event, and was originally scheduled to run on TV; Tor Myhren, Apple’s vice-president of marketing communications, told AdAge:

Creativity is in our DNA at Apple, and it’s incredibly important to us to design products that empower creatives all over the world…Our goal is to always celebrate the myriad of ways users express themselves and bring their ideas to life through iPad. We missed the mark with this video, and we’re sorry.

The apology comes across as heartfelt — accentuated by the fact that an Apple executive put his name to it — but I disagree with Myhren: the reason why people reacted so strongly to the ad is that it couldn’t have hit the mark more squarely.

Aggregation Theory

The Internet, birthed as it was in the idealism of California tech in the latter parts of the 20th century, was expected to be a force for decentralization; one of the central conceits of this blog has been to explain why reality has been so different. From 2015’s Aggregation Theory:

The fundamental disruption of the Internet has been to turn this dynamic on its head. First, the Internet has made distribution (of digital goods) free, neutralizing the advantage that pre-Internet distributors leveraged to integrate with suppliers. Secondly, the Internet has made transaction costs zero, making it viable for a distributor to integrate forward with end users/consumers at scale.

This has fundamentally changed the plane of competition: no longer do distributors compete based upon exclusive supplier relationships, with consumers/users an afterthought. Instead, suppliers can be commoditized leaving consumers/users as a first order priority. By extension, this means that the most important factor determining success is the user experience: the best distributors/aggregators/market-makers win by providing the best experience, which earns them the most consumers/users, which attracts the most suppliers, which enhances the user experience in a virtuous cycle.

In short, the analog world was defined by scarcity, which meant distribution of scarce goods was the locus of power; the digital world is defined by abundance, which means discovery of what you actually want to see is the locus of power. The result is that consumers have access to anything, which is to say that nothing is special; everything has been flattened.

  • Google broke down every publication in the world into individual pages; search results didn’t deliver you to the front page of a newspaper or magazine, but rather dropped you onto individual articles.
  • Facebook promoted user-generated content to the same level of the hierarchy as articles from professional publications; your feed might have a picture of your niece followed by a link to a deeply-reported investigative report followed by a meme.
  • Amazon created the “Everything Store” with practically every item on Earth and the capability to deliver it to your doorstep; instead of running errands you could simply check out.
  • Netflix transformed “What’s on?” to “What do you want to watch?”. Everything from high-brow movies to budget flicks to prestige TV to reality TV was on equal footing, ready to be streamed whenever and wherever you wanted.
  • Sites like Expedia and Booking changed travel from an adventure mediated by a travel agent or long-standing brands to search results organized by price and amenities.

Moreover, this was only v1; it turns out that the flattening can go even further:

  • LLMs are breaking down all written text ever into massive models that don’t even bother with pages: they simply give you the answer.
  • TikTok disabused Meta of the notion that your relationships were a useful constraint on the content you wanted to see; now all short-form video apps surface content from across the entire network based on their understanding of what you individually are interested in.
  • Amazon is transforming into a logistics powerhouse befitting the fact that Amazon.com is increasingly dominated by 3rd-party merchant sales, and extending that capability throughout the economy.
  • All of Hollywood, convinced that content was what mattered, jointly killed the linear TV model to ensure that all professionally-produced content was available on-demand, even as YouTube became the biggest streamer of all with user-generated content that is delivered through the exact same distribution channel (apps on a smart device) as the biggest blockbusters.
  • Services like Uber and Airbnb commoditized transportation and lodging to the individual driver or homeowner.

Apple is absent from this list, although the App Store has had an Aggregator effect on developers; the reason the company belongs on it, though, and why they were the only company that could make an ad that so perfectly captures this great flattening, is that they created the device on which all of these services operate. The prerequisite to the commoditization of everything is access to anything, thanks to the smartphone. “There’s an app for that” indeed.

This is what I mean when I say that Apple’s iPad ad hit the mark: the reason why I think the ad resonated so deeply is that it captured something deep in the gestalt that actually has very little to do with trumpets or guitars or bottles of paint; rather, thanks to the Internet — particularly the smartphone-denominated Internet — everything is an app.

The Bicycle for the Mind

The more tangible way to see how that iPad ad hit the mark is to play it in reverse:

This is without question the message that Apple was going for: this one device, thin as can be, contains musical instruments, an artist’s studio, an arcade machine, and more. It brings relationships without borders to life, complete with cute emoji. And that’s not wrong!

Indeed, it harkens back to one of Steve Jobs’ last keynotes, when he introduced the iPad 2. My favorite moment in that keynote — one of my favorite Steve Jobs keynote moments ever, in fact — was the introduction of GarageBand. You can watch the entire introduction and demo, but the part that stands out in my memory is Jobs — clearly sick, in retrospect — being visibly moved by what the company had just produced:

I’m blown away with this stuff. Playing your own instruments, or using the smart instruments, anyone can make music now, in something that’s this thick and weighs 1.3 pounds. It’s unbelievable. GarageBand for iPad. Great set of features — again, this is no toy. This is something you can really use for real work. This is something that, I cannot tell you, how many hours teenagers are going to spend making music with this, and teaching themselves about music with this.

Jobs wasn’t wrong: global hits have originated on GarageBand, and undoubtedly many more hours of (mostly terrible, if my personal experience is any indication) amateur experimentation. Why I think this demo was so personally meaningful for Jobs, though, is that not only was GarageBand about music, one of his deepest passions, but it was also a manifestation of his life’s work: creating a bicycle for the mind.

I remember reading an Article when I was about 12 years old, I think it might have been in Scientific American, where they measured the efficiency of locomotion for all these species on planet earth. How many kilocalories did they expend to get from point A to point B, and the condor won: it came in at the top of the list, surpassed everything else. And humans came in about a third of the way down the list, which was not such a great showing for the crown of creation.

But somebody there had the imagination to test the efficiency of a human riding a bicycle. Human riding a bicycle blew away the condor, all the way off the top of the list. And it made a really big impression on me that we humans are tool builders, and that we can fashion tools that amplify these inherent abilities that we have to spectacular magnitudes, and so for me a computer has always been a bicycle of the mind, something that takes us far beyond our inherent abilities.

I think we’re just at the early stages of this tool, very early stages, and we’ve come only a very short distance, and it’s still in its formation, but already we’ve seen enormous changes, but I think that’s nothing compared to what’s coming in the next 100 years.

In Jobs’ view of the world, teenagers the world over are potential musicians, who might not be able to afford a piano or guitar or trumpet; if, though, they can get an iPad — now even thinner and lighter! — they can have access to everything they need. In this view “There’s an app for that” is profoundly empowering.

After the Flattening

The duality of Apple’s ad speaks to the reality of technology: its impact is structural, and amoral. When I first started Stratechery I wrote a piece called Friction:

If there is a single phrase that describes the effect of the Internet, it is the elimination of friction. With the loss of friction, there is necessarily the loss of everything built on friction, including value, privacy, and livelihoods. And that’s only three examples! The Internet is pulling out the foundations of nearly every institution and social more that our society is built upon.

Count me with those who believe the Internet is on par with the industrial revolution, the full impact of which stretched over centuries. And it wasn’t all good. Like today, the industrial revolution included a period of time that saw many lose their jobs and a massive surge in inequality. It also lifted millions of others out of subsistence farming. Then again, it also propagated slavery, particularly in North America. The industrial revolution led to new monetary systems, and it created robber barons. Modern democracies sprouted from the industrial revolution, and so did fascism and communism. The quality of life of millions and millions was unimaginably improved, and millions and millions died in two unimaginably terrible wars.

Change is guaranteed, but the type of change is not; never is that more true than today. See, friction makes everything harder, both the good we can do, but also the unimaginably terrible. In our zeal to reduce friction and our eagerness to celebrate the good, we ought not lose sight of the potential bad.

Today that exhortation might run in the opposite direction: in our angst about the removal of specialness and our eagerness to criticize the bad, we ought not lose sight of the potential good.

Start with this site that you are reading: yes, the Internet commoditized content that was previously granted value by virtue of being bundled with a light manufacturing business (i.e. printing presses and delivery trucks), but it also created the opportunity for entirely new kinds of content predicated on reaching niche audiences that are only sustainable when the entire world is your market.

The same principle applies to every other form of content, from music to video to books to art; the extent to which being “special” meant being scarce is the extent to which the existence of “special” meant a constriction of opportunity. Moreover, that opportunity is not a function of privilege but rather consumer demand: the old powers may decry that their content is competing with everyone on the Internet, but they are only losing to the extent that consumers actually prefer to read or watch or listen to something else. Is this supposed to be a bad thing?

Moreover, this is just as much a feather in Apple’s cap as the commoditization of everything is a black mark: Apple creates devices — tools — that let everyone be a creator. Indeed, that is why the ad works in both directions: the flattening of everything means there has been a loss; the flattening of everything also means there is entirely new opportunity.

The AI Choice

One thing I do credit Apple for is not trying to erase the ad from the Internet — it’s still posted on CEO Tim Cook’s X account — because I think it’s important not just as a marker of what has happened over the last several years, but also of the choices facing us in the years ahead.

The last time I referenced Steve Jobs’ “Bicycle of the Mind” analogy was in 2018’s Tech’s Two Philosophies, where I contrasted Google and Facebook on one side, and Microsoft and Apple on the other: the former wanted to create products that did things for you; the latter products that let you do more things. This was a simplified characterization, to be sure, but, as I noted in that Article, it was also related to their traditional positions as Aggregators and platforms, respectively.

What is increasingly clear, though, is that Jobs’ prediction that future changes would be even more profound raises questions about the “bicycle for the mind” analogy itself: specifically, will AI be a bicycle that we control, or an unstoppable train to destinations unknown? To put it in the same terms as the ad, will human will and initiative be flattened, or expanded?

The route to the former seems clear, and maybe even the default: this is a world where a small number of entities “own” AI, and we use it — or are used by it — on their terms. This is the outcome being pushed by those obsessed with “safety”, and demanding regulation and reporting; that those advocates also seem to have a stake in today’s leading models seems strangely ignored.

The alternative — MKBHDs For Everything — means openness and commoditization. Yes, those words have downsides: they mean that the powers that be are not special, and sometimes that is something we lament, as I noted at the beginning of this Article. Our alternative, though, is not the gatekept world of the 20th century — we can’t go backwards — but one where the flattening is not the elimination of vitality but the tilling of the ground so that something — many things — new can be created.

Meta and Reasonable Doubt

This Article is available as a video essay on YouTube


Stop me if you’ve heard this one before. From Bloomberg:

Mark Zuckerberg is asking investors for patience again. Instead, they’re alarmed. After Meta Platforms Inc. revealed that it will spend billions of dollars more than expected this year — fueled by investments in artificial intelligence — the company’s chief executive officer did his best to soothe Wall Street. But the spending forecast, coupled with slower sales growth than anticipated, sent the shares tumbling as much as 16% in New York on Thursday morning, the biggest drop since October 2022.

18 months ago I created a meme for exactly this occasion:

The GTA meme about "Here we go again" as applied to Facebook

I posted that meme in Meta Myths, where I argued that Meta was in far better shape than investors realized:

  • Users were not deserting Facebook.
  • Instagram engagement was growing.
  • TikTok growth had leveled out.
  • Digital advertising was recovering from ATT.
  • Meta’s increase in capital expenditures — which we now know was mostly for Nvidia GPUs — was justified.

Notice, though, that the meme implies this happened more than once. Indeed, in 2018 I had written Facebook Lenses after another stock market meltdown:

  • Using a financial lens, Facebook revenue was in good shape but growing costs were a concern.
  • Using a product lens, Facebook was in very good shape given the growth opportunities available in its non-Facebook-app properties.
  • Using an advertising lens, Facebook was in very good shape given the quality of its infrastructure.
  • Using a strategic lens, Facebook’s moats were deeper than ever, thanks in part to regulation.
  • Using a “reason-to-exist” lens, Facebook was, as it had been from its founding, underrated by folks who didn’t understand how powerful digitizing the connection between friends and family was.

Given this history, you might think that I’m here to once again raise the Meta flag and declare investors insane; in fact, this time is different: I understand the market’s reaction and, at least partially, share its skepticism about Meta’s short to medium-term future. The big question is the long run.

Meta’s Short-Term Capex Costs

There was one consistent theme across the big tech earnings calls last week: spend, baby, spend! From Google’s earnings call:

With respect to CapEx, our reported CapEx in the first quarter was $12 billion, once again driven overwhelmingly by investment in our technical infrastructure with the largest component for servers followed by data centers. The significant year-on-year growth in CapEx in recent quarters reflects our confidence in the opportunities offered by AI across our business. Looking ahead, we expect quarterly CapEx throughout the year to be roughly at or above the Q1 level, keeping in mind that the timing of cash payments can cause variability in quarterly reported CapEx.

From Microsoft’s earnings call:

Capital expenditures, including finance leases, were $14 billion to support our cloud demand, inclusive of the need to scale our AI infrastructure. Cash paid for PP&E was $11 billion. Cash flow from operations was $31.9 billion, up 31%, driven by strong cloud billings and collections…We expect capital expenditures to increase materially on a sequential basis driven by cloud and AI infrastructure investments. As a reminder, there can be normal quarterly spend variability in the timing of our cloud infrastructure build-outs and the timing of finance leases. We continue to bring capacity online as we scale our AI investments with growing demand. Currently, near-term AI demand is a bit higher than our available capacity.

From Meta’s earnings call:

We anticipate our full-year 2024 capital expenditures will be in the range of $35-40 billion, increased from our prior range of $30-37 billion as we continue to accelerate our infrastructure investments to support our AI roadmap. While we are not providing guidance for years beyond 2024, we expect capex will continue to increase next year as we invest aggressively to support our ambitious AI research and product development efforts.

The market reception to Microsoft and Google, though, could not be more different than the reaction to Meta; from Bloomberg:

Microsoft Corp. and Google owner Alphabet Inc. sent a clear message to investors on Thursday: Our spending on artificial intelligence and cloud computing is paying off. The companies trounced Wall Street estimates with their latest quarterly results, lifted by a surge in cloud revenue — fueled in part by booming use of AI services. Alphabet shares surged as much as 12%, the biggest gain since July 2015, to their highest level ever. The rally pushed Alphabet’s valuation past $2 trillion. Microsoft rose as much as 3.5%.

The tech titans have been locked in a fierce battle for dominance in the field of artificial intelligence, with Microsoft joining forces with startup OpenAI to challenge Google’s two-decade stranglehold on internet search. But Thursday’s results showed there’s ample room for both companies to grow.

Microsoft is the best positioned to benefit from AI in the short-term: first, they have a cloud business in Azure that sells to enterprises eager to implement AI into their businesses; Azure grew 31% with 7 points of that growth coming from AI services.1 Second, they have an enterprise software business that can expand its average revenue per user by selling AI services; Microsoft didn’t report how many Copilot seat licenses they sold, but they did say it contributed to Office 365’s 15% growth in commercial revenue. These clear opportunities for monetization, along with the potential upside of AI generally, should and do make investors happy about the additional capex investment.

Google also clearly benefits from AI, particularly in terms of Google Cloud. I wrote a very positive overview of Google Cloud’s prospects earlier this month because the story is so clear: bringing Google’s massive infrastructure advantages to bear in the Cloud is a very straightforward proposition with very significant upside. Google can not only expand ARPU with existing customers, but also has a wedge to win new customers, who will then potentially shift the rest of their cloud spending to Google Cloud.

Google’s consumer story is a bit more complicated: there is obvious consumer utility in AI-powered search results, but giving an answer instead of a set of links is a challenge both to Google’s business model and to its ability to serve the entire market without controversy. Even despite those challenges, though, there are benefits in terms of improved recommendations, better ad targeting, generative advertisements, etc. On balance the increased investment is clearly worth it.

Meta doesn’t have as many clear applications in terms of short-term revenue opportunities: the company does not operate a cloud business, so no one is consuming — i.e. paying for — the use of Meta’s capex other than Meta itself. Meanwhile, the company’s consumer prospects are broadly similar to Google’s: yes, Meta can and is improving recommendations and ad targeting and implementing generative advertising, and also has the prospect of making click-to-message ads viable for businesses that can’t afford to pay for a human on the other end of a chat session. Still, the short-term upside is not nearly as clear as it is for Microsoft and Google.

Meta’s Advertising Cycle and the Mid-Term

The other big business opportunity Zuckerberg emphasized on the earnings call was Meta AI, which he compared to Stories and Reels:

You all know our product development playbook by this point. We release an early version of a product to a limited audience to gather feedback and start improving it, and then once we think it’s ready then we make it available to more people. That early release was last fall and with this release we’re now moving to that next growth phase of our playbook. We believe that Meta AI with Llama 3 is now the most intelligent AI assistant that you can freely use. And now that we have the superior quality product, we’re making it easier for lots of people to use it within WhatsApp, Messenger, Instagram, and Facebook.

But then what? Zuckerberg offered some vague ideas later in the call:

I do think that there will be an ability to have ads and paid content in Meta AI interactions over time as well as people being able to pay for, whether it’s bigger models or more compute or some of the premium features and things like that. But that’s all very early in fleshing out.

Stories and Reels were not complicated in this regard: sure, Stories ads needed to have a larger visual component than Feed ads, and Reels ads are better if they are video, but at the end of the day the ad unit is the same across all of Meta’s channels. Meta AI, on the other hand, is going to require a completely different approach. I’m not saying that Meta won’t figure it out — Google needs to experiment here as well, and one’s experimentation will likely help the other — but the long-term revenue opportunity is not nearly as clearcut as Zuckerberg is making it out to be.

What is clear is that figuring this out will take time, which is a concern given where Meta is in its advertising cycle. Long-time Stratechery subscribers know that I frequently reference this chart while reviewing Meta’s earnings:

Meta's various growth metrics

The most important thing to understand about this chart is that the growth in impressions is usually inversely correlated with the growth in price-per-ad, which makes intuitive sense: more impressions means more supply which, with an equivalent amount of demand, results in lower prices. The two big exceptions are related to Apple’s App Tracking Transparency changes: in 2022 the price-per-ad decelerated much more quickly than impressions grew, as Meta dealt with the loss of deterministic signal it suffered because of ATT; then, in 2023, the price-per-ad grew even as impressions grew as Meta reworked its targeting to operate probabilistically instead of deterministically.

Setting those two anomalies aside, there are two-and-a-half inversions of impression and price-per-ad growth rates on this chart:

  • In 2017 Meta saturated the Instagram feed with ads; this led impressions growth to drop and the price-per-ad to increase; then, in 2018, Instagram started to monetize Stories, leading to increased growth in impressions and corresponding decreases in price-per-ad growth.
  • In 2020 Meta saturated Instagram Stories; this once again led impressions growth to drop and the price-per-ad to increase; then, while COVID provided a boost in 2021, 2022 saw a significant increase in Reels monetization, leading to increased growth in impressions and a decrease in price-per-ad growth (which, as noted above, was made more extreme by ATT).
  • Since the middle of last year, Meta’s impressions growth has once again been dropping as Reels becomes saturated; this is leading to an increase in price-per-ad growth (although the lines have not yet crossed).

The most optimistic time for Meta’s advertising business is, counter-intuitively, when the price-per-ad is dropping, because that means that impressions are increasing. This means that Meta is creating new long-term revenue opportunities, even as its ads become cost competitive with more of its competitors; it’s also notable that this is the point when previous investor freak-outs have happened.
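This inverse relationship follows from simple arithmetic: ad revenue is impressions multiplied by price-per-ad, so the two growth rates compound. Here is a minimal sketch of that decomposition, using hypothetical growth figures (not Meta’s actual numbers):

```python
# Ad revenue = impressions x price-per-ad, so revenue growth compounds the
# two growth rates. All figures below are hypothetical illustrations.

def revenue_growth(impressions_growth: float, price_growth: float) -> float:
    """Combined revenue growth given growth in impressions and price-per-ad."""
    return (1 + impressions_growth) * (1 + price_growth) - 1

# Saturation phase: inventory growth slows, price-per-ad picks up the slack.
saturated = revenue_growth(0.05, 0.15)    # ≈ 0.21 (about 21% revenue growth)

# New-inventory phase (e.g. Stories or Reels ramping): impressions surge,
# price-per-ad falls, yet total revenue still grows at a similar rate.
expanding = revenue_growth(0.30, -0.08)   # ≈ 0.20 (about 20% revenue growth)

print(f"saturated: {saturated:.3f}, expanding: {expanding:.3f}")
```

In these terms, the chart is really about which factor is carrying revenue growth at any given moment: new inventory (bullish, because it creates room to grow and keeps prices competitive) or higher prices (less bullish, because it invites competitors).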

Notice, though, that this time is different; CFO Susan Li said on the earnings call:

One thing I’d share, for example, is that we actually grew conversions at a faster rate than we grew impressions over the course of this quarter. So we are — we’re expecting to — which basically suggests that our conversion rate is growing and is one of the ways in which our ads are becoming more performant.

This is true, but it also means this specific moment in time is a much less bullish one for Meta’s advertising business than past stock drops: impressions growth declining means that price-per-ad is the primary route for revenue growth, which will happen but will also open the door for more of Meta’s competitors. Yes, those future advertising opportunities that Zuckerberg talked about will probably lead to another inversion at some point, but not only are those opportunities uncertain as I noted, but they also are quite a ways in the future (and the bill for GPUs is today).

Meta’s Long-Term Prospects

The most interesting thing Zuckerberg said on the earnings call, meanwhile, was about the Metaverse and its relationship to AI:

In addition to our work on AI, our other long term focus is the metaverse. It’s been interesting to see how these two themes have come together.

This is clearest when you look at glasses. I used to think that AR glasses wouldn’t really be a mainstream product until we had full holographic displays — and I still think that will be awesome and is the mature state of the product. But now it seems pretty clear that there’s also a meaningful market for fashionable AI glasses without a display. Glasses are the ideal device for an AI assistant because you can let them see what you see and hear what you hear, so they have full context on what’s going on around you as they help you with whatever you’re trying to do. Our launch this week of Meta AI with Vision on the glasses is a good example where you can now ask questions about things you’re looking at.

One strategy dynamic that I’ve been reflecting on is that an increasing amount of our Reality Labs work is going towards serving our AI efforts. We currently report on our financials as if Family of Apps and Reality Labs were two completely separate businesses, but strategically I think of them as fundamentally the same business with the vision of Reality Labs to build the next generation of computing platforms in large part so that way we can build the best apps and experiences on top of them. Over time, we’ll need to find better ways to articulate the value that’s generated here across both segments so it doesn’t just seem like our hardware costs increase as our glasses ecosystem scales but all the value flows to a different segment.

I have been arguing for a couple of years that generative AI is going to be critical to making the Metaverse a compelling place to be, primarily in the context of VR; you can make a similar case for AR, particularly in terms of talking to your assistant. It’s interesting that Zuckerberg is making a similar argument, but backwards: instead of talking about AI costing money that accrues to the hardware division, he is talking about hardware costing money that accrues to the AI division.

Regardless, the broader point about AI and the metaverse being complements seems right to me, and in the long run the metaverse should be the internal Meta AI customer that serves a similar function as Microsoft or Google’s cloud customers in terms of necessitating and monetizing huge amounts of capacity. That, though, is a bit frightening: in Meta Myths I conceded that the metaverse might not amount to anything, but that it was immaterial to the swoon in Meta’s stock at the time. In this case it seems more material: yes, there are applications for AI in Meta’s products, but the real upside may depend on Zuckerberg’s other big bet paying off.

That, in the end, gets at the real question for Meta shareholders: do you trust Zuckerberg? There was another big Meta sell-off in early 2022, and I wrote this in an Update:

To that end, just as Facebook’s product changes are evidence that TikTok is real competition, today’s stock price drop is also evidence of the benefit of founder control. Meta could have delayed its response to TikTok until ATT worked its way through the system, but instead the company is fundamentally changing its products at the very moment its results are the most impacted by Apple’s changes. The easier decision, particularly for a manager, would have been to wait a quarter or two, when the comps would have been easier, and the excuses clearer, but founders have the freedom to prioritize existential risks over financial ones.

Of course they also have the freedom to spend $10 billion on a speculative bet like the Metaverse, an amount that will “increase meaningfully” in 2022; Meta continues to be first and foremost a bet on Zuckerberg.

I do think that continues to be a bet worth making: Meta met the challenges in the first paragraph — thanks in part to its last big capex increase — and is making the right strategic moves to make that second paragraph pay off, even as it doubles down on AI. These are big bets, though, and I understand reasonable doubt in the meantime.



  1. One thing to keep in mind with Microsoft’s reporting is that this is where OpenAI spending — via Azure credits — shows up on the earnings report 

Meta and Open

This Article is available as a video essay on YouTube


Apple released the Vision Pro on February 2; 12 days later Meta CEO Mark Zuckerberg delivered his verdict:

Alright guys, so I finally tried Apple’s Vision Pro. And you know, I have to say that before this, I expected that Quest would be the better value for most people since it’s really good and it’s like seven times less expensive. But after using it I don’t just think that Quest is the better value — I think the Quest is the better product, period.

You can watch the video for Zuckerberg’s full — and certainly biased! — take, but the pertinent section for this Article came towards the end:

The reality is that every generation of computing has an open and a closed model, and yeah, in mobile, Apple’s closed model won. But it’s not always that way. If you go back to the PC era, Microsoft’s open model was the winner, and in this next generation, Meta is going to be the open model, and I really want to make sure that the open model wins out again. The future is not yet written.

John Gruber asked on Daring Fireball:

At the end, he makes the case that each new generation of computing devices has an open alternative and a closed one from Apple. (It’s interesting to think that these rivalries might be best thought of not as closed-vs.-open, but as Apple-vs.-the-rest-of-the-industry.) I’m not quite sure where he’s going with that, though, because I don’t really see how my Quest 3 is any more “open” than my Vision Pro. Are they going to license the OS to other headset makers?

Cue Zuckerberg yesterday:

Some updates on the metaverse today. We are releasing Meta Horizon OS, our operating system that powers Quest virtual and mixed reality headsets, and we are partnering with some of the best hardware companies out there to design new headsets that are optimized for all the different ways that people use this tech.

Now, in every era of computing, there are always open and closed models. Apple’s closed model basically won out. Phones are tightly controlled and you’re kind of locked into what they’ll let you do. But it doesn’t have to be that way. In the PC era, the open model won out. You can do a lot more things, install mods. You got more diversity of hardware, software and more. So our goal is to make it so the open model defines the next generation of computing again with the metaverse, glasses and headsets. That’s why we’re releasing our operating systems so that more companies can build different things on it.

It’s natural to view this announcement as a reaction to the Vision Pro, or perhaps to Google’s upcoming AR announcement at Google I/O, which is rumored to include a new Samsung headset. However, I think that this sells Zuckerberg and Meta’s strategic acumen short: this is an obvious next step, of a piece with the company’s recent AI announcements, and a clear missing piece in the overall metaverse puzzle.

Meta’s Market

Any question of strategy starts with understanding your market, so what is Meta’s? This is a trickier question than you might think, particularly on the Internet. It’s a definition that has particularly vexed regulators, as I laid out in Regulators and Reality; after describing why the FTC’s extremely narrow definition of “personal social networking” — which excluded everything from Twitter to Reddit to LinkedIn to TikTok to YouTube as Facebook competitors — didn’t make sense, I explained:

The far bigger problem, though, is that everything I just wrote is meaningless, because everything listed above is a non-rivalrous digital service with zero marginal costs and zero transactional costs; users can and do use all of them at the same time. Indeed, the fact that all of these services can and do exist for the same users at the same time makes the case that Facebook’s market is in fact phenomenally competitive.

What, though, is Facebook competing for? Competition implies rivalry, that is, some asset that can only be consumed by one service to the exclusion of others, and the only rivalrous good in digital services is consumer time and attention. Users only have one set of eyes, and only 24 hours in a day, and every second spent with one service is a second not spent with another (although this isn’t technically true, since you could, say, listen to one while watching another while scrolling a third while responding to notifications from a fourth, fifth, and sixth). Note the percentages in this chart of platform usage:

Most American adults use multiple online services

The total is not 100, it is 372, because none of these services exclude usage of any of the others. And while Facebook is obviously doing well in terms of total users, TikTok in particular looms quite large when it comes to time, the only metric that matters:

Users spend more time on TikTok than other social media platforms

This, of course, is why all of these services, including Instagram, Snapchat, and YouTube, are trying to mimic TikTok as quickly as possible, which, last time I checked, is a competitive response, not a monopolistic one. You can even grant the argument that Facebook tried to corner the social media market — whatever that is — a decade ago, but you have to also admit that here in 2021 it is clear that they failed. Competition is the surest sign that there was not actually any anticompetitive conduct, and I don’t think it is the FTC’s job to hold Facebook management accountable for failing to achieve their alleged goals.

This idea that time and attention is the only scarce resource on the Internet, and thus the only market that truly matters, is what undergirds Netflix’s shift in reporting away from members and towards engagement; the company has been saying for years that its competitors were not just other streaming services, but everything from YouTube to Twitch streaming to video games to social media. That has always been true, if you squint, but on the Internet, where everything is only a click (or app) away, it’s tangible.

Meta’s Differentiation

Defining the relevant market as time and attention has surprising implications that even companies raised on the Internet, including Meta, sometimes miss. Indeed, there was a time when Meta might have agreed with the FTC’s definition, because the company made competitive decisions — both big successes and big mistakes — predicated on the assumption that your personal social network was the market that mattered.

Start with an Article I wrote in 2015 entitled Facebook and the Feed:

Zuckerberg is quite clear about what drives him; he wrote in Facebook’s S-1:

Facebook was not originally created to be a company. It was built to accomplish a social mission – to make the world more open and connected.

I am starting to wonder if these two ideas — company versus mission — might not be more in tension now than they have ever been in the past…I suspect that Zuckerberg for one subscribes to the first idea: that people find what others say inherently valuable, and that it is the access to that information that makes Facebook indispensable. Conveniently, this fits with his mission for the company. For my part, though, I’m not so sure. It’s just as possible that Facebook is compelling for the content it surfaces, regardless of who surfaces it. And, if the latter is the case, then Facebook’s engagement moat is less its network effects than it is that for almost a billion users Facebook is their most essential digital habit: their door to the Internet.

A year later and Facebook responded to what was then its most pressing threat, Snapchat, by putting Stories into Instagram. I wrote in The Audacity of Copying Well:

For all of Snapchat’s explosive growth, Instagram is still more than double the size, with far more penetration across multiple demographics and international users. Rather than launch a “Stories” app without the network that is the most fundamental feature of any app built on sharing, Facebook is leveraging one of their most valuable assets: Instagram’s 500 million users.

The results, at least anecdotally, speak for themselves: I’ve seen more Instagram stories in the last 24 hours than I have Snapchat ones. Of course a big part of this is the novelty aspect, which will fade, and I follow a lot more people on Instagram than I do on Snapchat. That last point, though, is, well, the point: I and my friends are not exactly Snapchat’s target demographic today, but for the service to reach its potential we will be eventually. Unless, of course, Instagram Stories ends up being good enough.

It was good enough — Instagram arrested Snapchat’s growth, while boosting its own engagement and user base — so score one for Zuckerberg, right? Instagram had a better network, so they won…or did they simply have more preexisting usage, which while based on a network, was actually incidental to it?

Fast forward a few years and Facebook’s big competitor was TikTok; I wrote in 2020’s The TikTok War:

All of this explains what makes TikTok such a breakthrough product. First, humans like video. Second, TikTok’s video creation tools were far more accessible and inspiring for non-professional videographers. The crucial missing piece, though, is that TikTok isn’t really a social network…by expanding the library of available video from those made by your network to any video made by anyone on the service, Douyin/TikTok leverages the sheer scale of user-generated content to generate far more compelling content than professionals could ever generate, and relies on its algorithms to ensure that users are only seeing the cream of the crop.

In a follow-up Update I explained why this was a blindspot for Facebook:

First, Facebook views itself first-and-foremost as a social network, so it is disinclined to see that as a liability. Second, that view was reinforced by the way in which Facebook took on Snapchat. The point of The Audacity of Copying Well is that Facebook leveraged Instagram’s social network to halt Snapchat’s growth, which only reinforced that the network was Facebook’s greatest asset, making the TikTok blindspot even larger.

I am, in the end, actually making the same point as the previous section: Meta’s relevant market is user time and attention; it follows that Meta’s differentiation is the fact it marshals so much user time and attention, and that said marshaling was achieved via social networking is interesting but not necessarily strategically relevant. Indeed, Instagram in the end simply copied TikTok, surfacing content from across the entire service rather than just your network, and did so to great success.

Llama 3

This is the appropriate framework to understand Meta’s AI strategy with its Llama family of models: Llama 3 was released last week, and like Llama 2, it is open source, or, perhaps more accurately, open weights (with the caveat that hyperscalers need a license to offer Llama as a managed model). I explained why open weights makes sense in a May 2023 Update predicting the Llama 2 release:

Meta isn’t selling its capabilities; rather, it sells a canvas for users to put whatever content they desire, and to consume the content created by other users. It follows, then, that Meta ought to be fairly agnostic about how and where that content is created; by extension, if Meta were to open source its content creation models, the most obvious place where the content of those models would be published is on Meta platforms. To put it another way, Meta’s entire business is predicated on content being a commodity; making creation into a commodity as well simply provides more grist for the mill.

What is compelling about this reality, and the reason I latched onto Zuckerberg’s comments in that call, is that Meta is uniquely positioned to overcome all of the limitations of open source, from training to verification to RLHF to data quality, precisely because the company’s business model doesn’t depend on having the best models, but simply on the world having a lot of them.

The best analogy for Meta’s approach with Llama is what the company did in the data center. Google had revolutionized data center design in the 2000s, pioneering the use of commodity hardware with software-defined functionality; Facebook didn’t have the scale to duplicate Google’s differentiation in 2011, so it went in the opposite direction and created the Open Compute Project. Zuckerberg explained what happened next in an interview with Dwarkesh Patel:

We don’t tend to open source our product. We don’t take the code for Instagram and make it open source. We take a lot of the low-level infrastructure and we make that open source. Probably the biggest one in our history was our Open Compute Project where we took the designs for all of our servers, network switches, and data centers, and made it open source and it ended up being super helpful. Although a lot of people can design servers the industry now standardized on our design, which meant that the supply chains basically all got built out around our design. So volumes went up, it got cheaper for everyone, and it saved us billions of dollars which was awesome.

Zuckerberg then made the analogy I’m referring to:

So there’s multiple ways where open source could be helpful for us. One is if people figure out how to run the models more cheaply. We’re going to be spending tens, or a hundred billion dollars or more over time on all this stuff. So if we can do that 10% more efficiently, we’re saving billions or tens of billions of dollars. That’s probably worth a lot by itself. Especially if there are other competitive models out there, it’s not like our thing is giving away some kind of crazy advantage.

It’s not just about having a better model, though: it’s about ensuring that Meta doesn’t have a dependency on any one model as well. Zuckerberg continued:

Here’s one analogy on this. One thing that I think generally sucks about the mobile ecosystem is that you have these two gatekeeper companies, Apple and Google, that can tell you what you’re allowed to build. There’s the economic version of that which is like when we build something and they just take a bunch of your money. But then there’s the qualitative version, which is actually what upsets me more. There’s a bunch of times when we’ve launched or wanted to launch features and Apple’s just like “nope, you’re not launching that.” That sucks, right? So the question is, are we set up for a world like that with AI? You’re going to get a handful of companies that run these closed models that are going to be in control of the APIs and therefore able to tell you what you can build?

For us I can say it is worth it to go build a model ourselves to make sure that we’re not in that position. I don’t want any of those other companies telling us what we can build. From an open source perspective, I think a lot of developers don’t want those companies telling them what they can build either. So the question is, what is the ecosystem that gets built out around that? What are interesting new things? How much does that improve our products? I think there are lots of cases where if this ends up being like our databases or caching systems or architecture, we’ll get valuable contributions from the community that will make our stuff better. Our app specific work that we do will then still be so differentiated that it won’t really matter. We’ll be able to do what we do. We’ll benefit and all the systems, ours and the communities’, will be better because it’s open source.

There is another analogy here, which is Google and Android; Bill Gurley wrote the definitive Android post in 2011 on his blog Above the Crowd:

Android, as well as Chrome and Chrome OS for that matter, are not “products” in the classic business sense. They have no plan to become their own “economic castles.” Rather they are very expensive and very aggressive “moats,” funded by the height and magnitude of Google’s castle. Google’s aim is defensive not offensive. They are not trying to make a profit on Android or Chrome. They want to take any layer that lives between themselves and the consumer and make it free (or even less than free). Because these layers are basically software products with no variable costs, this is a very viable defensive strategy. In essence, they are not just building a moat; Google is also scorching the earth for 250 miles around the outside of the castle to ensure no one can approach it. And best I can tell, they are doing a damn good job of it.

The positive economic impact of Android (and Chrome) is massive: Google pays Apple around $20 billion a year for default placement on about 30% of worldwide smartphones (and in Safari on Apple’s other platforms), which accounts for about 40% of the company’s overall spend on traffic acquisition costs across every platform and browser. Absent Android and Chrome, that total would almost certainly be much higher — if Google were even allowed to make a deal at all, which might not be the case if Microsoft controlled the rest of the market.

Metaverse Motivations

Android is also a natural segue to this news about Horizon OS. Meta is, like Google before it, a horizontal services company funded by advertising, which means it is incentivized to serve everyone, and to have no one between itself and its customers. And so Meta is, like Google before it, spending a huge amount of money to build a contender for what Zuckerberg believes is a future platform. It’s also fair to note that Meta is spending a lot more than the $40 billion Google has put into Android, but I think it’s reasonable: the risk — and opportunity — for Meta in the metaverse is even higher than the risk Google perceived in smartphones.

Back in 2013, when Facebook was facing the reality that mobile was ending its dreams of being a platform in its own right, I wrote Mobile Makes Facebook Just an App; That’s Great News:

First off, mobile apps own the entire (small) screen. You see nothing but the app that you are using at any one particular time. Secondly, mobile apps are just that: apps, not platforms. There is no need for Facebook to “reserve space” in their mobile apps for partners or other apps. That’s why my quote above is actually the bull case for Facebook.

Specifically, it’s better for an advertising business to not be a platform. There are certain roles and responsibilities a platform must bear with regards to the user experience, and many of these work against effective advertising. That’s why, for example, you don’t see any advertising in Android, despite the fact it’s built by the top advertising company in the world. A Facebook app owns the entire screen, and can use all of that screen for what benefits Facebook, and Facebook alone.

This optimism was certainly borne out by Facebook’s astronomical growth over the last decade, which has been almost entirely about exploiting this mobile advertising opportunity. It also, at first glance, calls into question the wisdom of building Horizon OS, given the platform advertising challenges I just detailed.

The reality, though, is that a headset is fundamentally different than a smartphone: the latter is something you hold in a hand, and which an app can monopolize; the former monopolizes your vision, reducing an app to a window. Consider this PR image of the Vision Pro:

Meta doesn't want to be just an app

This isn’t the only canvas for apps in the Vision Pro: apps can also take over the entire view and provide an immersive experience, and I can imagine that Meta will, when and if the Vision Pro gains meaningful marketshare, build just that; remember, Meta is a horizontal services company, and that means serving everyone. Ultimately, though, Zuckerberg sees the chief allure of the metaverse as presence, the sensation of being in the same place as other people enjoying the same experiences and apps; that, by extension, means owning the layer within which apps live — it means owning your entire vision.

Just as importantly — probably most importantly, to Zuckerberg — owning the OS means not being subject to Apple’s dictates about what can or cannot be built, or tracked, or monetized. And, as Zuckerberg noted in that interview, Meta isn’t particularly keen to subject itself to Google, either. It might be tempting for Meta’s investors to dismiss these concerns, but ATT should have focused minds on just how much this lack of control can cost.

Finally, not all devices will be platforms: Meta’s Ray-Ban glasses, for example, could never be “just an app”; what they could be is that much better of a product if Apple made public the same sort of private APIs it makes available to its own accessories. Meta isn’t going to fix its smartphone challenges in that regard, but it is one more motivation to do its own thing.

Horizon OS

Motivations, of course, aren’t enough: unlike AI models, where Meta wants a competitive model, but will achieve its strategic goals as long as a closed model doesn’t win, the company does actually need to win in the metaverse by controlling the most devices (assuming, of course, that the metaverse actually becomes a thing).

The first thing to note is that pursuing an Apple-like fully-integrated model would actually be bad for Meta’s larger goal, which, as a horizontal services company, is to reach the maximum number of people possible; there is a reason that the iPhone, by far the most dominant integrated product ever, still only has about 30% marketshare worldwide. Indeed, I would push back on Zuckerberg’s continued insistence that Apple “won” mobile: they certainly did as far as revenue and profits go, but the nature of their winning is not the sort of winning that Meta should aspire to; from a horizontal services company perspective, Android “won” because it has the most marketshare.

Second, the best route to achieving that marketshare is exactly what Meta announced: licensing their operating system to device manufacturers who are not only motivated to sell devices, but also provide the necessary R&D and disparate channels to develop headsets for a far wider array of customers and use cases.

Third, Meta does have the opportunity to do what pundits were sure Android would do to the iPhone: monopolize developer time and attention. A big reason why pundits were wrong about the iPhone, back when they were sure that it was doomed to disruption, was that they misunderstood history. I wrote in 2013’s The Truth About Windows Versus the Mac:

You’ve heard the phrase, “No one ever got fired for buying IBM.” That axiom in fact predates Microsoft or Apple, having originated during IBM’s System/360 heyday. But it had a powerful effect on the PC market. In the late 1970s and very early 1980s, a new breed of personal computers were appearing on the scene, including the Commodore, MITS Altair, Apple II, and more. Some employees were bringing them into the workplace, which major corporations found unacceptable, so IT departments asked IBM for something similar. After all, “No one ever got fired…”

IBM spun up a separate team in Florida to put together something they could sell IT departments. Pressed for time, the Florida team put together a microcomputer using mostly off-the-shelf components; IBM’s RISC processors and the OS they had under development were technically superior, but Intel had a CISC processor for sale immediately, and a new company called Microsoft said their OS — DOS — could be ready in six months. For the sake of expediency, IBM decided to go with Intel and Microsoft.

The rest, as they say, is history. The demand from corporations for IBM PCs was overwhelming, and DOS — and applications written for it — became entrenched. By the time the Mac appeared in 1984, the die had long since been cast. Ultimately, it would take Microsoft a decade to approach the Mac’s ease-of-use, but Windows’ DOS underpinnings and associated application library meant the Microsoft position was secure regardless.

Evans is correct: the market today for mobile phones is completely different than the old market for PCs. And, so is Apple’s starting position; iOS was the first modern smartphone platform, and has always had the app advantage. Neither was the case in PCs. The Mac didn’t lose to Windows; it failed to challenge an already-entrenched DOS. The lessons that can be drawn are minimal.

The headset market is the opposite of the smartphone market: Meta has been at this for a while, and has a much larger developer base than Apple does, particularly in terms of games. That advantage is not overwhelming in the way Microsoft’s DOS advantage was, to be sure, and I’m certainly not counting out Apple, but this also isn’t the smartphone era, when Apple had a multi-year head start.

To that end, it’s notable that Meta isn’t just licensing Horizon OS, it is also opening up the allowable app model. From the Oculus Developer blog:

We’re also significantly changing the way we manage the Meta Horizon Store. We’re shifting our model from two independent surfaces, Store and App Lab, to a single, unified, open storefront. This shift will happen in stages, first by making many App Lab titles available in a dedicated section of the Store, which will expand the opportunity for those titles to reach their audiences. In the future, new titles submitted will go directly to the Store, and App Lab will no longer be a separate distribution channel. All titles will still need to meet basic technical, content, and privacy requirements to publish to the Store. Titles are reviewed at submission and may be re-reviewed as they scale to more people. Like App Lab today, all titles that meet these requirements will be published.

App Lab apps are a middle ground between side-loading (which Horizon OS supports) and normal app store distribution: developers get the benefit of an App Store (easy install, upgrades, etc.) without having to go through full App Review; clear a basic bar and your app will be published. This allows for more experimentation.

What it does not allow for is new business models: App Lab apps, if they monetize, still must use Horizon OS’s in-app payment system. To that end, I think that Meta should consider going even further, and offering up a truly open store: granted, this would reduce the long-run monetization potential of Horizon OS, but it seems to me like that would be an excellent problem to have, given it would mean there was a long-run to monetize in the first place.

The Meaning of Open

This remaining limitation does get at the rather fuzzy meaning of “open”: in the case of Horizon OS, Meta means a licensing model for its OS and more freedom for developers relative to Apple; in the case of Llama, Meta means open weights and making models into a commodity; in the case of data centers, Meta means open specifications; and in the case of projects like React and PyTorch, Meta means true open source code.

Meta, in other words, is not taking some sort of philosophical stand: rather, they are clear-eyed about what their market is (time and attention), and their core differentiation (horizontal services that capture more time and attention than anyone); everything that matters in pursuit of that market and maintenance of that differentiation is worth investing in, and if “openness” means that investment goes further or performs better or handicaps a competitor, then Meta will be open.

I wrote a follow-up to this Article in this Daily Update.