One of the most paradoxical aspects of AI is that while it is hailed as the route to abundance, the most important financial outcomes have been about scarcity. The first and most obvious example has been Nvidia, whose valuation has skyrocketed while demand for its chips continues to outpace supply:
Another scarce resource that has come to the forefront over the last few months is AI talent; the people who are actually building and scaling the models are suddenly being paid more than professional athletes, and it makes sense:
The potential financial upside from “winning” in AI is enormous
Outputs are somewhat measurable
The work-to-be-done is the same across the various companies bidding for talent
It’s that last point that is fairly unique in tech history. While great programmers have always been in high demand, and there have been periods of intense competition in specific product spaces, over the past few decades tech companies have been franchises, wherein their market niches have been fairly differentiated: Google and search, Amazon and e-commerce, Meta and social media, Microsoft and business applications, Apple and devices, etc. This reality meant that the company mattered more than any one person, putting a cap on individual contributor salaries.
AI, at least to this point, is different: in the long run it seems likely that there will be dominant product companies in various niches, but as long as the game is foundational models, then everyone is in fact playing the same game, which elevates the bargaining power of the best players. It follows, then, that the team they play for is the team that pays the most, through some combination of money and mission; by extension, the teams that are destined to lose are the ones who can’t or won’t offer enough of either.
Apple’s Reluctance
It’s that last point I’m interested in; I’m not in a position to judge the value of any of the players changing teams, but the teams are worth examining. Consider Meta and Apple and the latest free agent signing; from Bloomberg:
Apple Inc.’s top executive in charge of artificial intelligence models is leaving for Meta Platforms Inc., another setback in the iPhone maker’s struggling AI efforts. Ruoming Pang, a distinguished engineer and manager in charge of the company’s Apple foundation models team, is departing, according to people with knowledge of the matter. Pang, who joined Apple from Alphabet Inc. in 2021, is the latest big hire for Meta’s new superintelligence group, said the people, who declined to be named discussing unannounced personnel moves.
To secure Pang, Meta offered a package worth tens of millions of dollars per year, the people said. Meta Chief Executive Officer Mark Zuckerberg has been on a hiring spree, bringing on major AI leaders including Scale AI’s Alexandr Wang, startup founder Daniel Gross and former GitHub CEO Nat Friedman with high compensation. Meta has also hired Yuanzhi Li, a researcher from OpenAI, and Anton Bakhtin, who worked on Claude at Anthropic PBC, according to other people with knowledge of the matter. Last month, it hired a slew of other OpenAI researchers. Meta, later on Monday, confirmed it is hiring Pang. Apple, Pang, OpenAI and Anthropic didn’t respond to requests for comment.
That Apple is losing AI researchers is a surprise only in that they had researchers worth hiring; after all, this is the company that already implicitly signaled its AI reluctance in terms of that other scarce resource: Nvidia chips. Again from Bloomberg:
Former Chief Financial Officer Luca Maestri’s conservative stance on buying GPUs, the specialized circuits essential to AI, hasn’t aged well either. Under Cook, Apple has used its market dominance and cash hoard to shape global supply chains for everything from semiconductors to the glass for smartphone screens. But demand for GPUs ended up overwhelming supply, and the company’s decision to buy them slowly — which was in line with its usual practice for emerging technologies it isn’t fully sold on — ended up backfiring. Apple watched as rivals such as Amazon and Microsoft Corp. bought much of the world’s supply. Fewer GPUs meant Apple’s AI models were trained all the more slowly. “You can’t magically summon up more GPUs when the competitors have already snapped them all up,” says someone on the AI team.
It may seem puzzling that the company that in its 2024 fiscal year generated $118 billion in free cash flow would be so cheap, but Apple’s reluctance makes sense from two perspectives.
First, the potential impact of AI on Apple’s business prospects, at least in the short term, is fairly small: we still need devices on which to access AI, and Apple continues to own the high end of devices (there is, of course, long-term concern about AI obviating the need for a smartphone, or meaningfully differentiating an alternative platform like Android). That significantly reduces the financial motivation for Apple to outspend other companies in terms of both GPUs and researchers.
Second, AI, at least in the more fantastical visions painted by companies like Anthropic, is arguably counter to Apple’s entire ethos as a company.
Tech’s Two Philosophies
It was AI, at least the pre-LLM version of it, that inspired me in 2018 to write about Tech’s Two Philosophies; one was represented by Google and Facebook (now Meta):
In Google’s view, computers help you get things done — and save you time — by doing things for you. Duplex was the most impressive example — a computer talking on the phone for you — but the general concept applied to many of Google’s other demonstrations, particularly those predicated on AI: Google Photos will not only sort and tag your photos, but now propose specific edits; Google News will find your news for you, and Maps will find you new restaurants and shops in your neighborhood. And, appropriately enough, the keynote closed with a presentation from Waymo, which will drive you…
Zuckerberg, as so often seems to be the case with Facebook, comes across as a somewhat more fervent and definitely more creepy version of Google: not only does Facebook want to do things for you, it wants to do things its chief executive explicitly says would not be done otherwise. The Messianic fervor that seems to have overtaken Zuckerberg in the last year, though, simply means that Facebook has adopted a more extreme version of the same philosophy that guides Google: computers doing things for people.
The other philosophy was represented by Apple and Microsoft:
Earlier this week, while delivering Microsoft’s Build conference keynote, CEO Satya Nadella struck a very different tone…This is technology’s second philosophy, and it is orthogonal to the other: the expectation is not that the computer does your work for you, but rather that the computer enables you to do your work better and more efficiently. And, with this philosophy, comes a different take on responsibility. Pichai, in the opening of Google’s keynote, acknowledged that “we feel a deep sense of responsibility to get this right”, but inherent in that statement is the centrality of Google generally and the direct culpability of its managers. Nadella, on the other hand, insists that responsibility lies with the tech industry collectively, and all of us who seek to leverage it individually.
This second philosophy, that computers are an aid to humans, not their replacement, is the older of the two; its greatest proponent — prophet, if you will — was Microsoft’s greatest rival, and his analogy of choice was, coincidentally enough, about transportation as well. Not a car, but a bicycle:
I remember reading an article when I was about 12 years old, I think it might have been in Scientific American, where they measured the efficiency of locomotion for all these species on planet earth, how many kilocalories did they expend to get from point A to point B, and the condor came in at the top of the list, surpassed everything else, and humans came in about a third of the way down the list, which was not such a great showing for the crown of creation.
But somebody there had the imagination to test the efficiency of a human riding a bicycle, and a human riding a bicycle blew away the condor all the way off the top of the list. And it made a really big impression on me that we humans are tool builders, and that we can fashion tools that amplify these inherent abilities that we have to spectacular magnitudes. And so for me a computer has always been a bicycle of the mind, something that takes us far beyond our inherent abilities. I think we’re just at the early stages of this tool, very early stages, and we’ve come only a very short distance, and it’s still in its formation, but already we’ve seen enormous changes. I think that’s nothing compared to what’s coming in the next 100 years.
We are approximately forty years on from that clip, and Steve Jobs’ prediction that enormous changes were still to come is obviously prescient: mobile and the Internet have completely transformed the world, and AI is poised to make those impacts look like peanuts. What I’m interested in in the context of this Article, however, is the interplay between business opportunity — or risk — and philosophy. Apple’s position is here:
In this view the company’s conservatism makes sense: Apple doesn’t quite see the upside of AI for their business (and isn’t overly concerned about the downsides), and its bias towards tools means that AI apps on iPhones are sufficient; Apple might be an increasingly frustrating platform steward, but they are at their core a platform company, and apps on their platform are delivering Apple users AI tools.
This same framework also explains Meta’s aggressiveness. First, the opportunity is huge, as I documented last fall in Meta’s AI Abundance (and, for good measure, there is risk as well, as time — the ultimate scarcity for an advertising-based business — is spent using AI). Second, Meta’s philosophy is that computers do things for you:
Given this graph, is it any surprise that Meta hired away Apple’s top AI talent?
I’m Feeling Lucky
Another way to think about how companies are approaching AI is through the late Professor Clayton Christensen’s discussion around sustaining versus disruptive innovation. From an Update last month after the news of Meta’s hiring spree first started making waves:
The other reason to believe in Meta versus Google comes down to the difference between disruptive and sustaining innovations. The late Professor Clayton Christensen described the difference in The Innovator’s Dilemma:
Most new technologies foster improved product performance. I call these sustaining technologies. Some sustaining technologies can be discontinuous or radical in character, while others are of an incremental nature. What all sustaining technologies have in common is that they improve the performance of established products, along the dimensions of performance that mainstream customers in major markets have historically valued. Most technological advances in a given industry are sustaining in character. An important finding revealed in this book is that rarely have even the most radically difficult sustaining technologies precipitated the failure of leading firms.
Occasionally, however, disruptive technologies emerge: innovations that result in worse product performance, at least in the near-term. Ironically, in each of the instances studied in this book, it was disruptive technology that precipitated the leading firms’ failure. Disruptive technologies bring to a market a very different value proposition than had been available previously. Generally, disruptive technologies underperform established products in mainstream markets. But they have other features that a few fringe (and generally new) customers value. Products based on disruptive technologies are typically cheaper, simpler, smaller, and, frequently, more convenient to use.
The question of whether generative AI is a sustaining or disruptive innovation for Google remains uncertain two years after I raised it. Obviously Google has tremendous AI capabilities both in terms of infrastructure and research, and generative AI is a sustaining innovation for its display advertising business and its cloud business; at the same time, the long-term questions around search monetization remain as pertinent as ever.
Meta, however, does not have a search business to potentially disrupt, and it has a whole host of ways to leverage generative AI across its business; for Zuckerberg and company I think that AI is absolutely a sustaining technology, which is why it ultimately makes sense to spend whatever is necessary to get the company moving in the right direction.
The problem with this analysis is the Google part: how do you square the idea that AI is disruptive to Google with the fact that they are investing just as heavily as everyone else, and in fact started far earlier than everyone else? I think the answer goes back to Google’s founding, and the “I’m Feeling Lucky” button:
While that button is now gone from Google.com, I don’t think it was an accident that it persisted long after it ceased to be usable (instant search results meant that by 2010 you didn’t even have a chance to click it); “I’m Feeling Lucky” was a statement of purpose. From 2016’s Google and the Limits of Strategy:
In yesterday’s keynote, Google CEO Sundar Pichai, after a recounting of tech history that emphasized the PC-Web-Mobile epochs I described in late 2014, declared that we are moving from a mobile-first world to an AI-first one; that was the context for the introduction of the Google Assistant.
It was a year prior to the aforementioned iOS 6 that Apple first introduced the idea of an assistant in the guise of Siri; for the first time you could (theoretically) compute by voice. It didn’t work very well at first (arguably it still doesn’t), but the implications for computing generally and Google specifically were profound: voice interaction both expanded where computing could be done, from situations in which you could devote your eyes and hands to your device to effectively everywhere, even as it constrained what you could do. An assistant has to be far more proactive than, for example, a search results page; it’s not enough to present possible answers: rather, an assistant needs to give the right answer.
This is a welcome shift for Google the technology; from the beginning the search engine has included an “I’m Feeling Lucky” button, so confident was Google founder Larry Page that the search engine could deliver you the exact result you wanted, and while yesterday’s Google Assistant demos were canned, the results, particularly when it came to contextual awareness, were far more impressive than the other assistants on the market. More broadly, few dispute that Google is a clear leader when it comes to the artificial intelligence and machine learning that underlie their assistant.
The problem — apparent even then — was the conflict with Google’s business model:
A business, though, is about more than technology, and Google has two significant shortcomings when it comes to assistants in particular. First, as I explained after this year’s Google I/O, the company has a go-to-market gap: assistants are only useful if they are available, which in the case of hundreds of millions of iOS users means downloading and using a separate app (or building the sort of experience that, like Facebook, users will willingly spend extensive amounts of time in).
Secondly, though, Google has a business-model problem: the “I’m Feeling Lucky” button guaranteed that the search in question would not make Google any money. After all, if a user doesn’t have to choose from search results, said user also doesn’t have the opportunity to click an ad, thus choosing the winner of the competition Google created between its advertisers for user attention. Google Assistant has the exact same problem: where do the ads go?
What I articulated in that Article was Google’s position on this graph:
AI is the ultimate manifestation of “I’m Feeling Lucky”; Google has been pursuing AI because that is why Page and Brin started the company in the first place; business models matter, but they aren’t dispositive, and while that may mean short-term difficulties for Google, it is a reason to be optimistic that the company will figure out AI anyways.
Microsoft, OpenAI, and Anthropic
Frameworks like this are useful, but not fully explanatory; I think this particular one goes a long way towards contextualizing the actions of Apple, Meta, and Google, but is much more speculative for some other relevant AI players. Consider Microsoft, which I would place here:
Microsoft doesn’t have any foundational models of note, but it has invested heavily in OpenAI; its most important AI products are its various Copilots, which are indeed a bet on the “tool” philosophy. The question, as I laid out last year in Enterprise Philosophy and the First Wave of AI, is whether rank-and-file employees want Microsoft’s tools (and whether the per-seat licensing model becomes threatened by AI eliminating jobs):
Notice, though, how this aligned with the Apple and Microsoft philosophy of building tools: tools are meant to be used, but they take volition to maximize their utility. This, I think, is a challenge when it comes to Copilot usage: even before Copilot came out employees with initiative were figuring out how to use other AI tools to do their work more effectively. The idea of Copilot is that you can have an even better AI tool — thanks to the fact it has integrated the information in the “Microsoft Graph” — and make it widely available to your workforce to make that workforce more productive.
To put it another way, the real challenge for Copilot is that it is a change management problem: it’s one thing to charge $30/month on a per-seat basis to make an amazing new way to work available to all of your employees; it’s another thing entirely — a much more difficult thing — to get all of your employees to change the way they work in order to benefit from your investment, and to make Copilot Pages the “new artifact for the AI age”, in line with the spreadsheet in the personal computer age.
This tension explains the anecdotes in this Bloomberg article last month:
OpenAI’s nascent strength in the enterprise market is giving its partner and biggest investor indigestion. Microsoft salespeople describe being caught flatfooted at a time when they’re under pressure to get Copilot into as many customers’ hands as possible. The behind-the-scenes dogfight is complicating an already fraught relationship between Microsoft and OpenAI…It’s unclear whether OpenAI’s momentum with corporations will continue, but the company recently said it has 3 million paying business users, a 50% jump from just a few months earlier. A Microsoft spokesperson said Copilot is used by 70% of the Fortune 500 and paid users have tripled compared with this time last year…
This story is based on conversations with more than two dozen customers and salespeople, many of them Microsoft employees. Most of these people asked not to be named in order to speak candidly about the competition between Microsoft and OpenAI. Both companies are essentially pitching the same thing: AI assistants that can handle onerous tasks — researching and writing; analyzing data — potentially letting office workers focus on thornier challenges. Since both chatbots are largely based on the same OpenAI models, Microsoft’s salesforce has struggled to differentiate Copilot from the much better-known ChatGPT, according to people familiar with the situation.
As long as AI usage relies on employee volition, ChatGPT has the advantage; what is interesting about this observation, however, is that it shows that OpenAI is actually in the same position as Microsoft:
This, by extension, explains why Anthropic is different; the other leading independent foundational lab is clearly focused on agents, not chatbots, i.e. AI that does stuff for you, instead of a tool. Consider the contrast between Cursor and Claude Code: Cursor is an integrated development environment (IDE) that provides the best possible UI for AI-augmented programming; Claude Code, on the other hand, barely bothers with a UI at all. It runs in the terminal, which people put up with because it is the best at one-shotting outputs; this X thread was illuminating:
More generally, I wrote in an Update after the release of Claude 4, which was heavily focused on agentic workloads:
This, by extension, means that Anthropic’s goal is what I wrote about in last fall’s Enterprise Philosophy and the First Wave of AI:
Computing didn’t start with the personal computer, but rather with the replacement of the back office. Or, to put it in rather more dire terms, the initial value in computing wasn’t created by helping Boomers do their job more efficiently, but rather by replacing entire swathes of them completely…Agents aren’t copilots; they are replacements. They do work in place of humans — think call centers and the like, to start — and they have all of the advantages of software: always available, and scalable up-and-down with demand…
Benioff isn’t talking about making employees more productive, but rather companies; the verb that applies to employees is “augmented”, which sounds much nicer than “replaced”; the ultimate goal is stated as well: business results. That right there is tech’s third philosophy: improving the bottom line for large enterprises.
Notice how well this framing applies to the mainframe wave of computing: accounting and ERP software made companies more productive and drove positive business results; the employees that were “augmented” were managers who got far more accurate reports much more quickly, while the employees who used to do that work were replaced. Critically, the decision about whether or not to make this change did not depend on rank-and-file employees changing how they worked, but for executives to decide to take the plunge.
This strikes me as a very worthwhile goal, at least from a business perspective. OpenAI is busy owning the consumer space, while Google and its best-in-class infrastructure and leading models struggles with product; Anthropic’s task is to build the best agent product in the world, including not just state-of-the-art models but all of the deterministic computing scaffolding that actually makes them replacement-level workers. After all, Anthropic’s API pricing may look expensive relative to Google, but it looks very cheap relative to a human salary.
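The economics behind that claim are straightforward back-of-envelope arithmetic; every figure in the sketch below (the per-token prices, the daily token volumes, the working days, the salary) is an illustrative assumption for the sake of the comparison, not Anthropic’s actual pricing or a measured workload:

```python
# Illustrative back-of-envelope comparison of agent API costs vs. a human
# salary. All numbers are assumptions, not real pricing or measurements.
INPUT_PRICE_PER_M = 3.00    # assumed $ per million input tokens
OUTPUT_PRICE_PER_M = 15.00  # assumed $ per million output tokens

# Assume an agent consumes 2M input and 0.5M output tokens per working day.
daily_cost = 2.0 * INPUT_PRICE_PER_M + 0.5 * OUTPUT_PRICE_PER_M
annual_cost = daily_cost * 250  # ~250 working days per year

human_salary = 80_000  # assumed fully-loaded annual cost of one worker
print(f"Agent: ~${annual_cost:,.0f}/yr vs human: ~${human_salary:,}/yr")
# → Agent: ~$3,375/yr vs human: ~$80,000/yr
```

Even if the assumed token volumes are off by an order of magnitude, the gap between the two figures is large enough that the comparison holds; that is the sense in which API pricing that looks expensive next to Google’s looks very cheap next to a salary.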
That means that Anthropic shares the upper-right quadrant with Meta:
Again, this is just one framework; there are others. Moreover, the boundaries are fuzzy. OpenAI is working on agentic workloads, for example, and the hyperscalers all benefit from more AI usage, whether user- or agent-driven; Google, meanwhile, is rapidly evolving Search to incorporate generative AI.
At the same time, to go back to the talent question, I don’t think it’s a surprise that Meta appears to be picking off more researchers from OpenAI than from Anthropic: my suspicion is that, to the extent mission is a motivator, an AI researcher is more likely to be enticed by the idea of computers doing everything than by the idea of merely augmenting humans. And, by extension, the incumbent tool-makers may have no choice but to partner with the true believers.