Switching from grand theory to the nitty-gritty of business models: once you're committed to spending eye-watering amounts of money, as the hyperscalers are, it makes sense to look for efficiencies that can cut that spend.
One obvious solution: own the entire stack, from chips to models to data centers to distribution. Google is doing that quite well, as today's Daily Planet argues, and that stack ownership might help it dominate this race in the long run, despite being written off not long ago.
The battle between Google and Microsoft in the AI business revolves around two contrasting strategies. Microsoft has embraced a partnership model, collaborating closely with OpenAI, the creator of ChatGPT. This approach lets Microsoft lean on OpenAI's cutting-edge AI models while supplying flexible cloud computing resources in return. By loosening its grip on OpenAI and allowing it to use multiple cloud partners, Microsoft has fostered innovation and agility. The mix-and-match strategy extends to chipmakers such as Nvidia, which supply the GPUs powering AI workloads.
In contrast, Google has pursued a vertically integrated model, designing its own hardware, the tensor processing unit (TPU), and developing AI models in-house through Google DeepMind. This end-to-end control aims to optimize efficiency and performance, with TPUs promising higher energy efficiency and a lower cost per AI query than general-purpose GPUs. Google's approach has gained momentum, adding $1 trillion to its market value in four months and even winning over rivals as customers. Its AI enhancements have also strengthened its core search business and cloud services.
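Why does cost per query matter so much? A back-of-envelope sketch in Python makes the point. Every number below is a hypothetical placeholder, not a real TPU or GPU benchmark, and the cost model itself is my own simplification (electricity plus amortized hardware, divided by throughput), not anything Google publishes.

```python
# Back-of-envelope cost-per-query model. Every number and name here is
# a hypothetical placeholder for illustration, not a real benchmark.

def cost_per_million_queries(chip_power_watts: float,
                             energy_price_per_kwh: float,
                             queries_per_second: float,
                             chip_hourly_capex: float) -> float:
    """Rough cost of serving one million AI queries on one accelerator."""
    # Electricity cost for one hour of operation.
    energy_cost_per_hour = (chip_power_watts / 1000) * energy_price_per_kwh
    # Total hourly cost: electricity plus amortized hardware spend.
    hourly_cost = energy_cost_per_hour + chip_hourly_capex
    # Queries the chip serves in that hour.
    queries_per_hour = queries_per_second * 3600
    return 1_000_000 * hourly_cost / queries_per_hour

# Hypothetical comparison: an in-house chip that is cheaper to build and
# draws less power vs. a bought-in GPU at the same serving throughput.
in_house = cost_per_million_queries(chip_power_watts=400,
                                    energy_price_per_kwh=0.08,
                                    queries_per_second=50,
                                    chip_hourly_capex=1.00)
bought_in = cost_per_million_queries(chip_power_watts=700,
                                     energy_price_per_kwh=0.08,
                                     queries_per_second=50,
                                     chip_hourly_capex=2.50)
print(f"in-house:  ${in_house:.2f} per million queries")   # ~$5.73
print(f"bought-in: ${bought_in:.2f} per million queries")  # ~$14.20
```

Even with these made-up figures, a cheaper, more power-efficient chip cuts the per-query bill by more than half, and at billions of queries a day that is exactly the kind of efficiency the hyperscalers are chasing.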
Who's going to win? We don't know. But tomorrow I will cover how BYD, the Chinese EV maker, is running the same vertically integrated playbook in a hypercompetitive market.