Early Signs Of An AI Data Center Slowdown & Probable Crash

A few weeks ago Microsoft's CEO warned that much of the AI enthusiasm amounts to hype, and he is skeptical about artificial general intelligence. He has a point. Microsoft invested $12 billion in OpenAI but has seen little revenue from integrating OpenAI's models into the AI services offered on its Azure cloud platform. Microsoft has also abandoned several site options (leases and land purchases) for new data centers. Just a coincidence? Probably not. Total planned and announced global spending on AI data center infrastructure easily exceeds a trillion dollars, yet cloud revenue growth is slowing across providers, not accelerating. AWS has gone from 30% to 40% annual growth down to 10% to 20% in recent years, and a similar slowdown has hit Microsoft. Yet even as revenue growth slows, AI capex has ramped up. The likely result is a crash, with abandoned assets and cancelled projects: demand is not materializing, power is in severe shortage almost everywhere, and suitable sites are limited in regions like Southeast Asia.

At the heart of the problem is the limited nature of current AI. ChatGPT is a large language model, which really means it is a nonlinear statistical model with a huge number of parameters estimated (trained) from examples of human language. GPT-4 has 1.8 trillion parameters. It mimics human conversation quite well, but it often fails simple riddles, suggesting it cannot really reason. Sally has one sister, Alice. How many daughters does Sally's mom have? Most children can answer "two," yet incredibly expensive LLMs regularly fail this riddle. There are no logic rules built into this so-called new AI: a large language model like ChatGPT has no logic or reasoning structure. Google's CEO has admitted that LLMs are essentially a statistical black box. Nor do they exhibit a real ability to distinguish truth from falsity; an LLM trained on Donald Trump's statements would assert mostly falsehoods. In contrast, consider highly successful chess engine software. The best human chess players are no match for the best chess engines. Indeed, chess engines now have their own tournaments, and the level of play and ingenuity far exceeds our mortal brains. But chess engines are not statistical black boxes. They are logic machines whose algorithms have the advantage of a clearly defined parameter space and rules: 64 squares and the permissible moves of the pieces.
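To make the contrast concrete, here is a minimal Python sketch of the two kinds of machine discussed above. It is purely illustrative: the vocabulary, sizes, and random weights are my own assumptions, not any real model. The first part is a tiny nonlinear statistical next-token predictor, the thing an LLM scales up to trillions of parameters; the second is a rule-based legality check of the sort a chess engine is built from.

```python
# Illustrative sketch only: toy sizes, random "trained" weights, no real model.
import numpy as np

# --- 1. A toy nonlinear statistical language model -------------------
# An LLM is, at heart, a function next_token = f(context; parameters),
# where the parameters are fitted to text examples. There is no logic
# engine inside, only learned weights and a nonlinearity.
rng = np.random.default_rng(0)
VOCAB = ["sally", "alice", "sister", "daughter", "mom", "one", "two", "?"]
V, D = len(VOCAB), 16                    # vocabulary size, hidden width

W_embed = rng.normal(size=(V, D))        # stand-in "trained" parameters
W_hidden = rng.normal(size=(D, D))
W_out = rng.normal(size=(D, V))

def next_token_probs(context_ids):
    """Statistical prediction: average embeddings, apply a nonlinearity,
    project back to the vocabulary, softmax. No rules, no reasoning."""
    h = np.tanh(W_embed[context_ids].mean(axis=0) @ W_hidden)
    logits = h @ W_out
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

context = [VOCAB.index(w) for w in ["sally", "sister", "alice", "mom", "?"]]
probs = next_token_probs(context)
print("LLM-style guess:", VOCAB[int(probs.argmax())])  # whatever the weights say

# --- 2. A toy logic machine -------------------------------------------
# A chess engine works in a fully specified space: 64 squares and the
# legal moves of the pieces. Legality is decided by rules, not statistics.
def rook_move_is_legal(src, dst):
    """A rook moves along a rank or a file; anything else is illegal."""
    (r1, c1), (r2, c2) = src, dst
    return (r1 == r2) != (c1 == c2)      # exactly one coordinate changes

print(rook_move_is_legal((0, 0), (0, 7)))   # True  - along the first rank
print(rook_move_is_legal((0, 0), (3, 3)))   # False - diagonal
```

The statistical model's answer depends entirely on its fitted weights; the rule-based check is correct by construction, which is the difference between a black box and a logic machine.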

I expect 2025 will be the year when AI investment slows, with signs of a crash, in the form of distressed assets and companies, emerging in the autumn. Below is a table of the nonlinear statistical models behind the fancy new AI that OpenAI's CEO proclaims will lead to Transcendence in a year or two. LLMs are simply bigger versions of these models.

Table of Nonlinear Statistical Models Used in AI

