The Coming AI Crash: Faking Intelligence Versus Real Thinking

Yesterday's mini-crash in AI stocks is not surprising. AI-related stocks have been priced to perfection, reflecting the view that AI is a revolution on par with the industrial revolution or the transition from horse to combustion engine. But the crash is just the first symptom of a greater problem, namely that the technology does not live up to the hype. Large language models are impenetrable black boxes by their very nature: they are nonlinear statistical models with a huge number of parameters, and it is that huge number of parameters that makes them power-hungry beasts requiring GPUs to estimate. For details on the crash itself, read https://thetechcapital.com/tech-stocks-tumble-as-deepseek-triggers-1-trillion-market-crash/.

The term 'training an AI model' means exactly the same thing as estimating a large nonlinear statistical model like the one pictured below. I am a time series statistical analyst by training, so these models are familiar to me. Large language models are gigantic versions of the relatively simple neural net example below. Because they are black boxes, no one knows how or why they work, or what their weaknesses and strengths are. Nonlinear models with enough parameters can approximate any mathematical function to arbitrarily high precision, which makes them great for summarizing data: estimating a model on all known human-language references to lemon trees can provide a useful summary of what the human race knows about lemon trees.
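To make the point concrete, here is a minimal sketch in Python (using NumPy; the data, network size, and learning rate are all illustrative) of what 'training' amounts to: fitting a one-hidden-layer neural net to noisy data by gradient descent, which is nothing more than iterative least-squares estimation of the parameters.

    # A minimal sketch (NumPy; data and network size are illustrative) of
    # "training" as statistical estimation: fit y ~ W2*tanh(x*W1 + b1) + b2
    # by gradient descent on a least-squares criterion.
    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(-2, 2, size=(200, 1))            # inputs
    y = np.sin(x) + 0.1 * rng.normal(size=x.shape)   # noisy target

    H = 8                                  # hidden units
    W1 = 0.5 * rng.normal(size=(1, H))     # parameters to be estimated
    b1 = np.zeros(H)
    W2 = 0.5 * rng.normal(size=(H, 1))
    b2 = np.zeros(1)
    lr = 0.05                              # learning rate

    for step in range(2000):
        h = np.tanh(x @ W1 + b1)           # hidden layer, shape (200, H)
        pred = h @ W2 + b2                 # model output, shape (200, 1)
        err = pred - y
        loss = np.mean(err ** 2)           # least-squares criterion

        # Gradients of the loss with respect to each parameter.
        g = 2 * err / len(x)
        gW2 = h.T @ g
        gb2 = g.sum(axis=0)
        gh = (g @ W2.T) * (1 - h ** 2)     # tanh derivative
        gW1 = x.T @ gh
        gb1 = gh.sum(axis=0)

        # Gradient-descent update: each step refines the estimates.
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2

    print(f"final mean squared error: {loss:.4f}")

Scale the hidden layer up to billions of units and the data up to much of the internet and you have, conceptually, a large language model; the estimation procedure is the same, which is why the GPUs are needed.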

But as people have discovered, the models fail to solve simple riddles they have not encountered before, because they are just data summaries. ChatGPT has failed to solve this riddle: Sally has two sisters. Her mom is Florence. How many daughters does Florence have? 😃 (The answer is three: Sally plus her two sisters.) So my concern is that these AI models, into which vast sums are being poured, are not really thinking machines, but just statistical models good at sniffing out relationships and finding patterns. Another example of a riddle stumping a massively expensive AI program like ChatGPT is shown below.
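Readers can reproduce this kind of test themselves. Below is a minimal sketch assuming the official openai Python package (version 1.x or later) and an OPENAI_API_KEY environment variable; the model name is illustrative, and answers will vary from run to run.

    # A minimal sketch, assuming the `openai` Python package (>= 1.0) and
    # an OPENAI_API_KEY environment variable; model name is illustrative.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    riddle = ("Sally has two sisters. Her mom is Florence. "
              "How many daughters does Florence have?")

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": riddle}],
    )
    print(response.choices[0].message.content)  # correct answer: three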

If I am correct, then an excessive amount of money is being poured into this line of research, and that means a crash is inevitable, one that will take down related infrastructure such as data centres and even some subsea cable investments. Large language models have limited reasoning power, so little in fact that they appear to ape intelligence rather than demonstrate real intelligence. Intelligence involves being able to infer new relationships.


[Figure: Simple Neural Net AI Model]

[Figure: Example of a Riddle That ChatGPT Could Not Solve]

