Is AI Smarter than a 1-Year-Old? Can You Train AI with AI?

by Mike Shedlock

The answers, as most will quickly guess, are no and no. So, how much of the AI hype is real?

AI Can’t Teach AI New Tricks

Wall Street Journal writer Andy Kessler has a great observation today: AI Can’t Teach AI New Tricks

OpenAI just raised $6.6 billion, the largest venture-capital funding of all time, at a $157 billion valuation. Oh, and the company will lose $5 billion this year and is projecting $44 billion in losses through 2029.

We are bombarded with breathless press releases. Anthropic CEO Dario Amodei predicts that “powerful AI” will surpass human intelligence in many areas by 2026. OpenAI claims its latest models are “designed to spend more time thinking before they respond. They can reason through complex tasks and solve harder problems.” Thinking? Reasoning? Will AI become humanlike? Conscious?

I hate to be the one to throw it, but here’s some cold water on the AI hype cycle:

Moravec’s paradox: Babies are smarter than AI. In 1988 robotics researcher Hans Moravec noted that “It is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.” Most innate skills are built into our DNA, and many of them are unconscious.

AI has a long way to go. Last week, Apple AI researchers seemed to agree, noting that “current LLMs are not capable of genuine logical reasoning; instead, they attempt to replicate the reasoning steps observed in their training data.”

Linguistic apocalypse paradox: As I’ve noted before, AI smarts come from human logic embedded between words and sentences. Large language models need human words as input to become more advanced. But some researchers believe we’ll run out of written words to train models sometime between 2026 and 2032.

Remember, you can’t train AI models on AI-generated prose. That leads to what’s known as model collapse. Output becomes gibberish.
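The collapse dynamic can be sketched with a toy experiment (a minimal illustration, not the actual LLM setting): fit a Gaussian "model" to a small batch of samples drawn from the previous generation's model, then sample the next batch from the refit. Sampling error compounds across generations and the fitted distribution degenerates. All numbers here are illustrative.

```python
import numpy as np

# Toy illustration of "model collapse": each generation's "model" is a
# Gaussian fitted to samples drawn from the previous generation's model.
# Fitting error compounds, and the fitted standard deviation drifts
# toward zero -- the synthetic "corpus" loses its diversity.
rng = np.random.default_rng(0)

mu, sigma = 0.0, 1.0                          # generation 0: "human" data
history = [sigma]
for generation in range(300):
    samples = rng.normal(mu, sigma, size=5)   # tiny synthetic corpus
    mu, sigma = samples.mean(), samples.std() # refit on model output
    history.append(sigma)

print(f"std after 0 generations:   {history[0]:.3f}")
print(f"std after 300 generations: {history[-1]:.3g}")
```

With a corpus this small the shrinkage is dramatic; larger corpora collapse more slowly, but the direction of drift is the same.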

Current models train on 30 trillion human words. To be Moore’s Law-like, does this scale 1000 times over a decade to 30 quadrillion tokens? Are there even that many words written? Writers, you better get crackin’.
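The arithmetic behind that question is simple compounding (using the article's figures; "Moore's Law-like" here means roughly doubling every year, since 2**10 = 1024 ≈ 1000):

```python
# Growing a 30-trillion-token training corpus 1000x over a decade
# implies roughly a 2x increase every year. Figures are the article's.
tokens_now = 30e12                  # 30 trillion tokens today
annual_growth = 1000 ** (1 / 10)    # growth factor for 1000x per decade
tokens_in_decade = tokens_now * annual_growth ** 10

print(f"annual growth needed: {annual_growth:.2f}x")
print(f"tokens in 10 years:   {tokens_in_decade:.3g}")
```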

Scaling paradox: Early indications suggest large language models may follow so-called power-law curves. Google researcher Dagang Wei thinks that “increasing model size, dataset size, or computation can lead to significant performance boosts, but with diminishing returns as you scale up.” Yes, large language models could hit a wall.
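A power-law loss curve makes the diminishing returns concrete. The sketch below uses an exponent of 0.076, which is the rough order reported in LLM scaling-law papers, not a measured value for any particular model; the constant is arbitrary.

```python
# Power-law scaling sketch: L(N) = a * N**(-alpha). Each 10x increase
# in scale multiplies the loss by the same factor, so the absolute
# improvement shrinks every step -- diminishing returns.
a, alpha = 10.0, 0.076  # illustrative constants, not measured values

def loss(n_params: float) -> float:
    return a * n_params ** (-alpha)

for n in (1e9, 1e10, 1e11, 1e12):
    print(f"{n:.0e} params -> loss {loss(n):.3f}")
```

Each 10x jump buys less than the one before it, which is exactly the "wall" the scaling paradox worries about.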

Spending paradox: Data centers currently have an almost insatiable demand for graphics processing units to power AI training. Nvidia generated $30 billion in revenue last quarter and expectations are for $177 billion in revenue in 2025 and $207 billion in 2026. But venture capitalist David Cahn of Sequoia Capital wonders if this is sustainable. He thinks the AI industry needs to see $600 billion in revenue to pay back all the AI infrastructure spending so far. Industry leader OpenAI expects $3.7 billion in revenue this year, $12 billion next year, and forecasts $100 billion, but not until 2029. It could take a decade of growth to justify today’s spending on GPU chips.

Goldman Sachs’s head of research wrote a report, “GenAI: Too much spend, too little benefit?” He was being nice with the question mark. Nobel laureate and Massachusetts Institute of Technology economist Daron Acemoglu thinks AI can perform only 5% of jobs and tells Bloomberg, “A lot of money is going to get wasted.” Add to that the cost of power—a ChatGPT query uses nearly 10 times the electricity of a Google search. Microsoft is firing up one of the nuclear reactors at Three Mile Island to accommodate rising power needs. Yikes.

I’m convinced AI will transform our lives for the better, but it isn’t a straight line up.

DotCom Bust Comparison

That may be one of the best-researched and most link-annotated short articles ever.

It reminds me of all the click-counting and ad revenue hype in the DotCom bust.

Ad revenue from clicks eventually came, but from Google in 2004, not Gemstar in 2000. Does anyone even remember Gemstar (GMST)?

During the technology boom of the late 1990s, investors fell in love with Gemstar-TV Guide International Inc. The Los Angeles-based company seemed to hold the key to the futuristic world of interactive television, just as Internet portals like Yahoo became the gateway to the Web.

Gemstar’s patented on-screen program-guide technology was expected to be vital to viewers navigating their way around an increasing array of TV channels and cable services.

The stock fell hard after the dot-com crash, from a high of $107.43 in March 2000 to a low of $2.36 in September 2002.

We also had Excite@Home, Lycos, Global Crossing, Enron, and countless names I don’t even remember, most of which went bankrupt.

BottomLineLaw discusses Silicon Valley After the Dot-Com Crash.

The Excite@Home headquarters sat empty for five years before Stanford finally bought it in 2007 and turned it into an extension of its outpatient medical clinic.

Irrational Exuberance

In 1996, then-Fed Chair Alan Greenspan was on to it, correctly warning of “irrational exuberance” in a televised speech.

“Clearly, sustained low inflation implies less uncertainty about the future, and lower risk premiums imply higher prices of stocks and other earning assets. We can see that in the inverse relationship exhibited by price/earnings ratios and the rate of inflation in the past. But how do we know when irrational exuberance has unduly escalated asset values, which then become subject to unexpected and prolonged contractions as they have in Japan over the past decade?”

Greenspan Becomes a True Believer

However, by 2000, Greenspan became a true believer.

Fed minutes show that right at the peak of the DotCom bubble, just before a big stock market collapse, Greenspan’s big worry was that the economy was overheating due to the productivity miracle.

Worried about a Y2K crash, the Fed pumped the economy like mad, accelerating the bubble.

The Fed’s Role in the DOTCOM Bubble

Acting in misguided fear of a Y2K calamity, the Fed stepped on the gas with unnecessary liquidity, having previously stepped on the gas to bail out Long Term Capital Management in 1998.

And after warning about irrational exuberance in 1996, Greenspan embraced the “productivity miracle” and “dotcom revolution” in 1999. By mid-summer of 2000 Greenspan believed his own nonsense, and right as the dotcom bubble started to burst, he started to worry about inflation risks.

The May 16, 2000, FOMC minutes prove this.

The members saw substantial risks of rising pressures on labor and other resources and of higher inflation, and they agreed that the tightening action would help bring the growth of aggregate demand into better alignment with the sustainable expansion of aggregate supply. They also noted that even with this additional firming the risks were still weighted mainly in the direction of rising inflation pressures and that more tightening might be needed.

Looking ahead, further rapid growth was expected in spending for business equipment and software. … Even after today’s tightening action the members believed the risks would remain tilted toward rising inflation.

How could Greenspan possibly have been more wrong? Over the next 18 months CPI dropped from 3.1% to 1.1%, the US went into a recession, and capex spending fell off a cliff.

Alan Greenspan Right on Time

On November 2, 2019, I commented “Good Reason to Expect Recession: Greenspan Doesn’t”

I think we all know what happened three months later.

On August 19, I commented “Zero Has No Meaning” Says Greenspan: I Disagree, So Does Gold

Former Federal Reserve Chairman Alan Greenspan says he wouldn’t be surprised if U.S. bond yields turn negative. And if they do, it’s not that big of a deal.

No Greenspan, Conditions are NOT Like 1998

Flashback September 11, 2007: No Greenspan, Conditions are NOT Like 1998

WSJ: Bubbles can’t be defused through incremental adjustments in interest rates, Mr. Greenspan suggested. The Fed doubled interest rates in 1994-95 and “stopped the nascent stock-market boom,” but when it stopped, stocks took off again. “We tried to do it again in 1997,” when the Fed raised rates a quarter of a percentage point, and “the same phenomenon occurred.” “The human race has never found a way to confront bubbles,” he said.

Mish: The truth of the matter is the Fed (and in particular Greenspan) has embraced every bubble in history, adding fuel to every one of them. Let’s consider the last two bubbles. ….

I count 26 links in this post. Undoubtedly a record.

Returning to the top …

“OpenAI just raised $6.6 billion, the largest venture-capital funding of all time, at a $157 billion valuation. Oh, and the company will lose $5 billion this year and is projecting $44 billion in losses through 2029.”

How much is hype and how much is real?
