Shares of Nvidia closed up 2.3% at an all-time high of $504.20 on Monday. The record comes ahead of the company’s fiscal third-quarter results on Tuesday, when analysts are expecting to see revenue growth of over 170%.
If that’s not astounding enough, the company’s forecast for the fiscal fourth quarter, according to LSEG estimates, is likely to show an even bigger number: almost 200% growth.
Heading into the Thanksgiving holiday, Wall Street will be closely scrutinizing the company that’s been at the heart of this year’s artificial intelligence boom.
Nvidia’s stock price has ballooned 245% in 2023, far outpacing any other member of the S&P 500. Its market cap now sits at $1.2 trillion, well above Meta or Tesla. Any indication on the earnings call that generative AI enthusiasm is cooling, or that some big customers are moving over to AMD’s processors, or that China restrictions are having a detrimental effect on the business could spell trouble for a stock that’s been on such a tear.
“Expectations are high leading into NVDA’s FQ3’24 earnings call on Nov-21,” Bank of America analysts wrote in a report last week. They have a buy rating on the stock and said they “expect a beat/raise.”
However, they flagged China restrictions and competitive concerns as two issues that will capture investor attention. In particular, the emergence of AMD in the generative AI market presents a new dynamic for Nvidia, which has mostly had the AI graphics processing unit (GPU) market to itself.
AMD CEO Lisa Su said late last month that the company expects GPU revenue of about $400 million during the fourth quarter, and more than $2 billion in 2024. The company said in June that the MI300X, its most advanced GPU for AI, would start shipping to some customers this year.
Nvidia is still by far the market leader in GPUs for AI, but high prices are an issue.
“NVDA needs to forcefully counter the narrative its products are too expensive for generative AI inference,” the Bank of America analysts wrote.
Last week, Nvidia unveiled the H200, a GPU designed for training and deploying the kinds of AI models that are powering the generative AI explosion, allowing companies to develop smarter chatbots and convert simple text into creative graphical designs.
The new GPU is an upgrade from the H100, the chip OpenAI used to train its most advanced large language model, GPT-4 Turbo. H100 chips cost between $25,000 and $40,000 each, according to an estimate from Raymond James, and thousands of them must work together to create the biggest models in a process called “training.”
The H100 chips are part of Nvidia’s data center group, which saw revenue in the fiscal second quarter surge 171% to $10.32 billion. That accounted for about three-quarters of Nvidia’s total revenue.
For the fiscal third quarter, analysts expect data center revenue to almost quadruple to $13.02 billion from $3.83 billion a year earlier, according to FactSet. Total revenue is projected to rise 172% to $16.2 billion, according to analysts surveyed by LSEG, formerly Refinitiv.
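The year-over-year arithmetic behind those estimates can be checked with a short sketch (the inputs are the figures cited above; the function name is ours, not from any of the firms quoted):

```python
def yoy_growth(current_billion: float, prior_billion: float) -> float:
    """Return year-over-year growth as a percentage."""
    return (current_billion / prior_billion - 1) * 100

# Data center: $13.02B expected vs. $3.83B a year earlier (FactSet).
# Works out to roughly 240% growth, i.e. revenue almost quadrupling.
dc_growth = yoy_growth(13.02, 3.83)

# Total revenue: $16.2B expected at 172% growth implies a prior-year
# base of about 16.2 / 2.72, or roughly $5.96 billion.
implied_base = 16.2 / (1 + 1.72)

print(round(dc_growth), round(implied_base, 2))
```

This is only a consistency check on the numbers as reported, not a forecast of its own.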
Based on current LSEG estimates, growth will peak in the fiscal fourth quarter at about 195%. Expansion is expected to remain robust throughout 2024 but to decelerate each quarter of the year.
Executives can expect to field questions on the earnings call related to the massive shake-up at OpenAI, the creator of the chatbot ChatGPT, which was a major catalyst of Nvidia’s growth this year. On Friday, OpenAI’s board announced the sudden firing of CEO Sam Altman over disputes about the company’s speed of product development and where it’s focusing its efforts.
OpenAI is a big buyer of Nvidia’s GPUs, as is Microsoft, OpenAI’s top backer. Following a chaotic weekend, OpenAI on Sunday night said former Twitch CEO Emmett Shear would be leading the company on an interim basis, and soon after that Microsoft CEO Satya Nadella said Altman and ousted OpenAI Chairman Greg Brockman would be joining to lead a new advanced AI research team.
Nvidia investors have so far brushed off China-related concerns despite their potential significance to the company’s business. The H100 and A100 AI chips were the first to be hit by new U.S. restrictions last year that aimed to curb sales to China. Nvidia said in September 2022 that the U.S. government would still allow it to develop the H100 in China, a market that accounts for 20% to 25% of its data center business.
The company has reportedly found a way to keep selling into the world’s second-biggest economy while remaining compliant with U.S. rules. Nvidia is set to deliver three new chips, based on the H100, to Chinese manufacturers, Chinese financial media outlet Cailian Press reported last week, citing sources.
Nvidia has historically avoided providing annual guidance, preferring to look ahead only to the next quarter. But given how much money investors have poured into the company this year and how little else there is for them to follow this week, they’ll be listening closely to CEO Jensen Huang’s tone on the conference call for any sign that the buzz in generative AI may be wearing off.