The launch of three new Chinese generative artificial intelligence models sent shockwaves through the tech sector. A basket of AI-related stocks in the US dropped 10% in the first two days of this week.
The development highlights the sheer amount of capital that’s been invested in AI by US technology giants so far, as well as the investment that will be needed to scale the technology going forward, according to Goldman Sachs Research. It remains to be seen exactly how some of the new models were trained, with emerging questions on sources of data. But there are also signs that the developments in China could lower the cost of running chatbot apps, which can be used for everything from coding software to writing sonnets, and make them more widely available.
“What’s clear to us is that lowering the cost of AI models will drive much higher adoption, as it would make the models much cheaper to use in future,” says Ronald Keung, the head of Goldman Sachs Research’s Asia internet team. “Some of these Chinese models have driven the industry to focus not just on raising the performance, but also on lowering the cost.”
We spoke with Keung about his take on the new generative AI models in China, the cost savings they may provide, and breakthroughs they may enable. As costs decline and AI models become smarter, Keung says, we may be a small step closer to reaching artificial general intelligence — an AI that displays excellence across all human fields of knowledge.
What is important about the recent AI developments in China?
Three Chinese AI models were launched last week, as well as two multi-modal text-to-image models this week. And while most of the attention has been on DeepSeek’s new model, the other models are at around the same level in terms of performance and cost per token (a token is a small unit of text).
The cost of inferencing (the stage that comes after training, when an AI model works with content that it has never seen before) has fallen by more than 95% in China over the past year. We expect this much lower inferencing cost to drive a proliferation of generative AI applications.
Some of the models launched over the past week are focused on deep thinking modes or reasoning. That means that the chatbot goes through each of its steps when you ask a question, telling you what it's thinking before it arrives at an answer. That takes around 5-20 seconds for every question.
The process makes sense when you look at how human beings interact — if you ask me a question, and then I give you an immediate answer in milliseconds, then the chance is that I might not have thought it through. These models think before they speak.
The performance of these models seems to have improved a lot as a result, mostly because they assess their own answers before giving a final output.
Are these developments likely to change the way that capital is invested in AI?
Chinese players have been focused on driving down costs, and perhaps also on using as few chips as possible for the same tasks. Over the last week, there's also been more focus on whether edge computing is becoming more popular, which could allow smaller AI models to run on your phone or computer without connecting to mega data centers. I think these are all questions investors have about how the landscape will evolve.
What is clear to us is that lowering the cost of AI models will drive much higher adoption, as it would make the models much cheaper to use in future.
Both our research teams in China and our US teams expect this year to be the year of AI agents and applications. The good news is that some of these Chinese models have pushed the industry to focus not just on raising the performance, but also on lowering the cost. That should drive higher and higher adoption of artificial intelligence.
How much cheaper are the AI models in China relative to the incumbent AI providers in the US?
When it comes to how much the companies charge per use of the model, which is measured on a per-token basis, the charges are significantly lower. As of last weekend, a Chinese AI model’s pricing was 14 cents per million input tokens. That’s only a single-digit percentage of the amount that an equivalent reasoning model from a large US technology company charges.
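The per-token economics behind that comparison can be sketched with a short calculation. In the snippet below, the 14-cents input price comes from the figure above; the output price and the incumbent model's rates are purely hypothetical placeholders, not actual vendor pricing:

```python
# Illustrative per-token API cost comparison.
# Generative AI APIs typically bill in dollars per million tokens,
# with separate rates for input (prompt) and output (completion) tokens.

def request_cost(input_tokens: int, output_tokens: int,
                 price_in_per_m: float, price_out_per_m: float) -> float:
    """Dollar cost of one API call, given per-million-token prices."""
    return (input_tokens / 1_000_000 * price_in_per_m
            + output_tokens / 1_000_000 * price_out_per_m)

# Assumed prices for illustration only (not real vendor rates):
cheap_in, cheap_out = 0.14, 0.28          # $0.14 input / $0.28 output per M tokens
incumbent_in, incumbent_out = 15.0, 60.0  # hypothetical incumbent rates

# A chatbot call with 2,000 input tokens and 500 output tokens:
cheap = request_cost(2_000, 500, cheap_in, cheap_out)
incumbent = request_cost(2_000, 500, incumbent_in, incumbent_out)
print(f"low-cost model:  ${cheap:.5f}")   # $0.00042
print(f"incumbent model: ${incumbent:.5f}")  # $0.06000
print(f"cost ratio: {cheap / incumbent:.1%}")  # 0.7%
```

Under these assumed rates, the cheaper model costs under 1% of the incumbent per call, which is the kind of single-digit-percentage gap the interview describes.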
It’s clear that prices are starting to come down as a result. Already, we’ve seen some US big tech companies adjust their pricing, including making some of their paid models free. So I think there will be a continuing race on efficiencies.
Your colleagues from the US tech team have said that the market reaction is being driven by questions around the amount of capital expenditure needed to scale AI, the return on investment on the money already spent, and the pace of investment going forward. What is your take on those questions as it relates to Chinese firms?
I think that applies a lot more to US companies. The major Chinese internet giants increased capital expenditure by 61% last year — but that was from a low base.
The listed Chinese companies have not been spending as much on capex over the last two years, in aggregate. Instead, they've been very focused on shareholder returns. Spending has only just started to pick up for these companies in 2024. In absolute terms, Chinese internet companies have been spending just a fraction of what their global counterparts have been spending, so there are fewer questions about the return on investment they can expect from heavy spending on AI.
Investor focus on AI has been concentrated on the US. Do you think this will mean more investor interest in Chinese companies in the future?
It's a bit early to tell. Overall, the China internet sector still trades at a multi-year valuation gap versus US peers.
Geopolitics will continue to create uncertainties. Given some of the US chip bans and the scrutiny on Chinese companies, I think investors are still relatively risk averse on China.
Our team's top ideas this year for investing in the China internet theme have been companies with domestic exposure and solid earnings profiles, which are in a position to benefit from the domestic consumption policy stimulus. I think, at this point, the Chinese equity market has not priced in the option of these companies going global, even though many of their apps have continued to rank among the top app downloads in recent months.
Will this breakthrough enable smaller companies to deploy AI models in future? Or do you think the technology will still be concentrated in the hands of a few big companies?
There are questions about whether the AI models themselves will be dominated by a few players, or whether they will become increasingly commoditized. But that's just on the model layer. There's also an infrastructure and computing layer, where I think the higher adoption brought by these new models will drive higher cloud demand for the hyperscalers (the largest cloud computing companies). That should benefit the leading players.
The application side of things will likely be full of surprises. Different companies are experimenting with how to deploy this technology. That could mean helping their advertising business through better ad placement. Or companies could create a super AI assistant app that could help you do whatever you can think of, from booking a flight ticket to working out when and where to meet your friends. We think a social app with transaction capabilities would stand a higher chance of success in integrating AI assistant functions.
But I think the jury is still out on whether the AI landscape will be more fragmented, or whether it will resemble the internet era, when a few very big applications dominated. Will AI likewise produce some completely new applications, or will the existing players be the ones to benefit?