DeepSeek boasts 545% daily profitability potential

Chinese AI startup DeepSeek on Saturday revealed cost and revenue data for its popular V3 and R1 models, claiming a theoretical daily cost-profit ratio of up to 545%.
However, the company cautioned that actual revenue figures would be significantly lower.
This disclosure marks the first time the Hangzhou-based firm has shared insights into its profit margins from “inference” tasks—the phase following AI model training where algorithms generate predictions or perform tasks, such as chatbot interactions.
DeepSeek’s revelation comes amid heightened market sensitivity, as AI stocks outside China saw sharp declines in January following the global rise of web and app chatbots powered by its V3 and R1 models. The announcement could further shake investor confidence in the sector.
The sell-off was partly driven by DeepSeek’s claim that it spent less than $6 million on the chips used to train its models, far less than US rivals such as OpenAI have spent.
The chips DeepSeek says it used, Nvidia’s H800s, are also much less powerful than those available to OpenAI and other US AI firms, deepening investor doubts about those firms’ pledges to spend billions of dollars on cutting-edge chips.
DeepSeek said in a GitHub post published on Saturday that assuming the cost of renting one H800 chip is $2 per hour, the total daily inference cost for its V3 and R1 models is $87,072. In contrast, the theoretical daily revenue generated by these models is $562,027, leading to a cost-profit ratio of 545%. In a year this would add up to just over $200 million in revenue.
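The arithmetic behind these headline numbers can be verified directly. The sketch below uses only the figures quoted in the article (the $2/hour H800 rental assumption, the $87,072 daily cost, and the $562,027 theoretical daily revenue); the variable names are illustrative, not DeepSeek's own.

```python
# Reproducing the reported figures from DeepSeek's GitHub post,
# as cited in the article. Variable names are illustrative.

H800_RENTAL_PER_HOUR = 2.00   # assumed rental cost per H800 chip, USD/hour
daily_cost = 87_072.0         # reported total daily inference cost, USD
daily_revenue = 562_027.0     # reported theoretical daily revenue, USD

# Cost-profit ratio: profit as a percentage of cost.
profit_ratio_pct = (daily_revenue - daily_cost) / daily_cost * 100
print(f"cost-profit ratio: {profit_ratio_pct:.0f}%")          # 545%

# Implied H800 GPU-hours per day at the assumed $2/hour rate.
gpu_hours_per_day = daily_cost / H800_RENTAL_PER_HOUR
print(f"implied H800-hours per day: {gpu_hours_per_day:,.0f}")  # 43,536

# Annualized theoretical revenue.
annual_revenue = daily_revenue * 365
print(f"theoretical annual revenue: ${annual_revenue:,.0f}")    # just over $200M
```

Note that the 545% figure is profit over cost, not revenue over cost; the theoretical revenue is roughly 6.5 times the stated cost.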
However, the firm added that its “actual revenue is substantially lower”: the V3 model is priced below the R1 model, only some services are monetised (web and app access remain free), and developers pay less during off-peak hours.