Another Chinese quant fund joins DeepSeek in AI race with model rivalling GPT-5.1, Claude

Another Chinese quantitative trading firm has entered the race to develop large language models (LLMs), unveiling systems it claims can match – and in some cases surpass – the performance of US rivals such as OpenAI’s GPT-5.1, following the global rise of DeepSeek.
Beijing-based Ubiquant said it released a series of open-source code-focused LLMs last week that outperformed leading closed-source models on multiple benchmarks despite using far fewer parameters. The IQuest-Coder-V1 family is designed for code intelligence, excelling at tasks such as automated programming, debugging and code explanation.
The series features models with 7 billion, 14 billion and 40 billion parameters, far smaller than leading closed-source systems such as OpenAI’s GPT-5.1 and Anthropic’s Claude Sonnet 4.5, a model touted by Anthropic as “the best coding model in the world”.
Despite their smaller size, Ubiquant’s models have demonstrated elite-level performance across major programming benchmarks.
On SWE-bench Verified, which measures an AI model’s ability to solve real-world software engineering problems, IQuest-Coder-V1-40B-Loop-Instruct scored 76.2 per cent, close to Claude Sonnet 4.5’s 77.2 per cent and GPT-5.1’s 76.3 per cent.

On BigCodeBench, which evaluates LLMs on practical and challenging programming tasks while guarding against training-data contamination, the model achieved 49.9 per cent, ahead of Gemini 3 Pro Preview’s 47.1 per cent and GPT-5.1’s 46.8 per cent.