
Four Chinese Labs Release Open-Weight Coding Models in a 12-Day Window



In a 12-day window in the second half of April 2026, four Chinese AI labs released open-weight coding models: Z.ai's GLM-5.1, MiniMax M2.7, Moonshot AI's Kimi K2.6, and DeepSeek V4. The competitive density of that release schedule is itself the story: it is what a fast-moving, well-capitalised AI ecosystem looks like when it is racing to catch up.

The four releases

GLM-5.1 from Z.ai (the Tsinghua-spinout Zhipu AI's consumer brand) targets coding and tool-use, with reported strong performance on SWE-bench-style multi-step engineering tasks. MiniMax M2.7 emphasises long-context reasoning and is sized for cost-efficient inference on commodity hardware. Moonshot's Kimi K2.6 doubles down on long-context (Moonshot's signature) and adds improved tool-use reasoning. DeepSeek V4 is the latest in the lab's relentless cadence of efficient, well-optimised open-weight releases that have repeatedly punched above their weight on Western benchmarks.

Why open-weight

Three reasons converge. First, distribution: open weights get adopted by the global developer community in a way closed-API products don't. Second, regulation: open-weight releases sidestep some of the export-control and use-restriction frictions that closed Chinese AI services face in international markets. Third, competitive positioning against US closed-source labs: every meaningful open-weight release from a Chinese lab subtracts incremental pricing power from OpenAI, Anthropic and Google in the global enterprise market.

The capability gap

Closing, but not closed. The best Chinese open-weight models in 2026 are competitive with mid-2025 frontier closed models on specific benchmarks. They lag Claude 4.5 and GPT-5.1 on the hardest coding and reasoning tasks, and they beat almost every open-weight alternative from Western labs. That pattern, open-weight Chinese models tracking the closed Western frontier with roughly a six-month lag, is now the structural picture of the global model landscape.

What this means commercially

Enterprise buyers in 2026 increasingly have a real open-weight option. Self-hosting a Kimi K2.6 or DeepSeek V4 on owned or rented infrastructure is operationally viable for medium and large enterprises with even modest ML capability. For European banks, public-sector buyers and any organisation with sovereignty concerns about US-controlled APIs, that option is not academic; it is a procurement reality.
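In practice, self-hosting usually means standing up an OpenAI-compatible inference server (vLLM is one common choice) and pointing internal tooling at it. A minimal sketch of the client side, with the caveat that the endpoint URL and the model identifier below are illustrative assumptions, not official names:

```python
import json

# Hypothetical self-hosted, OpenAI-compatible endpoint, e.g. one exposed by
# an inference server such as vLLM. URL and model name are assumptions.
ENDPOINT = "http://localhost:8000/v1/chat/completions"

def build_request(prompt: str, model: str = "deepseek-v4") -> dict:
    """Assemble a chat-completions payload for a self-hosted model server."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,  # low temperature suits deterministic coding tasks
        "max_tokens": 1024,
    }

payload = build_request("Refactor this function to remove the global state.")
body = json.dumps(payload)  # POST this body to ENDPOINT with any HTTP client
```

Because the wire format matches the closed-API vendors', switching an internal tool from a US-hosted API to an in-house deployment can be as small as changing the base URL and model name.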

The geopolitical wrinkle

China's NDRC blocked Meta's $2 billion acquisition of Manus on 27 April 2026 over what officials described as foreign attempts to "hollow out" the country's AI base. The same week saw four Chinese labs aggressively releasing open-weight models internationally. The contradiction is intentional: Beijing wants to retain strategic AI assets at home and disseminate Chinese-built AI tooling abroad. The four open-weight releases are the second half of that strategy at work.

What is the gap to Western frontier?
Roughly six months on the hardest coding and reasoning tasks; closer, or even on par, on many specific benchmarks.

Why open-weight?
Distribution, sidestepping export-control frictions, and pricing pressure on closed Western labs.

Who is leading among the four?
DeepSeek V4 has been most efficient on capability per dollar; GLM-5.1 is strongest on tool use; Kimi K2.6 leads on long context.


A look at recent reporting on tech & science from the Étude newsroom.
