Moore Threads follows a fabless model: it keeps GPU architecture, chip design, board design, and system validation in-house, while outsourcing wafer manufacturing, packaging, testing, and board assembly to specialist partners. The company generates revenue from GPU cards, AI all-in-one systems, and intelligent computing clusters, selling through both direct sales and distributors. In 2025, direct sales accounted for 72.86% of revenue and distributors for 27.14%, and 99.87% of revenue came from China.
- Cloud AI infrastructure is the core business.
Moore Threads has moved beyond selling standalone chips and now focuses on cloud-side AI compute products such as training and inference cards, integrated servers, and large clusters. In 2025, cloud products generated RMB 1.461 billion in revenue, about 97.0% of the total. That ties the company's commercial model heavily to AI infrastructure build-outs, large model training, inference workloads, and related data center demand.
- The second pillar is enterprise graphics and metacomputing.
Moore Threads uses the same GPU base to serve cloud desktops, real-time rendering, digital twins, and professional visualization. The MTT S3000 is positioned around graphics rendering, video processing, and deep learning, and the annual report highlights cloud desktop and cloud rendering as established deployment areas. This broadens the addressable market beyond pure AI training and gives the company a more diversified usage profile than an AI-only accelerator vendor.
- Edge and terminal products are strategic, but still small.
The company is also building products for edge AI and end devices, including AI notebooks and intelligent modules. Management describes this layer as a heterogeneous stack that combines GPU, CPU, NPU, and VPU resources for local AI workloads. Financially, the segment is still early-stage: edge and terminal products contributed roughly 1.7% of 2025 revenue, so the investment case still depends far more on cloud infrastructure than on client devices.
- Software is a key part of the moat.
Moore Threads has built its own MUSA architecture and software stack, with support that spans AI acceleration, graphics rendering, video processing, and scientific computing. It has also extended that stack from chips to boards to clusters. In practice, this means Moore Threads is selling a domestic compute platform, not only a GPU chip, which matters in China’s market where software compatibility and cluster orchestration are as important as raw silicon performance.
Moore Threads’ market position comes from breadth. In its 2025 annual report, the company says it is one of the few domestic vendors already shipping full-function GPUs at commercial scale. By the end of 2025, it had released five GPU architectures spanning cloud, edge, and terminal use cases. The same report says its MTT S5000 sits in the first tier for domestic AI infrastructure and that Moore Threads has already commercialized thousand-card and ten-thousand-card clusters, with the Huagang architecture designed for clusters above 100,000 cards. That places Moore Threads in a stronger position than many local peers on product range, software depth, and cluster delivery.
The latest numbers support that positioning. In Q1 2026, revenue rose 155.35% year over year to RMB 737.6 million, and net profit turned positive at RMB 29.4 million. For now, that looks like early evidence that Moore Threads' cloud-first business model is translating into real commercial scale.