Lexicon: How China talks about ‘agentic AI’
Chinese developers are hard at work, but specific regulation is nascent
Three months after the Chinese AI company DeepSeek shocked global markets with a highly capable reasoning model, another China-linked company made a splash with a capable agentic AI system. Did Manus, released in March 2025, portend Chinese leadership in AI systems that go beyond chatbots to take action on the user's behalf? Victor Mustar, head of product at Hugging Face, described Manus' capabilities as "mind-blowing, redefining what's possible." A journalist's comparison with ChatGPT's Deep Research found that Manus provided better results, despite speed and stability issues.
Manus had been released by a Singapore-based firm but developed by a startup in Wuhan with backing from the Chinese tech giant Tencent. It wasn't China's only foray into the emerging field. The same month, the Beijing-based firm Zhipu AI launched AutoGLM-Rumination, an open-source agentic system the company said achieved "state-of-the-art" scores on benchmarks such as AgentBench. (Zhipu also announced an "international alliance" for autonomous AI models, to include 10 countries associated with the Belt and Road Initiative and ASEAN.) Earlier, in January, Alibaba had released the Qwen-Agent framework for building agentic systems with its Qwen models. ByteDance followed with its Coze Studio platform in July. Last month, Tencent open-sourced its Youtu-Agent agentic framework, which was reportedly built atop a DeepSeek model.
With so much action this year in Chinese “agentic” AI efforts, it’s worth pausing to ask what Chinese developers mean when they talk about agentic AI. Moreover, what does the proliferation of such systems in China mean for AI safety and governance in the country?
Lexicon: What we talk about when we talk about ‘agentic’
While “agentic AI” is a widely recognized phrase in English-language discussions of AI systems with the ability to perform complex actions, there is some inconsistency in how the technology is discussed in Chinese. The most common terms can be split into two groups:
First, there are those that render “agent” or “agentic” using the word 代理 dàilǐ, which can mean “agent” in the sense that a dàilǐ can serve as a representative or proxy. It connotes action on someone’s behalf. Dàilǐ is also a verb meaning to act as an agent or representative of someone else. These forms include 代理型人工智能 dàilǐxíng réngōngzhìnéng or 代理式AI dàilǐshì AI (Agent-type AI) and AI代理 AI dàilǐ (AI agents).
Second, there are those that emphasize the autonomy or independence of the AI system as an actor in the world. These include 自主代理 zìzhǔ dàilǐ (autonomous agent), 自主智能体 zìzhǔ zhìnéngtǐ (autonomous intelligent entity), and 智能体 zhìnéngtǐ (intelligent entity, or an intelligence).
Chinese government publications such as a new standard on AI agents released on May 27 by the China Academy of Information and Communications Technology (CAICT) use the term 智能代理 (intelligent agents). An earlier 2024 report on LLM benchmarking by the CAICT uses both 智能体 (intelligent entity) and 智能代理 (intelligent agent) and sometimes adds a translation clarifying how usage compares with English, e.g. "智能体(AGENT)." In academic journal articles, common terms include 大模型智能体 dàmóxíng zhìnéngtǐ (large model intelligent agents), 智能体 (intelligent entity), and 智能代理 (intelligent agent). Overall, the concurrent use of several different terms across Chinese-language discourse indicates that the definition of an agentic AI system is still taking shape as the technology matures.
Soft law and policy discourse
The regulatory context for these various systems is evolving. Some aspects of what might be considered agentic AI are already covered by Chinese law: if such systems constitute recommendation algorithms or generative AI services, or otherwise do things the government regulates regardless of the underlying technology, existing rules apply. Direct references to agentic systems, meanwhile, have so far come in the form of standards or specifications, sometimes considered "soft law." Though such documents are generally nonbinding unless otherwise specified, when published by government think tanks they can serve as valuable guidance for businesses navigating an evolving regulatory space.
On March 27, 2025, CAICT, in collaboration with more than 20 Chinese AI companies including Tencent, Alibaba, and Huawei, reportedly released a standard entitled "Technical and Application Requirements for Intelligent Agents in Software Engineering – Part 1: Development of Intelligent Agents" (《软件工程智能代理技术与应用要求 - 第1部分:智能代理的开发》). The standard reportedly focuses on capacity-building and application requirements for AI agent development, particularly regarding the technical and service capabilities of models. The release of this standard was heralded as a "new phase" in the R&D and application of AI agents, amid a ramp-up of AI agent commercialization in China and globally. The full text does not appear to be publicly available.
The same week, the International Telecommunication Union Telecommunication Standardization Sector (ITU-T) approved an international recommendation on AI agents, reported by Chinese state media as spearheaded by CAICT. The ITU recommendation, titled “ITU-T F.748.46: Requirements and Evaluation Methods of Artificial Intelligence Agents Based on Large Scale Pre-Trained Model,” focuses on standards for evaluations that assess the capabilities of AI agents in tasks like perception, planning, execution, and collaborative interaction to enhance performance, both for single-agent and multi-agent systems.
CAICT is also a center of China’s published policy thinking on agentic AI. In 2023, its annual “Blue Book Report on Large Model Governance” offered foundational guidance on assessing large-scale AI models and mentioned AI agents as part of a larger category of decision-making AI that should be regulated.
In June 2024, CAICT published the Large Model Benchmarking System Research Report (2024), which discussed AI agents as part of a broader effort to introduce national benchmarking standards for the Chinese AI industry. In particular, the report raised the need for benchmarks to incorporate measures suited to assessing AI agents through the construction of evaluation environments. Building on this, the Fang Sheng (方升) benchmarking system introduced in the report placed a strong focus on "application-oriented testing" (大模型的应用测试, AOT) of large models, probing the challenges of deploying them in business settings. This includes consideration of how integrating large models with external tools to perform more complex tasks will require the use of AI agents.
In December 2024, CAICT published the Blue Book on Artificial Intelligence Governance (2024), which, while not mentioning AI agents explicitly, does include broader reflections on the ethical implications of human-AI relationships, including who will be held responsible for decisions taken by AI systems and the impact that emotional dependency on AI may have on human autonomy and relationships. The report also includes governance recommendations for full-lifecycle (全生命周期) oversight of AI systems across R&D, regulation, and third-party evaluation bodies, which can also apply to agentic AI systems.
In 2025, the debate on agentic AI in China aligns with a broader shift in focus toward AI competition and large-scale AI deployment across industries. This policy direction is perhaps best captured in Xi Jinping's address on April 26, 2025, during the second Politburo study session on AI, which reaffirmed China's leadership aspirations in AI and emphasized self-reliance and application-oriented development. However, it also highlighted the role of the government in ensuring AI is "safe/secure, reliable, and controllable," by "speed[ing] up the formulation and improvement of the relevant laws and regulations, policy system, application specifications, and ethical criteria, construct[ing] systems for technology monitoring, early warning for risks, and emergency response," among other measures.
However one defines it, Chinese technological and policy developments in agentic or autonomous AI are advancing rapidly. Concrete regulation of this class of technologies remains embedded in preexisting legal and regulatory systems around data governance, cybersecurity, recommendation algorithms, and generative AI. The policy thinking embodied in CAICT's standards and reports, meanwhile, points to the potential for governance to evolve with the technology's application.
About DigiChina
Housed within the Program on Geopolitics, Technology and Governance at Stanford University, DigiChina is a cross-organization, collaborative project to provide translations, factual context, and analysis on Chinese technology policy. More at digichina.stanford.edu.