Alibaba Introduces Qwen3, Setting New Benchmark in Open-Source AI with Hybrid Reasoning
29 April 2025, 02:30, by EQS News




HANGZHOU, CHINA - Media OutReach Newswire - 29 April 2025 - Alibaba has launched Qwen3, the latest generation of its open-source large language model (LLM) family, setting a new benchmark for AI innovation.

The Qwen3 series features six dense models and two Mixture-of-Experts (MoE) models, offering developers flexibility to build next-generation applications across mobile devices, smart glasses, autonomous vehicles, robotics and beyond.

All Qwen3 models – including dense models (0.6B, 1.7B, 4B, 8B, 14B, and 32B parameters) and MoE models (30B with 3B active, and 235B with 22B active) – are now open sourced and available globally.
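A Mixture-of-Experts model activates only a small fraction of its parameters per token (22B of 235B for the largest Qwen3 variant), which is what keeps inference cost low. A minimal sketch of top-k expert routing follows; the dimensions, gating scheme, and names here are chosen for illustration and are not Qwen3's actual architecture:

```python
import numpy as np

def moe_forward(x, gate_w, experts, k=2):
    """Route input x to the top-k experts by gate score and mix their outputs.

    x:       (d,) input vector
    gate_w:  (n_experts, d) gating weights
    experts: list of n_experts weight matrices, each (d, d)
    """
    scores = gate_w @ x                        # one gate score per expert
    top = np.argsort(scores)[-k:]              # indices of the k best-scoring experts
    weights = np.exp(scores[top])
    weights /= weights.sum()                   # softmax over the selected experts only
    # Only the chosen experts run; the rest stay inactive for this token,
    # so compute scales with k, not with the total expert count.
    return sum(w * (experts[i] @ x) for w, i in zip(weights, top))

rng = np.random.default_rng(0)
d, n_experts = 8, 4
x = rng.normal(size=d)
gate_w = rng.normal(size=(n_experts, d))
experts = [rng.normal(size=(d, d)) for _ in range(n_experts)]
y = moe_forward(x, gate_w, experts, k=2)
```

With k=2 of 4 experts active, roughly half the expert parameters participate per token; Qwen3-235B-A22B takes the same idea to a far sparser ratio (about 9% active).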

Hybrid Reasoning Combining Thinking and Non-thinking Modes

Qwen3 marks Alibaba's debut of hybrid reasoning models, combining traditional LLM capabilities with advanced, dynamic reasoning. Qwen3 models can seamlessly switch between a thinking mode, for complex, multi-step tasks such as mathematics, coding, and logical deduction, and a non-thinking mode, for fast, general-purpose responses.
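In thinking mode, reasoning models of this kind typically wrap the reasoning trace in `<think>` tags ahead of the final answer. A minimal parser that separates the two, assuming that tag convention (the sample completion below is invented):

```python
import re

def split_thinking(text: str) -> tuple[str, str]:
    """Split a thinking-mode completion into (reasoning, answer).

    Non-thinking responses carry no <think> block, so the reasoning
    part comes back empty and the whole text is treated as the answer.
    """
    match = re.search(r"<think>(.*?)</think>", text, flags=re.DOTALL)
    if match is None:
        return "", text.strip()
    reasoning = match.group(1).strip()
    answer = text[match.end():].strip()
    return reasoning, answer

reasoning, answer = split_thinking(
    "<think>17 is prime: no divisor from 2 to 4 divides it.</think>17 is prime."
)
```

This lets an application log or display the reasoning trace separately while showing users only the final answer.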

For developers accessing Qwen3 through API, the model offers granular control over thinking duration (up to 38K tokens), enabling an optimized balance between intelligent performance and compute efficiency. Notably, the Qwen3-235B-A22B MoE model significantly lowers deployment costs compared to other state-of-the-art models, reinforcing Alibaba's commitment to accessible, high-performance AI.
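Client-side, that thinking-duration control might be exercised as below. Only the 38K-token ceiling comes from the announcement; the field names (`enable_thinking`, `thinking_budget`) and model string are illustrative, not Qwen3's actual API schema:

```python
MAX_THINKING_TOKENS = 38_000  # ceiling stated for Qwen3's API thinking budget

def build_request(prompt: str, thinking: bool = True, budget: int = 8_192) -> dict:
    """Assemble a chat-completion payload with an optional thinking budget.

    Field names are hypothetical stand-ins for whatever the real API exposes.
    """
    payload = {
        "model": "qwen3-235b-a22b",
        "messages": [{"role": "user", "content": prompt}],
        "enable_thinking": thinking,
    }
    if thinking:
        # Clamp the requested reasoning budget to the documented maximum.
        payload["thinking_budget"] = max(0, min(budget, MAX_THINKING_TOKENS))
    return payload

req = build_request("Prove that 2^10 = 1024.", thinking=True, budget=100_000)
```

Capping the budget per request is how an application trades answer quality against latency and token cost on a call-by-call basis.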

Breakthroughs in Multilingual Skills, Agent Capabilities, Reasoning and Human Alignment

Trained on a massive dataset of 36 trillion tokens – double that of its predecessor Qwen2.5 – Qwen3 delivers significant advancements in reasoning, instruction following, tool use and multilingual tasks.

Key capabilities include:

  • Multilingual Mastery: Supports 119 languages and dialects, with leading performance in translation and multilingual instruction-following.
  • Advanced Agent Integration: Natively supports the Model Context Protocol (MCP) and robust function-calling, leading open-source models in complex agent-based tasks.
  • Superior Reasoning: Surpasses previous Qwen models (QwQ in thinking mode and Qwen2.5 in non-thinking mode) in mathematics, coding, and logical reasoning benchmarks.
  • Enhanced Human Alignment: Delivers more natural creative writing, role-playing, and multi-turn dialogue for more engaging conversations.
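The function-calling flow mentioned above boils down to three pieces: a tool schema the model sees, local functions that implement the tools, and a dispatcher that routes the model's emitted call to the right function. A minimal sketch using the common OpenAI-style schema; the weather tool and its arguments are invented for illustration:

```python
import json

# An OpenAI-style tool definition, as commonly used with function-calling
# models; the tool itself is hypothetical.
WEATHER_TOOL = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up the current temperature for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

def get_weather(city: str) -> str:
    # Stub implementation; a real tool would call a weather API.
    return f"22 degrees C in {city}"

REGISTRY = {"get_weather": get_weather}

def dispatch(tool_call: dict) -> str:
    """Route a model-emitted tool call to the matching local function."""
    fn = REGISTRY[tool_call["name"]]
    args = json.loads(tool_call["arguments"])  # arguments arrive as a JSON string
    return fn(**args)

# Simulate the structured call a function-calling model would emit.
result = dispatch({"name": "get_weather", "arguments": '{"city": "Hangzhou"}'})
```

Protocols like MCP standardize how such tool registries are exposed to the model, so agents can discover and invoke tools without bespoke glue code per integration.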

[Image: Qwen3 models achieve top-tier results across industry benchmarks]

Thanks to advancements in model architecture, increase in training data, and more effective training methods, Qwen3 models achieve top-tier results across industry benchmarks such as AIME25 (mathematical reasoning), LiveCodeBench (coding proficiency), BFCL (tool and function-calling capabilities), and Arena-Hard (benchmark for instruction-tuned LLMs). Additionally, to develop the hybrid reasoning model, a four-stage training process was implemented, which includes long chain-of-thought (CoT) cold start, reasoning-based reinforcement learning (RL), thinking mode fusion, and general RL.

Open Access to Drive Innovation

Qwen3 models are now freely available for download on Hugging Face, GitHub and ModelScope, and can be explored on chat.qwen.ai. API access will soon be available through Alibaba's AI model development platform Model Studio. Qwen3 also powers Alibaba's flagship AI super assistant application, Quark.

Since its debut, the Qwen model family has attracted over 300 million downloads worldwide. Developers have created more than 100,000 Qwen-based derivative models on Hugging Face, making Qwen one of the world's most widely adopted open-source AI model series.
Hashtag: #AlibabaCloud

The issuer is solely responsible for the content of this announcement.

About Alibaba Group

Alibaba Group's mission is to make it easy to do business anywhere. The company aims to build the future infrastructure of commerce. It envisions that its customers will meet, work and live at Alibaba, and that it will be a good company that lasts for 102 years. www.alibabagroup.com

News Source: Alibaba Cloud

29/04/2025 Dissemination of a Financial Press Release, transmitted by EQS News.

Media archive at www.todayir.com
