DeepSeek-V3, ultra-large open-source AI, outperforms Llama and Qwen on launch


Chinese AI startup DeepSeek, known for challenging leading AI vendors with its innovative open-source technologies, today released a new ultra-large model: DeepSeek-V3.

Available via Hugging Face under the company’s license agreement, the new model comes with 671B parameters but uses a mixture-of-experts architecture that activates only a subset of them per token, handling any given task accurately and efficiently. According to benchmarks shared by DeepSeek, the offering is already topping the charts, outperforming leading open-source models, including Meta’s Llama 3.1-405B, and closely matching the performance of closed models from Anthropic and OpenAI.

The release marks another major step in closing the gap between closed and open-source AI. Ultimately, DeepSeek, which started as an offshoot of Chinese quantitative hedge fund High-Flyer Capital Management, hopes these developments will pave the way for artificial general intelligence (AGI), where models will have the ability to understand or learn any intellectual task that a human being can.

What does DeepSeek-V3 bring to the table?

The new ultra-large model retains the basic architecture of its predecessor, DeepSeek-V2, built around multi-head latent attention (MLA) and DeepSeekMoE. This approach keeps training and inference efficient — with specialized and shared “experts” (individual, smaller neural networks within the larger model) activating just 37B of the 671B parameters for each token.
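To make the idea concrete, here is a minimal, toy-scale sketch of top-k expert routing. It illustrates the mechanism only; the sizes, weights and routing scheme are illustrative assumptions, not DeepSeek's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

N_EXPERTS = 8    # toy count; DeepSeek-V3 routes over far more experts
TOP_K = 2        # experts activated per token
D_MODEL = 16     # toy hidden size

# One tiny feed-forward "expert" per slot; only TOP_K of them run per token.
experts = [rng.standard_normal((D_MODEL, D_MODEL)) * 0.02 for _ in range(N_EXPERTS)]
router = rng.standard_normal((D_MODEL, N_EXPERTS)) * 0.02

def moe_forward(x):
    """Route one token vector through its top-k experts only."""
    scores = x @ router                 # affinity of the token with each expert
    top = np.argsort(scores)[-TOP_K:]   # indices of the k highest-scoring experts
    gates = np.exp(scores[top])
    gates /= gates.sum()                # normalized gate weights for chosen experts
    # Only the selected experts' parameters are touched for this token; this
    # is why a 671B-parameter MoE model can activate just 37B per token.
    return sum(g * (x @ experts[i]) for g, i in zip(gates, top))

out = moe_forward(rng.standard_normal(D_MODEL))
print(out.shape)  # (16,)
```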

While the basic architecture ensures robust performance for DeepSeek-V3, the company has also debuted two innovations to push performance further.

The first is an auxiliary-loss-free load-balancing strategy, which dynamically monitors and adjusts the load on experts to utilize them evenly without compromising overall model performance. The second is multi-token prediction (MTP), which allows the model to predict multiple future tokens simultaneously. This innovation not only improves training efficiency but also lets the model generate text up to three times faster, at 60 tokens per second.
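Here is a sketch of the balancing idea, under stated assumptions: instead of adding an auxiliary loss, a per-expert bias is nudged after each batch so that overloaded experts become less attractive to the router. The update rule and constants below are illustrative, not DeepSeek's published hyperparameters:

```python
import numpy as np

rng = np.random.default_rng(1)

N_EXPERTS, TOP_K = 8, 2
GAMMA = 0.001                   # bias update speed (assumed value)
bias = np.zeros(N_EXPERTS)      # per-expert routing bias; no extra loss term

def select_experts(scores):
    """Pick top-k experts by biased scores; the bias affects selection only."""
    return np.argsort(scores + bias)[-TOP_K:]

for step in range(100):                              # simulate 100 training batches
    scores = rng.standard_normal((256, N_EXPERTS))   # toy router outputs per token
    load = np.zeros(N_EXPERTS)
    for token_scores in scores:
        load[select_experts(token_scores)] += 1
    # Nudge biases: overloaded experts become less attractive, underloaded more.
    bias -= GAMMA * np.sign(load - load.mean())

print(np.round(load / load.sum(), 3))  # per-expert share, pushed toward uniform
```

Because balance is enforced through the bias rather than a loss term, the gradient signal that trains the experts is left untouched.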

“During pre-training, we trained DeepSeek-V3 on 14.8T high-quality and diverse tokens… Next, we conducted a two-stage context length extension for DeepSeek-V3,” the company wrote in a technical paper detailing the new model. “In the first stage, the maximum context length is extended to 32K, and in the second stage, it is further extended to 128K. Following this, we conducted post-training, including Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) on the base model of DeepSeek-V3, to align it with human preferences and further unlock its potential. During the post-training stage, we distill the reasoning capability from the DeepSeek-R1 series of models, and meanwhile carefully maintain the balance between model accuracy and generation length.”

Notably, during the training phase, DeepSeek used multiple hardware and algorithmic optimizations, including the FP8 mixed precision training framework and the DualPipe algorithm for pipeline parallelism, to cut down on the costs of the process.
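As a rough illustration of the precision side of that tradeoff, the sketch below simulates per-tensor scaled FP8 (E4M3) storage in numpy. It is a conceptual stand-in for why low-precision training is cheap yet accurate enough, not DeepSeek's actual FP8 framework or kernels:

```python
import numpy as np

def fp8_e4m3_sim(x):
    """Simulate FP8 (E4M3) storage: per-tensor scale, then ~3 mantissa bits.

    A conceptual numpy stand-in for the idea, not DeepSeek's FP8 kernels.
    """
    FP8_MAX = 448.0                      # E4M3's largest finite magnitude
    scale = FP8_MAX / np.abs(x).max()    # dynamic scale keeps values in range
    m, e = np.frexp(x * scale)           # x*scale = m * 2**e, with m in [0.5, 1)
    m = np.round(m * 16) / 16            # keep ~3 mantissa bits + implicit bit
    return np.ldexp(m, e) / scale        # dequantize back to full precision

w = np.random.default_rng(2).standard_normal((4, 4)).astype(np.float32)
err = np.abs(w - fp8_e4m3_sim(w)).max()
print(f"max quantization error: {err:.4f}")  # a few percent of the weight scale
```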

Overall, DeepSeek claims to have completed DeepSeek-V3’s entire training in about 2.788 million H800 GPU hours, or roughly $5.57 million, assuming a rental price of $2 per GPU hour. This is much lower than the hundreds of millions of dollars usually spent on pre-training large language models.
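That figure is straightforward arithmetic:

```python
gpu_hours = 2_788_000          # ~2788K H800 GPU hours reported by DeepSeek
price_per_gpu_hour = 2.00      # rental price assumed in the paper, in USD
print(f"${gpu_hours * price_per_gpu_hour / 1e6:.3f}M")  # -> $5.576M
```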

Llama-3.1, for instance, is estimated to have been trained with an investment of over $500 million. 

Strongest open-source model currently available

Despite the economical training, DeepSeek-V3 has emerged as the strongest open-source model on the market.

The company ran multiple benchmarks comparing the model’s performance and noted that it convincingly outperforms leading open models, including Llama-3.1-405B and Qwen 2.5-72B. It even outperforms the closed-source GPT-4o on most benchmarks, except the English-focused SimpleQA and FRAMES — where the OpenAI model sat ahead with scores of 38.2 and 80.5 (vs 24.9 and 73.3), respectively.

Notably, DeepSeek-V3’s performance particularly stood out on the Chinese and math-centric benchmarks, scoring better than all counterparts. On the MATH-500 test, it scored 90.2, with Qwen’s 80 the next best.

The only model that managed to challenge DeepSeek-V3 was Anthropic’s Claude 3.5 Sonnet, which outperformed it with higher scores on MMLU-Pro, IF-Eval, GPQA-Diamond, SWE-bench Verified and Aider-Edit.

https://twitter.com/deepseek_ai/status/1872242657348710721

The work shows that open-source models are closing in on their closed-source counterparts, promising nearly equivalent performance across different tasks. Such developments are good for the industry, since they reduce the chances of any single big AI player dominating the field. They also give enterprises multiple options to choose from and work with when orchestrating their stacks.

Currently, the code for DeepSeek-V3 is available via GitHub under an MIT license, while the model is being provided under the company’s model license. Enterprises can also test out the new model via DeepSeek Chat, a ChatGPT-like platform, and access the API for commercial use. DeepSeek is providing the API at the same price as DeepSeek-V2 until February 8. After that, it will charge $0.27/million input tokens ($0.07/million tokens with cache hits) and $1.10/million output tokens.
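For developers who want to try it, DeepSeek documents its API as OpenAI-compatible; the sketch below shows one way to call it and to estimate a request’s cost at the post-February-8 prices quoted above. The base URL and model name are assumptions to verify against DeepSeek’s official docs:

```python
from openai import OpenAI

# Assumed endpoint and model id; check DeepSeek's API docs before relying on them.
client = OpenAI(api_key="YOUR_DEEPSEEK_API_KEY",
                base_url="https://api.deepseek.com")

resp = client.chat.completions.create(
    model="deepseek-chat",  # assumed id for the DeepSeek-V3-backed chat model
    messages=[{"role": "user", "content": "Summarize DeepSeek-V3 in one sentence."}],
)
print(resp.choices[0].message.content)

# Cost estimate using the post-February-8 prices quoted above (no cache hits).
u = resp.usage
cost = u.prompt_tokens * 0.27 / 1e6 + u.completion_tokens * 1.10 / 1e6
print(f"~${cost:.6f} for this call")
```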


