DeepSeek Presents Model with Sparse Attention to Reduce Inference Costs by Half

Summary by WWWhat's New
The Chinese company DeepSeek has released an experimental model called V3.2-exp, aimed at improving performance on long-context operations, one of the major technical challenges for current language models. This version introduces a new mechanism called DeepSeek Sparse Attention, which is designed to reduce the computational load and thereby cut inference costs significantly. When we talk about inference, we refer to the process…
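The summary does not detail how DeepSeek Sparse Attention selects which tokens to attend to, so the following is only a minimal, generic sketch of the broader idea: letting each query score and attend to a small subset of keys instead of the full context, which is where the compute savings on long inputs come from. The function name `topk_sparse_attention`, the parameter `k`, and the NumPy implementation are illustrative assumptions, not DeepSeek's actual mechanism.

```python
# Hypothetical illustration only: a generic top-k sparse attention sketch in NumPy.
# It does NOT reproduce DeepSeek Sparse Attention; it just shows the general idea of
# attending to a small subset of keys instead of every token in a long context.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def topk_sparse_attention(Q, K, V, k=64):
    """For each query, keep only the k highest-scoring keys and ignore the rest.

    Q: (n_q, d), K: (n_kv, d), V: (n_kv, d_v). Dense attention normalizes over all
    n_kv keys per query; here the softmax and weighted sum involve only the selected
    k keys, which is the source of the compute saving on long contexts.
    """
    scores = Q @ K.T / np.sqrt(Q.shape[-1])              # (n_q, n_kv) raw scores
    k = min(k, K.shape[0])
    # Indices of the top-k keys per query (order within the top-k does not matter).
    top_idx = np.argpartition(scores, -k, axis=-1)[:, -k:]
    rows = np.arange(Q.shape[0])[:, None]
    top_scores = scores[rows, top_idx]                   # (n_q, k) selected scores
    weights = softmax(top_scores, axis=-1)               # softmax over selected keys only
    return np.einsum('qk,qkd->qd', weights, V[top_idx])  # (n_q, d_v)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Q = rng.normal(size=(8, 32))
    K = rng.normal(size=(1024, 32))
    V = rng.normal(size=(1024, 32))
    print(topk_sparse_attention(Q, K, V, k=64).shape)    # (8, 32)
```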


WWWhat's new broke the news on Wednesday, October 1, 2025.