2 min read · from Machine Learning

Orthrus: Memory-Efficient Parallel Token Generation via Dual-View Diffusion [R]


Idea: Inject a trainable diffusion attention module into each layer of a frozen AR Transformer. Both heads share one KV cache. The diffusion head drafts K=32 tokens in parallel; the AR head verifies them in a second pass and accepts the longest matching prefix. The output distribution is provably identical to the base model's.
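The draft-then-verify loop described above can be sketched in a few lines. This is a minimal illustration over plain token-ID lists, not the paper's implementation; `accept_longest_prefix` is a hypothetical helper name:

```python
def accept_longest_prefix(draft_tokens, verified_tokens):
    """Accept the longest prefix of the parallel draft that the AR verifier agrees with.

    draft_tokens: K tokens proposed in parallel by the diffusion head.
    verified_tokens: the greedy tokens the frozen AR head emits at the same
    positions, computed in one batched verification pass over the draft.
    """
    n = 0
    while n < len(draft_tokens) and draft_tokens[n] == verified_tokens[n]:
        n += 1
    # At the first mismatch, keep the AR head's own token at that position,
    # so each draft/verify cycle always makes at least one token of progress.
    if n < len(verified_tokens):
        return draft_tokens[:n] + [verified_tokens[n]]
    return draft_tokens[:n]
```

Because the final decision at every position comes from the frozen AR head, the accepted sequence is exactly what greedy decoding of the base model would have produced; the diffusion head only changes how many positions are resolved per pass.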

Results:

  • Up to 7.8× tokens per forward pass (TPF), ~6× wall-clock speedup on MATH-500.
  • 16% of params trained, <1B tokens, 24h on 8×H200.
  • vs. diffusion LMs (Dream, Fast-dLLM-v2, SDAR, Mercury, Gemini Diffusion): they modify base weights and lose accuracy (Fast-dLLM-v2: -11 pts on MATH-500). Orthrus freezes the backbone; accuracy matches Qwen3-8B exactly.
  • vs. Speculative Decoding (EAGLE-3, DFlash): No external drafter, no separate cache, and zero Time-To-First-Token (TTFT) penalty because we don't have to initialize and sync a separate drafter model. KV overhead is O(1) (~4.5 MiB flat). Acceptance length on MATH-500: 11.7 vs. 7.9 (DFlash) vs. 3.5 (EAGLE-3).
  • Single-step denoising beats multi-step (6.35 vs. 3.53 TPF). KL distillation beats cross-entropy (CE) on acceptance rate.
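The last bullet's KL-vs-CE contrast can be sketched numerically. A minimal comparison in plain Python (assumed loss forms, not the paper's training code): KL distillation matches the teacher's full next-token distribution, while CE only matches its argmax label.

```python
import math

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def kl_distill_loss(teacher_logits, student_logits):
    # KL(teacher || student): penalizes mass the student misplaces anywhere
    # in the vocabulary, not just at the teacher's top token.
    p = softmax(teacher_logits)
    q = softmax(student_logits)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def ce_loss(teacher_logits, student_logits):
    # Hard-label CE: only the teacher's argmax token contributes.
    label = max(range(len(teacher_logits)), key=teacher_logits.__getitem__)
    q = softmax(student_logits)
    return -math.log(q[label])
```

Matching the full distribution plausibly matters here because acceptance depends on the draft agreeing with the AR head token-for-token, not merely on top-1 accuracy averaged over positions.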

Limitations: strictly bounded by the frozen base model (inherits its biases, hallucinations, knowledge gaps); Qwen3-only evaluation; greedy + rejection sampling only.

https://i.redd.it/5lsf6l5w4c1h1.gif

submitted by /u/Franck_Dernoncourt


Tagged with

#Orthrus
#Memory-Efficient
#Parallel Token Generation
#Dual-View Diffusion
#AR Transformer
#KV cache
#Diffusion head
#Token projection
#Acceptance length
#Time-To-First-Token (TTFT)
#Speculative Decoding
#Single-step denoising
#KL distillation
#Cross-entropy