2 min read · from Machine Learning
Orthrus: Memory-Efficient Parallel Token Generation via Dual-View Diffusion [R]
Idea: inject a trainable diffusion attention module into each layer of a frozen AR Transformer. Both heads share a single KV cache: the diffusion head drafts K=32 tokens in parallel, and the AR head verifies them in a second pass, accepting the longest matching prefix (sketched in the code below). The output distribution is provably identical to the base model's. Results:
Limitations: strictly bounded by the frozen base model (it inherits its biases, hallucinations, and knowledge gaps); evaluated only on Qwen3; greedy decoding with rejection sampling only.
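The propose-then-verify step reads as speculative decoding with the diffusion head as the drafter. Below is a minimal sketch of one greedy decoding step under that reading; `diffusion_head`, `ar_model`, and `kv_cache` are hypothetical stand-ins, not the authors' API.

```python
# Minimal sketch of one propose-then-verify step under greedy decoding.
# `diffusion_head`, `ar_model`, and `kv_cache` are hypothetical stand-ins:
# the diffusion head drafts K tokens from the shared KV cache, and the
# frozen AR model re-scores them in a single verification pass.
import torch

K = 32  # tokens drafted in parallel per step

def generate_step(ar_model, diffusion_head, kv_cache, prefix_ids):
    # 1. Diffusion head proposes K tokens in parallel, reading the shared KV cache.
    draft_ids = diffusion_head(prefix_ids, kv_cache)                  # shape: (K,)

    # 2. Frozen AR head scores prefix + draft in one forward pass.
    logits = ar_model(torch.cat([prefix_ids, draft_ids]), kv_cache)   # (seq, vocab)
    ar_ids = logits[len(prefix_ids) - 1 :].argmax(-1)                 # K + 1 greedy predictions

    # 3. Accept the longest prefix on which draft and AR predictions agree.
    n_accept = int((draft_ids == ar_ids[:K]).long().cumprod(0).sum())

    # 4. Emit the AR head's own token at the first mismatch (or the bonus token
    #    after a full match), so the output equals plain greedy AR decoding.
    #    A real implementation would also roll the KV cache back to the
    #    accepted length before the next step.
    accepted = torch.cat([draft_ids[:n_accept], ar_ids[n_accept : n_accept + 1]])
    return torch.cat([prefix_ids, accepted])
```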
Tagged with
#Orthrus
#Memory-Efficient
#Parallel Token Generation
#Dual-View Diffusion
#AR Transformer
#KV cache
#Diffusion head
#Token projection
#Acceptance length
#Time-To-First-Token (TTFT)
#Speculative Decoding
#Single-step denoising
#KL distillation
#CE on acceptance rate