Transformers with Selective Access to Early Representations [R]
Hello everyone. I'm excited to share our new paper!

Figure 1: Comparison Across Architectures

A lot of recent Transformer variants try to improve information flow across depth by exposing later layers to earlier representations. You may have recently heard about methods like DenseFormer, MUDDFormer, and HyperConnections, which add denser or more dynamic cross-layer pathways. These are expressive, but they can also come with meaningful throughput and memory costs.

Our question was more specific: can we improve the efficiency-performance tradeoff at scale by enabling more principled reuse of early representations?

We introduce SATFormer, which keeps the same cheap first-layer value pathway used by value residual learning, but replaces static layer-wise mixing with a per-token, per-head, context-dependent gate. Instead of uniformly copying early features into every later layer, SATFormer learns when and where each head should re-access the first-layer value stream (a rough sketch of this gating idea follows the links below).

Main results:
The core framing is that early-representation reuse may be better treated as a retrieval/control problem rather than a connectivity/maximal-routing problem. Overall, I'm excited to discuss what better approaches there may be to improving the Transformer architecture while maintaining high throughput.

Arxiv: https://arxiv.org/pdf/2605.03953

GitHub (still WIP): https://github.com/SkyeGunasekaran/SATFormer
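As promised above, here is a minimal PyTorch sketch of the per-token, per-head gate over the first-layer value stream. This is my illustration of the idea, not code from the repo: the module name `GatedValueMixer`, the `v_first` cache, and the choice of a sigmoid over a linear projection of the token's hidden state are all assumptions.

```python
import torch
import torch.nn as nn

class GatedValueMixer(nn.Module):
    """Per-token, per-head, context-dependent mixing of this layer's value
    vectors with cached first-layer values (illustrative sketch, not the
    SATFormer reference implementation)."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        # One gate logit per head, computed from each token's hidden state.
        self.gate_proj = nn.Linear(d_model, n_heads)

    def forward(self, x, v, v_first):
        # x:       (batch, seq, d_model)            hidden states at this layer
        # v:       (batch, seq, n_heads, head_dim)  this layer's values
        # v_first: (batch, seq, n_heads, head_dim)  cached first-layer values
        g = torch.sigmoid(self.gate_proj(x)).unsqueeze(-1)  # (b, s, h, 1)
        # g near 1 keeps the current layer's values; g near 0 re-accesses the
        # first-layer value stream for that token and head.
        return g * v + (1.0 - g) * v_first
```

For contrast, value residual learning's static mixing would use a single learned (or fixed) scalar per layer, e.g. `lam * v + (1 - lam) * v_first`, applied uniformly to every token and head; the context-dependent gate above is what makes the reuse selective.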