2 min read · from Machine Learning

I scaled a pure Spiking Neural Network (SNN) to 1.088B parameters from scratch. Ran out of budget, but here is what I found [R]

Hey everyone. I’m an 18yo indie dev, and I’ve been experimenting with Spiking Neural Networks (SNNs) for language modeling. A lot of papers (like SpikeBERT) report that training 1B+ SNNs directly from random initialization fails due to vanishing gradients, so people usually fall back on ANN-to-SNN conversion or distillation. I wanted to see if I could force one to converge purely in the spike domain. I had to stop at 27k steps because my wallet is literally empty lol, but by then the loss had come down to 4.4.
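For context on why training "purely in the spike domain" is hard: the spiking threshold is a step function with zero gradient almost everywhere, so from-scratch training typically uses a surrogate gradient, a smooth stand-in derivative applied only in the backward pass. Here's a minimal NumPy sketch of one leaky integrate-and-fire (LIF) step with a fast-sigmoid surrogate; this is my illustration of the general technique, not the author's actual code, and `tau`, `v_th`, and `alpha` are assumed hyperparameters:

```python
import numpy as np

def lif_step(v, x, tau=2.0, v_th=1.0):
    """One leaky integrate-and-fire step: leak, integrate input, spike, reset."""
    v = v / tau + x                        # leaky integration of input current
    spikes = (v >= v_th).astype(v.dtype)   # hard threshold: non-differentiable
    v = v - spikes * v_th                  # soft reset by subtraction
    return v, spikes

def surrogate_grad(v, v_th=1.0, alpha=2.0):
    """Fast-sigmoid surrogate derivative, used in the backward pass in place
    of the true Heaviside gradient (which is zero almost everywhere)."""
    return 1.0 / (alpha * np.abs(v - v_th) + 1.0) ** 2

v = np.zeros(4)
v, s = lif_step(v, np.array([0.3, 1.2, 0.8, 2.5]))
# Neurons whose membrane potential crossed v_th emit a spike: [0, 1, 0, 1]
```

The surrogate peaks at the threshold and decays away from it, so gradients flow mainly through neurons that were close to firing.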

Here are the most interesting things that happened:

  1. Massive sparsity: The model maintains ~93% sparsity, i.e. only about 7% of neurons fire per token, which makes inference dramatically cheaper than a dense model of the same size.
  2. Cross-lingual emergence: Around step 25k, it spontaneously started generating structurally correct Russian text, even though Russian wasn't explicitly targeted or weighted in the dataset mix.
  3. Memory routing shift: As I scaled the architecture from 600M to 1B, the model shifted 39% of its activation routing into the persistent memory module on its own. It basically learned that memory becomes more valuable at larger scale.
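The sparsity figure is straightforward to sanity-check: count the fraction of neurons that stay silent per token and average over a batch. A quick sketch, assuming a binary spike raster of shape (tokens, neurons); the ~7% firing rate here is simulated for illustration, not the author's data:

```python
import numpy as np

def firing_sparsity(spikes: np.ndarray) -> float:
    """Fraction of neuron activations that are zero (did not fire)."""
    return float(1.0 - spikes.mean())

# Simulated spike raster: 512 tokens x 1024 neurons, ~7% firing probability.
rng = np.random.default_rng(0)
spikes = (rng.random((512, 1024)) < 0.07).astype(np.float32)
print(f"sparsity ≈ {firing_sparsity(spikes):.2f}")  # ≈ 0.93
```

On event-driven (neuromorphic) hardware this sparsity translates directly into skipped work, since silent neurons contribute no downstream computation.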

Limitations (Being honest):
The text generation is still janky and nowhere near GPT-2 fluency yet. The loss (4.4) is high, mostly because I couldn't train it longer. But showing that a pure 1B SNN can converge from random init at all feels like a solid milestone.

I'm sharing this because I'd love some harsh technical feedback.

  1. Does anyone here have experience with neuromorphic hardware? Would an architecture like this map well to Loihi?
  2. If anyone has tips on pushing SNN loss lower or stabilizing surrogate gradients further, I'm all ears.

The code, architecture details, and the full 12GB training checkpoint (weights + optimizer states) are on my GitHub.

submitted by /u/zemondza


Tagged with

#natural language processing
#Spiking Neural Network
#SNN
#sparsity
#language modeling
#vanishing gradients
#cross-lingual emergence
#parameters
#training
#memory routing
#persistent memory module