
Training a number-aware embedding model + Text JEPA doesn't work too well + Text auto-encoders have a strange frequency bias [R][P]

Hi guys!

I've spent a year trying to predict company growth from the full text of their 10-K filings.

It completely failed.

But I've had a lot of fun playing with encoder transformers and making them good at numbers (bypassing the tokenizer and the prediction head whenever the model sees one). I MLM-trained a modified ModernBERT for this and it works really well. The model is available on HF: https://huggingface.co/edereynal/financial_bert
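For the curious, the number-bypass idea boils down to swapping a number token's embedding for a learned projection of its float value (and, on the output side, predicting the value with a regression head instead of token logits). Here's a minimal sketch of the embedding side; the names are illustrative, not from the actual repo:

```python
import torch
import torch.nn as nn

class NumberAwareEmbedding(nn.Module):
    """Sketch: bypass the tokenizer for numbers by replacing their token
    embedding with a learned projection of the raw float value."""
    def __init__(self, token_embedding: nn.Embedding, hidden_size: int):
        super().__init__()
        self.token_embedding = token_embedding
        # Small MLP mapping a scalar value to the model's hidden size.
        self.num_proj = nn.Sequential(
            nn.Linear(1, hidden_size),
            nn.GELU(),
            nn.Linear(hidden_size, hidden_size),
        )

    def forward(self, input_ids, number_values, number_mask):
        # input_ids:     (batch, seq) regular token ids
        # number_values: (batch, seq) float value where a number sits, else 0
        # number_mask:   (batch, seq) True at number positions
        emb = self.token_embedding(input_ids)
        num_emb = self.num_proj(number_values.unsqueeze(-1).float())
        return torch.where(number_mask.unsqueeze(-1), num_emb, emb)
```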

Then I turned this MLM-trained model into a nice sequence embedder.
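(The post doesn't say which pooling was used, but the standard way to get a sequence embedding out of an MLM encoder is masked mean pooling over the final hidden states:)

```python
import torch

def mean_pool(last_hidden_state, attention_mask):
    """Masked mean pooling: one embedding vector per sequence."""
    mask = attention_mask.unsqueeze(-1).float()      # (batch, seq, 1)
    summed = (last_hidden_state * mask).sum(dim=1)   # (batch, hidden)
    counts = mask.sum(dim=1).clamp(min=1e-9)         # avoid div-by-zero on empty rows
    return summed / counts
```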

I've experimented with JEPA, but it failed.
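(For reference: in a text JEPA setup, the model predicts latent representations of masked spans from the visible context instead of reconstructing tokens. A rough, illustrative sketch of the core loss, assuming an EMA target encoder as in the original JEPA papers; the actual setup in the blog post may differ:)

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def ema_update(target_encoder, online_encoder, decay=0.999):
    # The target encoder is a slow exponential moving average of the online one.
    for p_t, p_o in zip(target_encoder.parameters(), online_encoder.parameters()):
        p_t.mul_(decay).add_(p_o, alpha=1 - decay)

def jepa_loss(context_encoder, target_encoder, predictor,
              context_ids, full_ids, target_mask):
    # Predict latents of the masked spans from the visible context.
    pred = predictor(context_encoder(context_ids))   # (batch, seq, hidden)
    with torch.no_grad():
        tgt = target_encoder(full_ids)               # targets get no gradient
    # Regression in latent space, only at masked positions.
    return F.smooth_l1_loss(pred[target_mask], tgt[target_mask])
```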

The auto-encoder setup worked much better, but I ran into a strange frequency bias: the decoder only cared about high-frequency information, and I had to mitigate it by adding a contrastive loss term.
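The mitigation amounts to mixing the reconstruction objective with a contrastive term on the bottleneck embeddings, which pushes the encoder to also keep low-frequency, gist-level information. An illustrative sketch with an InfoNCE-style term (the exact formulation in the blog post may differ):

```python
import torch
import torch.nn.functional as F

def ae_contrastive_loss(recon_logits, target_ids, emb_a, emb_b,
                        tau=0.05, alpha=0.5):
    # Reconstruction: the decoder predicts the original tokens.
    recon = F.cross_entropy(recon_logits.flatten(0, 1), target_ids.flatten())
    # Contrastive (InfoNCE): two views of the same document embed close
    # together and away from the other documents in the batch.
    a = F.normalize(emb_a, dim=-1)
    b = F.normalize(emb_b, dim=-1)
    logits = a @ b.T / tau                           # (batch, batch) similarities
    labels = torch.arange(a.size(0), device=a.device)
    return recon + alpha * F.cross_entropy(logits, labels)
```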

I also investigated the tendency of transformers to produce a low effective-dimensionality output space (compared to their input embedding space).
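(A common way to quantify this is the effective rank of a batch of embeddings, i.e. the exponential of the entropy of the normalized singular values; something like:)

```python
import torch

def effective_rank(embeddings):
    """exp(entropy of normalized singular values); low values mean the
    encoder only uses a few directions of its output space."""
    x = embeddings - embeddings.mean(dim=0, keepdim=True)  # center, (n, d)
    s = torch.linalg.svdvals(x)
    p = s / s.sum()
    return torch.exp(-(p * torch.log(p + 1e-12)).sum()).item()
```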

So, here's the technical blog post, which reads a bit like "how to waste 1,000 hours and $400 trying to solve an unsolvable real-world problem, but having a lot of fun along the way":

https://www.eloidereynal.com/p/i-spent-1-year-trying-to-predict

submitted by /u/Academic_Sleep1118


Tagged with

#embedding model
#ModernBERT
#10-k filings
#Text JEPA
#auto-encoders
#MLM-trained model
#frequency bias
#contrastive loss
#transformers
#sequence embedder
#decoder
#effective-dimensionality
#company growth prediction