From r/MachineLearning

Fixing Unsupervised Hyperbolic Contrastive Loss [D]

Hello all,

I am trying to implement an unsupervised hyperbolic contrastive loss on the ImageNet-1k dataset. My results show that a simple Euclidean (cosine) contrastive loss performs much better than the hyperbolic version, and I would like to understand why. I am using expmap() and projx() to ensure the embeddings lie on the Lorentz manifold. Below is my code:

    import torch
    import torch.nn.functional as F

    def hb_contrastive_loss(z, z1, model, temp=0.07):
        # Pairwise geodesic distances between the two views; shape (B, B).
        z_to_neighbor = model.manifold.dist(z.unsqueeze(1), z1.unsqueeze(0))
        # The positive pair for row i is column i.
        labels = torch.arange(z.size(0), device=z.device)
        # Closer points should score higher, so negate the distance.
        logits = -z_to_neighbor / temp
        loss = F.cross_entropy(logits, labels)
        return loss
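For reference, the exponential map at the origin and the geodesic distance on the Lorentz model (curvature -1) can be sketched in plain PyTorch. This is a minimal illustration of the expmap/distance machinery the loss relies on, not the library implementation used above; the function names here are my own:

    import torch

    def lorentz_expmap0(v, eps=1e-6):
        # Map a Euclidean tangent vector v in R^d (at the origin) onto the
        # Lorentz model in R^(d+1): exp_0(v) = (cosh||v||, sinh||v|| * v/||v||).
        norm = v.norm(dim=-1, keepdim=True).clamp_min(eps)
        x_time = torch.cosh(norm)                # time coordinate
        x_space = torch.sinh(norm) * v / norm    # space coordinates
        return torch.cat([x_time, x_space], dim=-1)

    def lorentz_dist(x, y, eps=1e-6):
        # Geodesic distance d(x, y) = arccosh(-<x, y>_L), where the Lorentzian
        # inner product negates the product of the time components.
        inner = -x[..., 0] * y[..., 0] + (x[..., 1:] * y[..., 1:]).sum(-1)
        return torch.acosh((-inner).clamp_min(1.0 + eps))

A point x produced by lorentz_expmap0 satisfies the manifold constraint <x, x>_L = -1 (since sinh^2 - cosh^2 = -1), and lorentz_dist of a point to itself is approximately zero, which is a quick sanity check worth running on any hyperbolic pipeline before training.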

Current results for 1-NN accuracy:

Hyperbolic = 57%
Cosine = 64%

More information (if relevant):
Batch size = 2048
LR = 1e-4

submitted by /u/arjun_r_kaushik

