
[D] ICML 2026 review policy debate: 100 responses suggest Policy B may score higher, while Policy A shows higher confidence


A week ago I made a thread asking whether ICML 2026’s review policy might have affected review outcomes, especially whether Policy A papers may have been judged more harshly than Policy B papers.

Original thread: https://www.reddit.com/r/MachineLearning/comments/1s387tx/d_icml_2026_policy_a_vs_policy_b_impact_on_scores/
Poll: https://docs.google.com/forms/d/e/1FAIpQLSdQilhiCx_dGLgx0tMVJ1NDX1URdJoUGIscFoPCpe6qE2Ph8w/viewform?usp=header

The goal was not to prove causality. It was simply to collect a rough community snapshot and see whether there are any visible trends in:

  • reported average scores,
  • reported reviewer confidence,
  • whether scores felt harsher than expected,
  • and whether reviews felt especially polished.

Now, before the rebuttal scores come in, I want to share the current results from the survey.

Important disclaimer

These results are still not conclusive. This is a self-selected community poll, not an official dataset, and there are many possible sources of bias. So please read this as descriptive, preliminary data, not as proof that one policy caused better or worse outcomes. Still, with 100 responses after one week, I think the data are now interesting enough to at least discuss.

Sample size

  • 100 total submissions
  • 99 submissions with a valid average score
  • 91 submissions with a valid average confidence

By policy:

  • Policy A: 59 responses
  • Policy B: 41 responses

Summary table

| Policy | Responses | Mean score | Score SD | Mean confidence | Confidence responses |
|---|---|---|---|---|---|
| Policy A | 59 | 3.26 | 0.50 | 3.53 | 55 |
| Policy B | 41 | 3.43 | 0.63 | 3.35 | 36 |
| Total | 100 | 3.33* | 0.56* | 3.46** | 91 |

* based on 99 valid average score entries
** based on 91 valid confidence entries
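For context, a rough significance check is possible from the summary statistics alone. The sketch below applies Welch's t-test to the reported means and SDs. It assumes, as an approximation, that the 99 valid score entries split roughly 59/41 between the policies (the exact split of the one invalid entry is not reported):

```python
import math

# Reported summary statistics from the poll (descriptive only).
# The n values reuse the per-policy response counts as an approximation;
# the exact split of the 99 valid score entries is not reported.
n_a, mean_a, sd_a = 59, 3.26, 0.50   # Policy A
n_b, mean_b, sd_b = 41, 3.43, 0.63   # Policy B

# Welch's t statistic from summary statistics (unequal variances).
se = math.sqrt(sd_a**2 / n_a + sd_b**2 / n_b)
t = (mean_b - mean_a) / se

# Welch-Satterthwaite approximation for the degrees of freedom.
num = (sd_a**2 / n_a + sd_b**2 / n_b) ** 2
den = (sd_a**2 / n_a) ** 2 / (n_a - 1) + (sd_b**2 / n_b) ** 2 / (n_b - 1)
df = num / den

print(f"t = {t:.2f}, df = {df:.1f}")  # t stays well below ~2
```

The resulting t is well below the usual 5% significance threshold, so the 0.17-point score gap is within what sampling noise alone could produce at this sample size, which matches the "not conclusive" framing above.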

Plot 1: score distribution by policy


First patterns I see:

1) Policy B currently has a somewhat higher reported mean score

At the moment, the average reported score is higher for Policy B (3.43) than for Policy A (3.26). This does not establish that Policy B was advantaged in a causal sense, but the difference is visible enough that it seems worth discussing.

2) Policy A currently has higher reported reviewer confidence

Interestingly, the confidence pattern goes in the opposite direction: the average reported reviewer confidence is higher for Policy A (3.53) than for Policy B (3.35). To me, this inverse relationship between scores and confidence is one of the more interesting patterns in the current data. One possible interpretation is that reviewers who rely on external reasoning (in this case, an LLM) are less confident in their opinions, perhaps because they did not fully read the paper, and are therefore more skeptical that their own review is valid.

3) Both groups lean toward “harsher than expected”, but this is stronger for Policy A

| Policy | Harsher than expected | About as expected | More lenient than expected |
|---|---|---|---|
| Policy A | 67.8% | 28.8% | 3.4% |
| Policy B | 58.5% | 29.3% | 12.2% |
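The percentages above correspond to exact counts (Policy A: 40/17/2 of 59; Policy B: 24/12/5 of 41). As a rough check on whether the difference in harshness distributions could be chance, here is a minimal Pearson chi-square test of independence on those counts, in pure Python. One caveat: the smallest expected cell is below 5, so the chi-square approximation is shaky here:

```python
# Counts recovered from the reported percentages:
# [harsher, about as expected, more lenient]
obs = {
    "A": [40, 17, 2],   # Policy A, n = 59
    "B": [24, 12, 5],   # Policy B, n = 41
}

rows = list(obs.values())
col_totals = [sum(col) for col in zip(*rows)]
grand = sum(col_totals)

# Pearson chi-square statistic for a 2x3 table (df = 2).
chi2 = 0.0
for row in rows:
    row_total = sum(row)
    for o, col_total in zip(row, col_totals):
        e = row_total * col_total / grand  # expected count under independence
        chi2 += (o - e) ** 2 / e

print(f"chi2 = {chi2:.2f} (df = 2; the 5% critical value is 5.99)")
```

The statistic comes out around 3, well under the 5% critical value of 5.99, so even the most visible difference (the "more lenient" cell) is compatible with noise at the current sample size.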

So both groups lean toward the feeling that scores were harsher than expected, but this is more pronounced for Policy A in the current sample. This could, however, also be attributed to the lower mean scores of Policy A, which may subjectively make Policy A respondents feel unfairly treated.

Plot 3: perceived harshness by policy


4) “Especially polished” reviews are reported much more often for Policy B

| Policy | No | Somewhat | Yes |
|---|---|---|---|
| Policy A | 37.3% | 49.2% | 13.6% |
| Policy B | 31.7% | 36.6% | 31.7% |

The biggest difference here is the “Yes” category: in the current sample, respondents under Policy B are much more likely to describe the reviews as especially polished. Of course, this does not prove LLM use, and I do not want to overstate that point. But it is still a pattern that seems relevant to the original debate.

My current interpretation

My current reading is:

  • there is some tendency toward higher reported scores under Policy B,
  • there is some tendency toward higher reported reviewer confidence under Policy A,
  • and there is a noticeable difference in how often reviews are described as especially polished, with that being reported more often for Policy B.

At the same time, I do not say these data justify a strong conclusion like:

  • “Policy B clearly had an unfair advantage”, or
  • “LLMs caused score inflation”.

But they justify an open debate.

There are, however, many confounders:

  • the survey is self-selected,
  • people who feel affected by this issue are more likely to respond,
  • and different subfields / paper strengths / reviewer pools may all matter.

I would really like to hear opinions on these early results.

Also, if you have not filled out the survey yet, please do. And please share it, especially with people under both policies, so the sample can become larger, more informative, and more representative. If enough additional responses come in, I can post a follow-up after rebuttal as well.

Motivation

I openly admit that my motivations for running this survey were: A) I initially felt I had potentially been treated unfairly and wanted to know the reality; and B) I really love data analysis of any kind, and debates. After a week, I mainly do it for motivation B.

submitted by /u/Available_Net_6429

