1 min read, from Machine Learning

[R] Which LLMs are actually best for bleeding-edge Linux/ML debugging workflows in 2026?

I’m trying to optimize an AI workflow for bleeding-edge Linux/ML debugging (Arch/CachyOS, CUDA, Python, unsloth, etc.).

Current stack:

- Claude = deep reasoning/mastermind

- Gemini 3.1 Pro = execution/logistics

- Perplexity = retrieval

Main problem: Gemini often gives high-friction or impractical fixes and degrades badly in long troubleshooting sessions. Example: it suggested a lengthy Podman-based workflow for an unsloth/Python environment issue that micromamba solved much faster.
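For context, the micromamba route mentioned above looks roughly like this: a throwaway env instead of a container image. This is a minimal sketch, not the exact fix from my session; the env name and Python/package versions are assumptions you'd adjust to your CUDA stack.

```shell
# Install micromamba via the official install script
"${SHELL}" <(curl -L micro.mamba.pm/install.sh)

# Create a clean, isolated env with a pinned Python from conda-forge
micromamba create -n unsloth-dbg python=3.11 -c conda-forge -y

# Activate it and install the ML packages with pip inside the env
micromamba activate unsloth-dbg
pip install unsloth

# If the env breaks mid-debugging, throw it away and recreate it --
# much lower friction than rebuilding and re-running a Podman image.
micromamba env remove -n unsloth-dbg -y
```

The point is turnaround time: recreating a micromamba env takes seconds, while iterating on a containerized workflow means editing a Containerfile and rebuilding on every change.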

I also have access to hosted open models:

- Qwen 3 Coder 30B

- Qwen 3.5 122B

- Mistral Large 675B

- DeepSeek R1 Distill 70B

etc.

Question:

For people doing real-world Linux/ML/debugging workflows (not benchmarks), what currently works best as the “execution/logistics” model with strong web/recent-ecosystem awareness?

I care more about:

- practical fixes

- low friction

- stable long sessions

- debugging quality

than benchmark scores.

submitted by /u/minaco5mko


Tagged with

#Linux
#ML
#debugging
#practical fixes
#low friction
#workflow
#execution
#logistics
#Python
#stable sessions