(How) could an ARC-3 solution be a threat? [D]
As many of you might be aware, the ARC-AGI-3 competition has just started. (In case you're not familiar: it's a human/AI benchmark designed to find what AI still struggles with but humans solve with ease, basically trying to push AI research toward new ideas that make AI think in a more human-like way, assuming that's what such tasks require. You can read more in their docs.)

Seeing as the benchmark has so far only been solved at 0.68%, I was wondering what a real solution would look like. If a system has to explore and collect data, infer rules and patterns, decide which are useful, and then establish a set of rules and apply them, such a system/algorithm would do essentially what a successful scientist does.

Apart from it being quite unrealistic in the very near future, I do think that such a model (one that achieves ~100% on ARC-3), if open-sourced (which is a condition to win the competition), would hold great potential for dangerous applications, such as military use (engineering weapons), cybersecurity attacks, manipulation, etc. Do you agree?
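To make the "explore, infer rules, apply them" loop concrete, here is a minimal toy sketch. Everything in it is my own invention for illustration (the `CounterEnv` environment, the function names, the consistency-filter heuristic); it bears no resemblance to an actual ARC-3 solver, it just shows the three phases the post describes: random exploration to collect data, keeping only rules that are consistent across all observations, then planning with the inferred rules.

```python
import random
from collections import deque

def explore(env, steps=200):
    """Phase 1: collect (state, action, next_state) observations by acting randomly."""
    data = []
    state = env.reset()
    for _ in range(steps):
        action = random.choice(env.actions)
        next_state = env.step(state, action)
        data.append((state, action, next_state))
        state = next_state
    return data

def infer_rules(data):
    """Phase 2: keep only (state, action) -> next_state mappings that are
    consistent across all observations, discarding contradictory ones."""
    rules = {}
    for s, a, s2 in data:
        if rules.get((s, a), s2) != s2:
            rules[(s, a)] = None  # contradicted by another observation: discard
        else:
            rules[(s, a)] = s2
    return {k: v for k, v in rules.items() if v is not None}

def apply_rules(rules, state, goal, max_depth=20):
    """Phase 3: breadth-first search over the inferred rules to reach the goal."""
    frontier = deque([(state, [])])
    seen = {state}
    while frontier:
        s, plan = frontier.popleft()
        if s == goal:
            return plan
        if len(plan) >= max_depth:
            continue
        for (rs, a), s2 in rules.items():
            if rs == s and s2 not in seen:
                seen.add(s2)
                frontier.append((s2, plan + [a]))
    return None  # goal unreachable with the rules learned so far

class CounterEnv:
    """Toy deterministic environment: state is an integer, actions shift it by 1."""
    actions = ["inc", "dec"]
    def reset(self):
        return 0
    def step(self, state, action):
        return state + 1 if action == "inc" else state - 1

env = CounterEnv()
observations = explore(env)
rules = infer_rules(observations)
plan = apply_rules(rules, state=0, goal=3)
```

The point of the toy is only structural: a real solver would need far richer hypothesis spaces than a lookup table of transitions, and that's exactly where the "scientist-like" reasoning the post worries about would come in.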