How are you helping your company understand the limitations of AI-derived data?
From my perspective, one of the biggest challenges of data science as a field right now is the tension between:
A) AI can give "pretty good" answers extremely fast, which democratizes analysis
B) Those answers are often decent, but could be nontrivially "wrong"
C) That "wrongness" is often not exposed for months or years
That is, AI fully democratizes "getting a number" for our biz stakeholders across just about any business problem. A lot of the time that number is somewhat off but still pretty good and useful, but we all know it's sometimes catastrophically wrong. Even in those worse cases, there's pressure to move fast, so the consequences of that wrong number aren't felt or discovered until a good while later (when a prediction turns out to have been wrong retroactively, when flaws in a matching process are discovered, when it turns out to have been the wrong "data-informed" decision, etc.).
This is exacerbated by the fact that a lot of biz users seemingly either don't understand, or simply don't care, that "the number could be wrong". Perverse incentive structures don't help either.
So my question is - what, if anything, are you doing at your company to help stakeholders understand that? Or more importantly, to help build a culture that handles this scenario more responsibly?
(yes yes, there's maybe not much we can do about it. CEO whims and all that. But I'm interested in what steps people are proactively taking)