… it’s potentially disastrous, because there’s often a tradeoff between accuracy and explainability. I would rather be diagnosed by an algorithm that is 90 percent accurate and gives no explanations than by one that is 80 percent accurate and does. Different people will make different choices in different situations. Why should the government imp…
Not necessarily. We are fast moving into an era in which humans and AI agents will work together as teams, hence the huge interest in Human-Aware AI. Explanations matter in this setting because the human teammates need to understand the results their AI agents produce. If a human works with an AI agent that is 80% accurate but provides explanations, they can probably use its results more effectively, finish the task, and perhaps push the overall accuracy up to 95%. With the 90%-accurate but opaque model, the human is left clueless and the macro-task may remain unaccomplished.
For the synergy of human-robot collaboration, explanations will be necessary.