8,947 research outputs found
xxAI - Beyond Explainable AI
This is an open access book.
Statistical machine learning (ML) has triggered a renaissance of artificial intelligence (AI). While the most successful ML models, including Deep Neural Networks (DNN), have achieved ever better predictive performance, they have become increasingly complex, at the expense of human interpretability (correlation vs. causality). The field of explainable AI (xAI) has emerged with the goal of creating tools and models that are both predictive and interpretable, i.e., understandable for humans.
Explainable AI is receiving huge interest in the machine learning and AI research communities, across academia, industry, and government, and there is now an excellent opportunity to push towards successful explainable AI applications. This volume will help the research community to accelerate this process, to promote a more systematic use of explainable AI to improve models in diverse applications, and ultimately to better understand how current explainable AI methods need to be improved and what kind of theory of explainable AI is needed.
After overviews of current methods and challenges, the editors include chapters that describe new developments in explainable AI. The contributions are from leading researchers in the field, drawn from both academia and industry, and many of the chapters take a clear interdisciplinary approach to problem-solving. The concepts discussed include explainability, causability, and AI interfaces with humans, and the applications include image processing, natural language, law, fairness, and climate science.
Does Explainable Artificial Intelligence Improve Human Decision-Making?
Explainable AI provides insight into the "why" behind model predictions, offering the potential for users to better understand and trust a model, and to recognize and correct incorrect AI predictions. Prior research on human interaction with explainable AI has focused on measures such as interpretability, trust, and usability of the explanation. Whether explainable AI can improve actual human decision-making, and whether it helps users identify problems with the underlying model, remain open questions.
Using real datasets, we compare and evaluate objective human decision accuracy without AI (control), with an AI prediction (no explanation), and with an AI prediction plus explanation. We find that providing any kind of AI prediction tends to improve user decision accuracy, but we find no conclusive evidence that explainable AI has a meaningful impact. Moreover, the strongest predictor of human decision accuracy was AI accuracy, and users were somewhat able to detect when the AI was correct versus incorrect, but this ability was not significantly affected by including an explanation. Our results indicate that, at least in some situations, the "why" information provided by explainable AI may not enhance user decision-making, and further research may be needed to understand how to integrate explainable AI into real systems.
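The comparison described above reduces to measuring human decision accuracy under three conditions. The sketch below shows one minimal way such an analysis could be organized; the trial records, field names, and helper functions are illustrative assumptions, not the study's actual data or code.

```python
# Minimal sketch (not the paper's analysis pipeline): comparing human decision
# accuracy across the three conditions described in the abstract.
# The trial records below are hypothetical placeholders.
from collections import defaultdict

# Each trial: which condition the participant saw, whether the AI prediction
# was correct (None in the control condition), and whether the human's final
# decision was correct.
trials = [
    {"condition": "control",        "ai_correct": None,  "human_correct": True},
    {"condition": "ai_only",        "ai_correct": True,  "human_correct": True},
    {"condition": "ai_explanation", "ai_correct": False, "human_correct": False},
    # ... more trials would go here in a real study
]

def accuracy_by_condition(records):
    """Return mean human decision accuracy per experimental condition."""
    counts = defaultdict(lambda: [0, 0])  # condition -> [correct, total]
    for r in records:
        counts[r["condition"]][0] += int(r["human_correct"])
        counts[r["condition"]][1] += 1
    return {cond: correct / total for cond, (correct, total) in counts.items()}

def accuracy_by_ai_correctness(records):
    """Human decision accuracy, split by whether the AI prediction was correct."""
    by_ai = defaultdict(lambda: [0, 0])
    for r in records:
        if r["ai_correct"] is None:
            continue  # control condition has no AI prediction
        by_ai[r["ai_correct"]][0] += int(r["human_correct"])
        by_ai[r["ai_correct"]][1] += 1
    return {ai: correct / total for ai, (correct, total) in by_ai.items()}

print(accuracy_by_condition(trials))
print(accuracy_by_ai_correctness(trials))
```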
Explainable AI is Dead, Long Live Explainable AI! Hypothesis-driven decision support
In this paper, we argue for a paradigm shift from the current model of explainable artificial intelligence (XAI), which may be counter-productive to better human decision making. In early decision support systems, we assumed that we could give people recommendations and that they would consider them, and then follow them when required. However, research found that people often ignore recommendations because they do not trust them; or perhaps even worse, people follow them blindly, even when the recommendations are wrong. Explainable artificial intelligence mitigates this by helping people to understand how and why models give certain recommendations. However, recent research shows that people do not always engage with explainability tools enough to help improve decision making. The assumption that people will engage with recommendations and explanations has proven to be unfounded. We argue this is because we have failed to account for two things. First, recommendations (and their explanations) take control from human decision makers, limiting their agency. Second, giving recommendations and explanations does not align with the cognitive processes employed by people making decisions.
This position paper proposes a new conceptual framework called Evaluative AI for explainable decision support. This is a machine-in-the-loop paradigm in which decision support tools provide evidence for and against decisions made by people, rather than recommendations to accept or reject. We argue that this mitigates issues of over- and under-reliance on decision support tools, and better leverages human expertise in decision making.
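As a rough illustration of the evidence-for-and-against idea, the sketch below splits the per-feature contributions of a toy linear model into evidence supporting and opposing a candidate decision. The model, feature names, and values are hypothetical assumptions; the Evaluative AI framework itself is a conceptual paradigm, not this code.

```python
# Minimal sketch of presenting "evidence for and against" a candidate decision
# (not the authors' Evaluative AI framework). Assumes a linear model, so each
# feature's signed contribution can be read off directly; the feature names,
# weights, and values are hypothetical placeholders.
import numpy as np

feature_names = ["income", "debt_ratio", "late_payments", "employment_years"]
coefficients  = np.array([0.8, -1.2, -0.9, 0.5])   # toy linear model weights
baseline      = np.array([0.0, 0.0, 0.0, 0.0])     # reference feature values
applicant     = np.array([1.4, 0.6, 2.0, 0.3])     # standardized feature values

# Signed contribution of each feature toward the candidate decision ("approve").
contributions = coefficients * (applicant - baseline)

def evidence_report(names, contribs):
    """Split feature contributions into evidence for and against a decision."""
    pro = [(n, c) for n, c in zip(names, contribs) if c > 0]
    con = [(n, c) for n, c in zip(names, contribs) if c < 0]
    # Strongest evidence first, so the user can weigh both sides.
    pro.sort(key=lambda t: -t[1])
    con.sort(key=lambda t: t[1])
    return pro, con

pro, con = evidence_report(feature_names, contributions)
print("Evidence FOR the decision:", pro)
print("Evidence AGAINST the decision:", con)
```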
- …