Use and Communication of Probabilistic Forecasts
Probabilistic forecasts are becoming increasingly available. How should they
be used and communicated? What are the obstacles to their use in practice? I
review experience with five problems where probabilistic forecasting played an
important role. This leads me to identify five types of potential users: Low
Stakes Users, who don't need probabilistic forecasts; General Assessors, who
need an overall idea of the uncertainty in the forecast; Change Assessors, who
need to know if a change is out of line with expectations; Risk Avoiders, who
wish to limit the risk of an adverse outcome; and Decision Theorists, who
quantify their loss function and perform the decision-theoretic calculations.
This suggests that it is important to interact with users and to consider their
goals. Cognitive research tells us that calibration is important for trust
in probability forecasts, and that it is important to match the verbal
expression with the task. The cognitive load should be minimized, reducing the
probabilistic forecast to a single percentile if appropriate. Probabilities of
adverse events and percentiles of the predictive distribution of quantities of
interest seem often to be the best way to summarize probabilistic forecasts.
Formal decision theory has an important role, but in a limited range of
applications.
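The recommended summaries, percentiles of the predictive distribution and probabilities of adverse events, are straightforward to compute from an ensemble of forecast samples. A minimal sketch (the gamma-distributed forecast and the threshold of 15 are illustrative assumptions, not from the paper):

```python
import numpy as np

# Hypothetical ensemble of 1000 equally weighted forecast samples,
# e.g. draws from a probabilistic weather or demand model.
rng = np.random.default_rng(0)
forecast_samples = rng.gamma(shape=4.0, scale=2.5, size=1000)

# Percentile summary: a single percentile may be all a Risk Avoider
# needs, e.g. the 90th percentile as a "reasonable worst case".
p90 = np.percentile(forecast_samples, 90)

# Probability of an adverse event: the chance that the quantity of
# interest exceeds a (hypothetical) damage threshold.
threshold = 15.0
p_adverse = np.mean(forecast_samples > threshold)

print(f"90th percentile: {p90:.2f}")
print(f"P(value > {threshold}): {p_adverse:.3f}")
```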
Response Criterion Placement Modulates the Effects of Graded Alerting Systems on Human Performance and Learning in a Target Detection Task
Human operators can perform better in a signal detection task with an automated diagnostic aid than without one. This experiment aimed to determine whether any differences existed among graded aids (automated diagnostic aids that use a scale of confidence levels reflecting a spectrum of probabilistic information or uncertainty when making a judgment) in how well they enabled human detection performance, and whether a binary or a graded aid produced better learning. Participants performed a visual search framed as a medical decision-making task. Stimuli were arrays of random polygons ("cells") generated by distorting a prototype shape. The target was a shape more strongly distorted than the accompanying distracters. A target was present on half of the trials. Each participant performed the task with the assistance of a binary aid, one of three graded aids, or no aid. The aids' sensitivities were the same (d′ = 2); the difference between the aids lay in the placement of their decision criteria, which determines a tradeoff between the aid's predictive value and the frequency with which it makes a diagnosis. The graded aid with 90% reliability provided a judgment on the greatest number of trials, the graded aid with 94% reliability gave a judgment on fewer trials, and the graded aid with 96% reliability gave a judgment on the fewest trials. The binary aid, with 84% reliability, gave a judgment on every trial. All aids improved human detection performance, though the graded aids trended towards improving performance more than the binary aid did. Neither the binary nor the graded aids produced significantly better or worse learning than unaided performance: they did not significantly help learning, but neither did they worsen human detection performance relative to the unaided condition. These results imply that the decision boundaries of a graded alert might be set to encourage appropriate reliance on the aid and improve human detection performance, and indicate that employing either a graded or a binary automated aid may be beneficial to learning in a detection task.
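The reliability/frequency tradeoff described in this abstract falls directly out of equal-variance signal detection theory. The sketch below is a reconstruction, not the authors' code: it assumes a symmetric no-judgment band around an unbiased criterion, and with d′ = 2 it recovers figures close to those reported (84% reliability for the binary aid that always responds, rising towards 96% as the band widens and judgments become rarer).

```python
from scipy.stats import norm

D_PRIME = 2.0  # aid sensitivity, as reported in the experiment

def graded_aid(half_width):
    """Equal-variance SDT aid that withholds judgment whenever the
    evidence falls within +/- half_width of the neutral criterion.

    Returns (fraction of trials receiving a judgment, reliability given
    a judgment), assuming targets on half of all trials. half_width = 0
    reduces to the binary aid, which always responds.
    """
    c_lo = D_PRIME / 2 - half_width  # below this: "no target"
    c_hi = D_PRIME / 2 + half_width  # above this: "target"
    # Evidence ~ N(0, 1) on noise trials, N(d', 1) on target trials.
    p_judge = (0.5 * (norm.cdf(c_lo) + 1 - norm.cdf(c_hi))
               + 0.5 * (norm.cdf(c_lo - D_PRIME) + 1 - norm.cdf(c_hi - D_PRIME)))
    p_correct = 0.5 * (norm.cdf(c_lo) + 1 - norm.cdf(c_hi - D_PRIME))
    return p_judge, p_correct / p_judge

for w in (0.0, 0.5, 1.0):
    freq, rel = graded_aid(w)
    print(f"band +/-{w}: judgment on {freq:.0%} of trials, {rel:.0%} reliable")
```

Widening the band trades coverage for trustworthiness: the most conservative aid speaks on roughly half the trials but is right about 96% of the time when it does.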
An information assistant system for the prevention of tunnel vision in crisis management
In the crisis management environment, tunnel vision is a set of biases in decision makers' cognitive processes which often leads to incorrect understanding of the real crisis situation, biased perception of information, and improper decisions. The tunnel vision phenomenon is a consequence of both the challenges of the task and the natural limitations of human cognition. An information assistant system is proposed with the purpose of preventing tunnel vision. The system serves as a platform for monitoring the ongoing crisis event: all information passes through the system before it arrives at the user. The system enhances data quality, reduces data quantity, and presents the crisis information in a manner that prevents or repairs the user's cognitive overload. While working with such a system, the users (crisis managers) are expected to be more likely to stay aware of the actual situation, stay open-minded to possibilities, and make proper decisions.
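A schematic of the proposed mediation layer, purely illustrative (the stage names, scores, and thresholds below are hypothetical and not taken from the paper):

```python
from dataclasses import dataclass

@dataclass
class Report:
    text: str
    source_trust: float  # 0..1 credibility score (hypothetical)
    priority: float      # 0..1 decision relevance (hypothetical)

def assist(incoming: list[Report], max_items: int = 5) -> list[Report]:
    """Illustrative mediation layer: every report passes through here
    before reaching the crisis manager.

    Quality: drop low-credibility reports. Quantity: suppress duplicates
    and cap the number shown. Presentation: order by relevance so the
    most decision-critical items are seen first.
    """
    credible = [r for r in incoming if r.source_trust >= 0.5]
    seen, unique = set(), []
    for r in credible:  # crude duplicate suppression by exact text
        if r.text not in seen:
            seen.add(r.text)
            unique.append(r)
    unique.sort(key=lambda r: r.priority, reverse=True)
    return unique[:max_items]  # limit cognitive load
```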
Why Are People's Decisions Sometimes Worse with Computer Support?
In many applications of computerised decision support, a recognised source of undesired outcomes is operators' apparent over-reliance on automation. For instance, an operator may fail to react to a potentially dangerous situation because a computer fails to generate an alarm. However, the very use of terms like "over-reliance" betrays possible misunderstandings of these phenomena and their causes, which may lead to ineffective corrective action (e.g. training or procedures that do not counteract all the causes of the apparently "over-reliant" behaviour). We review relevant literature in the area of "automation bias" and describe the diverse mechanisms that may be involved in human errors when using computer support. We discuss these mechanisms, with reference to errors of omission when using "alerting systems", with the help of examples of novel counterintuitive findings we obtained from a case study in a health care application, as well as other examples from the literature.
Assessing the communication gap between AI models and healthcare professionals: explainability, utility and trust in AI-driven clinical decision-making
This paper contributes a pragmatic evaluation framework for explainable
Machine Learning (ML) models for clinical decision support. The study revealed
a more nuanced role for ML explanation models when these are pragmatically
embedded in the clinical context. Despite the general positive attitude of
healthcare professionals (HCPs) towards explanations as a safety and trust
mechanism, for a significant set of participants there were negative effects
associated with confirmation bias, accentuating model over-reliance and
increased effort to interact with the model. Also, contradicting one of its
main intended functions, standard explanatory models showed limited ability to
support a critical understanding of the limitations of the model. However, we
found significant new positive effects which reposition the role of
explanations within a clinical context: these include reduction of automation
bias, addressing ambiguous clinical cases (cases where HCPs were not certain
about their decision) and support of less experienced HCPs in the acquisition
of new domain knowledge.
When Does Uncertainty Matter?: Understanding the Impact of Predictive Uncertainty in ML Assisted Decision Making
As machine learning (ML) models are increasingly being employed to assist
human decision makers, it becomes critical to provide these decision makers
with relevant inputs which can help them decide if and how to incorporate model
predictions into their decision making. For instance, communicating the
uncertainty associated with model predictions could potentially be helpful in
this regard. However, there is little to no research that systematically
explores if and how conveying predictive uncertainty impacts decision making.
In this work, we carry out user studies to systematically assess how people
respond to different types of predictive uncertainty, i.e., posterior predictive
distributions with different shapes and variances, in the context of ML
assisted decision making. To the best of our knowledge, this work marks one of
the first attempts at studying this question. Our results demonstrate that
people are more likely to agree with a model prediction when they observe the
corresponding uncertainty associated with the prediction. This finding holds
regardless of the properties (shape or variance) of predictive uncertainty
(posterior predictive distribution), suggesting that uncertainty is an
effective tool for persuading humans to agree with model predictions.
Furthermore, we also find that other factors such as domain expertise and
familiarity with ML also play a role in determining how someone interprets and
incorporates predictive uncertainty into their decision making.
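The manipulated stimuli, posterior predictive distributions differing in shape and variance, are easy to reproduce in outline. A sketch under assumed distribution families (the normal/skew-normal choices and all parameters are illustrative, not the authors' materials):

```python
import numpy as np
from scipy import stats

# Illustrative posterior predictive densities around a similar point
# prediction, varying shape and variance as in the study's manipulation.
x = np.linspace(20, 80, 500)
conditions = {
    "symmetric, low variance":  stats.norm(loc=50, scale=3).pdf(x),
    "symmetric, high variance": stats.norm(loc=50, scale=10).pdf(x),
    "right-skewed":             stats.skewnorm(a=6, loc=44, scale=10).pdf(x),
    "left-skewed":              stats.skewnorm(a=-6, loc=56, scale=10).pdf(x),
}

for name, density in conditions.items():
    # Numerically invert each density's CDF to get a 90% central interval,
    # one candidate summary to show alongside the model's prediction.
    cdf = np.cumsum(density)
    cdf /= cdf[-1]
    lo, hi = x[np.searchsorted(cdf, 0.05)], x[np.searchsorted(cdf, 0.95)]
    print(f"{name:>26s}: 90% interval ~ [{lo:.1f}, {hi:.1f}]")
```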
AI-Driven Decision Support Systems in Management: Enhancing Strategic Planning and Execution
Artificial intelligence (AI) is transforming strategic decision-making processes across various industries. Organizations increasingly rely on AI-driven decision support systems that leverage massive amounts of data and real-time analytics to enable more informed planning and predictive capabilities. However, little research has focused on the integration and impact of such tools specifically within managerial strategy and execution contexts. This study conducts qualitative and quantitative analysis of the deployment of machine learning-based recommendation systems aimed at enhancing the strategic capabilities of management teams. Results indicate that AI decision tools led to improved analytic capacities, faster competitive response times, and reimagined vision planning, yet also posed transparency and trust challenges around advanced automation techniques. Findings provide novel insights into AI's emerging role in augmenting and extending higher-level organizational strategy design and enactment by key decision-makers and leaders. Future directions related to responsible development are discussed as adoption continues to accelerate.