
    Theoretical, Measured and Subjective Responsibility in Aided Decision Making

    When humans interact with intelligent systems, their causal responsibility for outcomes becomes equivocal. We analyze the descriptive abilities of a newly developed responsibility quantification model (ResQu) to predict actual human responsibility and perceptions of responsibility in interactions with intelligent systems. In two laboratory experiments, participants performed a classification task, aided by classification systems with different capabilities. We compared the predicted theoretical responsibility values to the actual measured responsibility participants took on and to their subjective rankings of responsibility. The model predictions were strongly correlated with both measured and subjective responsibility. A bias existed only when participants with poor classification capabilities relied less than optimally on a system with superior classification capabilities and assumed higher-than-optimal responsibility. The study implies that when humans interact with advanced intelligent systems whose capabilities greatly exceed their own, their comparative causal responsibility will be small, even if the human is formally assigned major roles. Simply putting a human into the loop does not assure that the human will meaningfully contribute to the outcomes. The results demonstrate the descriptive value of the ResQu model for predicting behavior and perceptions of responsibility by considering the characteristics of the human, the intelligent system, the environment, and some systematic behavioral biases. The ResQu model is a new quantitative method that can be used in system design and can guide policy and legal decisions regarding human responsibility in events involving intelligent systems.

    Simulated case management of home telemonitoring to assess the impact of different alert algorithms on work-load and clinical decisions

    © 2017 The Author(s). Background: Home telemonitoring (HTM) of chronic heart failure (HF) promises to improve care by giving timely indications when a patient's condition is worsening. Simple rules based on sudden weight change have been shown to generate many alerts with poor sensitivity. Trend alert algorithms and bio-impedance (a more sensitive marker of fluid change) should produce fewer false alerts and reduce workload. However, the effects of such approaches on the decisions made and the time spent reviewing alerts have not been compared. Methods: Using HTM data from an observational trial of 91 HF patients, a simulated telemonitoring station was created and used to present virtual caseloads to clinicians experienced with HF HTM systems. Clinicians were randomised to either a simple alert method (an increase of 2 kg in the past 3 days) or an advanced alert method (either a moving-average weight algorithm or a bio-impedance cumulative sum algorithm). Results: In total, 16 clinicians reviewed the caseloads, 8 randomised to the simple alert method and 8 to the advanced alert methods. Total time to review the caseloads was lower in the advanced arms than in the simple arm (80 ± 42 vs. 149 ± 82 min), but agreement on actions between clinicians was low (Fleiss kappa 0.33 and 0.31), and despite having high sensitivity, many alerts in the bio-impedance arm were not considered to need further action. Conclusion: Advanced alerting algorithms with higher specificity are likely to reduce the time spent by clinicians and increase the percentage of time spent on changes rated as most meaningful. Work is needed to present bio-impedance alerts in a manner that is intuitive for clinicians.
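    The two weight-based alert rules compared above can be sketched minimally. The simple rule (an increase of 2 kg over the past 3 days) is stated in the abstract; the moving-average window and its threshold below are hypothetical illustration values, not the trial's actual algorithm parameters, and daily measurements are assumed so that list indices stand for days.

```python
# Hedged sketch of the two weight-based alert styles from the study.
# The 2 kg / 3 day rule comes from the abstract; the moving-average
# window (7 days) and threshold (1.5 kg) are illustrative assumptions.

def simple_alert(weights, rise_kg=2.0, days=3):
    """Alert if weight rose by >= rise_kg over the last `days` days.

    `weights` is a list of daily weight measurements in kg, oldest first.
    """
    if len(weights) <= days:
        return False  # not enough history to evaluate the rule
    return weights[-1] - weights[-1 - days] >= rise_kg

def moving_average_alert(weights, window=7, rise_kg=1.5):
    """Alert if today's weight exceeds the trailing moving average by rise_kg.

    Window and threshold are hypothetical values for illustration only.
    """
    if len(weights) < window + 1:
        return False  # need `window` prior days plus today's reading
    baseline = sum(weights[-window - 1:-1]) / window
    return weights[-1] - baseline >= rise_kg
```

    A trend rule of this shape ignores a single-day spike that a naive day-over-day check would flag, which is the intuition behind the lower review workload reported for the advanced arms.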

    Participant training and its effect on actual retrospective timeframes

    When rating moods (e.g., "How do you feel at this moment?"), individuals employ lengthy timeframes that do not converge with the expected timeframe (Lecci & Wirth, 2006). A sample of participants (N = 1,096) was used to validate a method, referred to as participant training, that increases concordance between the expected and actual amount of time sampled in a commonly employed mood assessment instrument (the PANAS), as well as in terms used in mood-related research. Results indicate that exposure to other timeframes can help to reduce the variability in "moment" and "year" ratings, increase variability for "in general," and result in greater concordance between the expected and actual timeframes employed by participants. Furthermore, the study examines the effects of the variability in actual retrospective timeframes on the longstanding debate on the dimensionality of affect (e.g., Watson, 1988; Diener & Emmons, 1985; Warr, Barter, & Brownbridge, 1983; Russell & Carroll, 1999). Participant training does not affect the correlation between positive and negative affect; however, the terms themselves have a significant impact on the correlation. Implications of these findings are discussed.

    A Study of Realtime Summarization Metrics

    Unexpected news events, such as natural disasters or other human tragedies, create a large volume of dynamic text data from official news media as well as less formal social media. Automatic real-time text summarization has become an important tool for quickly transforming this overabundance of text into clear, useful information for end-users, including affected individuals, crisis responders, and interested third parties. Despite the importance of real-time summarization systems, their evaluation is not well understood, as classic methods for text summarization evaluation are inappropriate for real-time and streaming conditions. The TREC 2013-2015 Temporal Summarization (TREC-TS) track was one of the first evaluation campaigns to tackle the challenges of real-time summarization evaluation, introducing new metrics, a ground-truth generation methodology, and datasets. In this paper, we present a study of the TREC-TS track evaluation methodology, with the aim of documenting its design, analyzing its effectiveness, and identifying improvements and best practices for the evaluation of temporal summarization systems.

    CONSTITUENT DIMENSIONS OF CUSTOMER SATISFACTION: A STUDY OF NATIONALISED AND PRIVATE BANKS

    Customer satisfaction is an invaluable asset for modern organizations, providing an unmatched competitive edge. It helps in building long-term relationships as well as brand equity. The best approach to customer retention is to deliver a high level of customer satisfaction, which results in strong customer loyalty. Satisfaction, being a judgment that a product or service feature, or the product or service itself, provides a pleasurable level of consumption-related fulfillment, is dynamic in nature. It is the result of the interplay of a number of factors, which vary from one product/service category to another. The present study is aimed at exploring the determinant factors and hence developing dimensions of customer satisfaction for nationalised and private banks. A two-stage factor analysis was computed to arrive at the dimensions of customer satisfaction. The study revealed ten factors and five dimensions of customer satisfaction for nationalised and private banks, respectively. Keywords: customer satisfaction, private and public banks.
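    The factor-extraction step underlying an analysis like this can be sketched briefly. The study's actual questionnaire items, data, and full two-stage procedure are not reproduced here; the sketch only shows the common Kaiser (eigenvalue > 1) criterion for deciding how many factors to retain from a correlation matrix of item responses, on synthetic data.

```python
import numpy as np

# Illustrative sketch of the factor-retention step behind an exploratory
# factor analysis. The survey items and the paper's two-stage procedure
# are assumptions of the sketch, not taken from the study itself.

def retained_factors(X):
    """Count factors with eigenvalue > 1 (Kaiser criterion).

    X is a respondents-by-items array of survey ratings; factors are
    counted from the eigenvalues of the item correlation matrix.
    """
    R = np.corrcoef(X, rowvar=False)   # item-by-item correlation matrix
    eigvals = np.linalg.eigvalsh(R)    # eigenvalues in ascending order
    return int((eigvals > 1.0).sum())
```

    Applied to, say, Likert-scale ratings of bank services, the returned count suggests how many satisfaction dimensions to extract before rotation and interpretation.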