
    On Human Predictions with Explanations and Predictions of Machine Learning Models: A Case Study on Deception Detection

    Humans are the final decision makers in critical tasks that involve ethical and legal concerns, ranging from recidivism prediction, to medical diagnosis, to fighting fake news. Although machine learning models can sometimes achieve impressive performance in these tasks, the tasks are not amenable to full automation. To realize the potential of machine learning for improving human decisions, it is important to understand how assistance from machine learning models affects human performance and human agency. In this paper, we use deception detection as a testbed and investigate how we can harness explanations and predictions of machine learning models to improve human performance while retaining human agency. We propose a spectrum between full human agency and full automation, and develop varying levels of machine assistance along the spectrum that gradually increase the influence of machine predictions. We find that without showing predicted labels, explanations alone slightly improve human performance in the end task. In comparison, human performance is greatly improved by showing predicted labels (>20% relative improvement) and can be further improved by explicitly suggesting strong machine performance. Interestingly, when predicted labels are shown, explanations of machine predictions induce a similar level of accuracy as an explicit statement of strong machine performance. Our results demonstrate a tradeoff between human performance and human agency and show that explanations of machine predictions can moderate this tradeoff.
    Comment: 17 pages, 19 figures, in Proceedings of ACM FAT* 2019, dataset & demo available at https://deception.machineintheloop.co

    Voting Classifier for The Interactive Design with Deep Learning for Scene Theory

    Tool products play a pivotal role in assisting individuals across domains, from professional work to everyday tasks. Their success is determined not solely by functionality but also by the quality of the user experience they offer. Designing tool products that effectively engage users, enhance their productivity, and provide a seamless interaction experience has become a critical focus for researchers and practitioners in interaction design. Scene theory proposes that individuals perceive and interpret their surroundings as dynamic "scenes," in which environmental and situational factors shape their cognitive processes and behavior. This paper presents a novel approach to the interaction design of tool products that integrates scene theory, flow experience, Moth-Flame Optimization (MFO), cooperative game theory (CGT), and a voting deep learning classifier. Building on the principles of scene theory and flow experience, the study proposes a framework that accounts for contextual factors and aims to create a seamless, enjoyable user experience. The MFO algorithm, inspired by the navigation behavior of moths around flames, is employed to optimize design parameters and improve the efficiency of the interaction design process. CGT is integrated to model cooperative relationships between users and tool products, fostering collaborative and engaging experiences. Voting deep learning is employed to analyze user feedback and preferences, enabling personalized and adaptive design recommendations. Using the proposed CGT model, the paper investigates the impact of the approach on user engagement, task efficiency, and overall satisfaction. The findings contribute to interaction design by providing practical insights for creating tool products that align with users' cognitive processes, environmental constraints, flow-inducing experiences, and cooperative dynamics.
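    The MFO step the abstract refers to can be illustrated with a minimal sketch of generic Moth-Flame Optimization (Mirjalili's 2015 algorithm). This is not the paper's actual design-parameter setup: the objective function, bounds, population size, and iteration count below are illustrative assumptions.

    ```python
    import numpy as np

    def moth_flame_optimization(objective, dim, n_moths=20, n_iter=50,
                                lb=-10.0, ub=10.0, seed=0):
        """Minimal Moth-Flame Optimization sketch: moths spiral around
        'flames' (the best solutions found so far), and the number of
        flames shrinks over iterations to shift from exploration to
        exploitation."""
        rng = np.random.default_rng(seed)
        moths = rng.uniform(lb, ub, size=(n_moths, dim))
        b = 1.0  # logarithmic-spiral shape constant
        best_pos, best_val = None, np.inf
        for it in range(n_iter):
            fitness = np.array([objective(m) for m in moths])
            order = np.argsort(fitness)
            flames, flame_fit = moths[order].copy(), fitness[order]
            if flame_fit[0] < best_val:
                best_val = float(flame_fit[0])
                best_pos = flames[0].copy()
            # flame count decreases linearly from n_moths toward 1
            n_flames = int(round(n_moths - it * (n_moths - 1) / n_iter))
            a = -1.0 - it / n_iter  # spiral parameter t is drawn from [a, 1]
            for i in range(n_moths):
                f = flames[min(i, n_flames - 1)]  # surplus moths share the last flame
                dist = np.abs(f - moths[i])
                t = (a - 1) * rng.random(dim) + 1
                # spiral flight toward the assigned flame
                moths[i] = dist * np.exp(b * t) * np.cos(2 * np.pi * t) + f
                moths[i] = np.clip(moths[i], lb, ub)
        return best_pos, best_val

    # Usage: minimize a toy sphere function in 3 dimensions
    pos, val = moth_flame_optimization(lambda x: float(np.sum(x ** 2)), dim=3)
    ```

    In the paper's setting, the objective would instead score candidate interaction-design parameters; the spiral update and shrinking flame list are what MFO contributes regardless of the objective.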

    F-formation Detection: Individuating Free-standing Conversational Groups in Images

    Detection of groups of interacting people is a very interesting and useful task in many modern technologies, with application fields spanning from video surveillance to social robotics. In this paper we first provide a rigorous definition of group grounded in the social sciences: this allows us to specify many kinds of group so far neglected in the Computer Vision literature. On top of this taxonomy, we present a detailed state of the art on group detection algorithms. Then, as our main contribution, we present a brand new method for the automatic detection of groups in still images, based on a graph-cuts framework for clustering individuals; in particular, we are able to codify in a computational sense the sociological definition of F-formation, which makes it possible to encode a group using only proxemic information: the positions and orientations of people. We call the proposed method Graph-Cuts for F-formation (GCFF). We show that GCFF clearly outperforms all state-of-the-art methods on several accuracy measures (some of them brand new), also demonstrating strong robustness to noise and versatility in recognizing groups of varying cardinality.
    Comment: 32 pages, submitted to PLOS ONE
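    The proxemic intuition behind F-formations, namely that members of a conversational group face a shared "o-space" in front of them, can be sketched with a toy heuristic. This is not the paper's GCFF graph-cuts method: the `stride` and `radius` values and the greedy merging below are illustrative assumptions standing in for the actual clustering.

    ```python
    import numpy as np

    def ospace_centers(positions, orientations, stride=1.0):
        """Project each person's candidate o-space centre: a point at
        distance `stride` along their facing direction. People in the
        same F-formation should project to nearby centres."""
        positions = np.asarray(positions, dtype=float)
        offsets = np.stack([np.cos(orientations), np.sin(orientations)], axis=1)
        return positions + stride * offsets

    def group_by_centers(centers, radius=0.6):
        """Greedily merge people whose projected centres lie within
        `radius` of each other (a stand-in for the graph-cuts step)."""
        n = len(centers)
        labels = list(range(n))
        for i in range(n):
            for j in range(i + 1, n):
                if np.linalg.norm(centers[i] - centers[j]) < radius:
                    old, new = labels[j], labels[i]
                    labels = [new if l == old else l for l in labels]
        return labels

    # Two people facing each other ~2 m apart share an o-space;
    # a third person elsewhere, facing away, does not.
    pos = [(0.0, 0.0), (2.0, 0.0), (5.0, 5.0)]
    ori = [0.0, np.pi, np.pi / 2]  # radians; 0 = facing along +x
    centers = ospace_centers(pos, ori)
    print(group_by_centers(centers))  # → [0, 0, 2]
    ```

    The key design point, which GCFF formalizes properly, is that only positions and head/body orientations are needed: no appearance features, just where each person's attention is projected.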