7 research outputs found

    Scenario-based requirements elicitation for user-centric explainable AI

    Explainable Artificial Intelligence (XAI) develops technical explanation methods and enables interpretability for human stakeholders, clarifying why Artificial Intelligence (AI) and machine learning (ML) models produce certain predictions. However, these stakeholders' trust in AI models and explanations remains an issue, especially for domain experts, who are knowledgeable about their domain but not about the inner workings of AI. Social and user-centric XAI research holds that it is essential to understand stakeholders' requirements in order to provide explanations tailored to their needs and to enhance their trust in working with AI models. Scenario-based design and requirements elicitation can help bridge the gap between the social and operational aspects of stakeholders' work early, before the adoption of information systems, by identifying their real problems and practices and generating user requirements. Nevertheless, the adoption of scenarios in XAI is still rarely explored, especially in the fraud detection domain, where it could support experts who are about to work with AI models. We demonstrate the use of scenario-based requirements elicitation for XAI in a fraud detection context and develop scenarios together with experts in banking fraud. We discuss how those scenarios can be adopted to identify user or expert requirements for appropriate explanations in their daily operations and in decisions on reviewing fraudulent cases in banking. The generalizability of the scenarios for further adoption is validated through a systematic literature review in the domains of XAI and visual analytics for fraud detection.

    The Visual Object Tracking VOT2013 challenge results

    Visual tracking has attracted significant attention in the last few decades. The recent surge in the number of publications on tracking-related problems has made it almost impossible to follow developments in the field. One reason is the lack of commonly accepted annotated datasets and standardized evaluation protocols that would allow objective comparison of different tracking methods. To address this issue, the Visual Object Tracking (VOT) workshop was organized in conjunction with ICCV2013. Researchers from academia as well as industry were invited to participate in the first VOT2013 challenge, which aimed at single-object visual trackers that do not apply pre-learned models of object appearance (model-free). Presented here is the VOT2013 benchmark dataset for evaluation of single-object visual trackers, as well as the results obtained by the trackers competing in the challenge. In contrast to related attempts at tracker benchmarking, the dataset is labeled per frame with visual attributes that indicate occlusion, illumination change, motion change, size change and camera motion, offering a more systematic comparison of the trackers. Furthermore, we have designed an automated system for performing and evaluating the experiments. We present the evaluation protocol of the VOT2013 challenge and the results of a comparison of 27 trackers on the benchmark dataset. The dataset, the evaluation tools and the tracker rankings are publicly available from the challenge website (http://votchallenge.net).
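
    As a reading aid (not part of the abstract, and not the official VOT evaluation toolkit), the minimal sketch below illustrates how per-frame attribute labels of the kind described above could be used to summarize a tracker's region overlap per attribute. The attribute names follow the abstract; the function name, variable names and numbers are purely illustrative.

        # Minimal sketch: averaging a tracker's per-frame region overlap by
        # visual attribute. Illustrative only; not the VOT toolkit.
        from collections import defaultdict

        ATTRIBUTES = ["occlusion", "illumination_change", "motion_change",
                      "size_change", "camera_motion"]

        def mean_overlap_per_attribute(overlaps, frame_attributes):
            """overlaps[i] is the region overlap in frame i;
            frame_attributes[i] is the set of attributes labelled for frame i."""
            sums, counts = defaultdict(float), defaultdict(int)
            for overlap, attrs in zip(overlaps, frame_attributes):
                for attr in attrs:
                    sums[attr] += overlap
                    counts[attr] += 1
            return {attr: sums[attr] / counts[attr] for attr in counts}

        # Illustrative three-frame sequence.
        overlaps = [0.72, 0.55, 0.64]
        frame_attributes = [{"camera_motion"},
                            {"occlusion", "camera_motion"},
                            {"occlusion"}]
        print(mean_overlap_per_attribute(overlaps, frame_attributes))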

    The visual object tracking VOT2014 challenge results

    No full text
    The Visual Object Tracking challenge 2014, VOT2014, aims at comparing short-term single-object visual trackers that do not apply pre-learned models of object appearance. Results of 38 trackers are presented. The number of tested trackers makes VOT2014 the largest benchmark on short-term tracking to date. For each participating tracker, a short description is provided in the appendix. Features of the VOT2014 challenge that go beyond its VOT2013 predecessor are introduced: (i) a new VOT2014 dataset with full annotation of targets by rotated bounding boxes and per-frame attributes, (ii) extensions of the VOT2013 evaluation methodology, (iii) a new unit for tracking speed assessment that is less dependent on the hardware and (iv) the VOT2014 evaluation toolkit that significantly speeds up execution of experiments. The dataset, the evaluation kit as well as the results are publicly available at the challenge website (http://votchallenge.net).
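
    Again as an illustration rather than the authors' implementation, the following sketch shows one way region overlap between rotated bounding boxes, as used in the VOT2014 annotations, could be computed, assuming the third-party shapely package is available. The helper name and the example coordinates are hypothetical.

        # Minimal sketch: intersection-over-union of two rotated boxes given as
        # four (x, y) corners each. Assumes `pip install shapely`; not the
        # official VOT2014 evaluation toolkit.
        from shapely.geometry import Polygon

        def rotated_box_overlap(pred_corners, gt_corners):
            pred, gt = Polygon(pred_corners), Polygon(gt_corners)
            if not pred.is_valid or not gt.is_valid:
                return 0.0
            inter = pred.intersection(gt).area
            union = pred.union(gt).area
            return inter / union if union > 0 else 0.0

        # Example: an axis-aligned ground truth vs. a slightly rotated prediction.
        gt = [(0, 0), (4, 0), (4, 2), (0, 2)]
        pred = [(0.5, -0.2), (4.3, 0.8), (3.8, 2.7), (0.0, 1.7)]
        print(round(rotated_box_overlap(pred, gt), 3))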

    The Seventh Visual Object Tracking VOT2019 Challenge Results

    No full text
