
    Utilising Provenance to Enhance Social Computation


    A question of trust: can we build an evidence base to gain trust in systematic review automation technologies?

    Background: Although many aspects of systematic reviews use computational tools, systematic reviewers have been reluctant to adopt machine learning tools.
    Discussion: We suggest that the reasons for the slow adoption of machine learning tools into systematic reviews are multifactorial. We focus on the current absence of trust in automation, and on set-up challenges, as major barriers to adoption. It is important that reviews produced using automation tools are considered non-inferior or superior to current practice. However, this standard alone is unlikely to lead to widespread adoption. As with many technologies, it is important that reviewers see "others" in the review community using automation tools. Adoption will also be slow if the automation tools are not compatible with the workflows and tasks currently used to produce reviews. Many automation tools being developed for systematic reviews are framed as classification problems. Evidence that these tools are non-inferior or superior can therefore be presented using methods similar to diagnostic test evaluations, i.e., precision and recall compared with a human reviewer. However, the assessment of automation tools presents unique challenges for investigators and systematic reviewers, including the need to clarify which metrics are of interest to the systematic review community, and the documentation challenges peculiar to reproducible software experiments.
    Conclusion: We discuss these adoption barriers with the goal of giving tool developers guidance on how to design and report such evaluations, and giving end users a way to assess their validity. Further, we discuss approaches to formatting and announcing publicly available datasets suitable for assessing automation technologies and tools. Making these resources available will increase trust that tools are non-inferior or superior to current practice. Finally, we note that, even with evidence that automation tools are non-inferior or superior to current practice, substantial set-up challenges remain for mainstream integration of automation into the systematic review process.
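    Because the abstract frames screening automation as a diagnostic-test evaluation, the comparison can be made concrete in a few lines of Python. This is a minimal sketch, treating the human reviewer's include/exclude decisions as the reference standard; the label lists are hypothetical illustrations, not data from the paper.

        # Score a screening tool against a human reviewer, diagnostic-test style.
        def precision_recall(tool, reviewer):
            tp = sum(1 for t, r in zip(tool, reviewer) if t and r)
            fp = sum(1 for t, r in zip(tool, reviewer) if t and not r)
            fn = sum(1 for t, r in zip(tool, reviewer) if not t and r)
            precision = tp / (tp + fp) if (tp + fp) else 0.0
            recall = tp / (tp + fn) if (tp + fn) else 0.0
            return precision, recall

        # True = "include the study in the review", False = "exclude it".
        tool_labels     = [True, True, False, True, False, False, True, False]
        reviewer_labels = [True, False, False, True, False, True, True, False]
        print(precision_recall(tool_labels, reviewer_labels))  # (0.75, 0.75)

    High recall matters most here: a screening tool that misses studies a human would have included (false negatives) undermines the review's completeness.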

    User expectations of partial driving automation capabilities and their effect on information design preferences in the vehicle

    Partially automated vehicles present interface design challenges: the driver must remain alert in case the vehicle needs to hand back control at short notice, but without being exposed to cognitive overload. To date, little is known about drivers' expectations of partial driving automation and whether these affect the information they require inside the vehicle. Twenty-five participants were presented with five partially automated driving events in a driving simulator. After each event, a semi-structured interview was conducted. The interview data were coded and analysed using grounded theory. From the results, two groupings of driver expectations were identified: High Information Preference (HIP) and Low Information Preference (LIP) drivers; the information preferences differed between these two groups. LIP drivers did not want detailed information about the vehicle presented to them, but the definition of partial automation means that this kind of information is required for safe use. The results therefore suggest that careful thought about how information is presented is required if LIP drivers are to use partial driving automation safely. Conversely, HIP drivers wanted detailed information about the system's status and driving, and were found to be more willing to work with the partial automation and its current limitations. It was evident that drivers' expectations of the partial automation capability differed, and that this affected their information preferences. This study therefore suggests that HMI designers must account for these differing expectations and preferences to create a safe, usable system that works for everyone. [Abstract copyright: Copyright © 2019 The Authors. Published by Elsevier Ltd. All rights reserved.]
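    Purely as an illustration of the design implication, the sketch below shows one way an HMI might tailor message detail to the HIP/LIP groupings while still surfacing the safety-critical items that, per the abstract, even LIP drivers need. The message categories and the policy itself are our assumptions, not the study's.

        # Hypothetical message categories; not taken from the study.
        SAFETY_CRITICAL = {"takeover_request", "system_limit_reached"}
        DETAIL = {"system_status", "sensor_confidence", "planned_manoeuvre"}

        def messages_for(driver_group, available):
            # Safety-critical items go to everyone: the study notes that even
            # LIP drivers need this information to use partial automation safely.
            shown = available & SAFETY_CRITICAL
            if driver_group == "HIP":        # High Information Preference
                shown |= available & DETAIL  # add detailed status information
            return shown

        print(messages_for("LIP", SAFETY_CRITICAL | DETAIL))
        print(messages_for("HIP", SAFETY_CRITICAL | DETAIL))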

    ATM automation: guidance on human technology integration

    © Civil Aviation Authority 2016. Human interaction with technology and automation is a key area of interest to industry and safety regulators alike. In February 2014, a joint CAA/industry workshop considered perspectives on present and future implementation of advanced automated systems. The conclusion was that, whilst no additional regulation was necessary, guidance material for industry and regulators was required. Development of this guidance document was completed in 2015 by a working group consisting of the CAA, UK industry, academia and industry associations (see Appendix B). This enabled a collaborative approach to be taken, and regulatory, industry and workforce perspectives to be collectively considered and addressed. The processes used in developing this guidance included: review of the themes identified from the February 2014 CAA/industry workshop; review of academic papers, textbooks on automation, and incidents and accidents involving automation; identification of key safety issues associated with automated systems; analysis of current and emerging ATM regulatory requirements and guidance material; and presentation of emerging findings for critical review at UK and European aviation safety conferences. In December 2015, a workshop of senior management from project partner organisations reviewed the findings and proposals. EASA were briefed on the project before its commencement, and Eurocontrol contributed through membership of the working group.

    Intelligent and adaptive tutoring for active learning and training environments

    Active learning facilitated through interactive and adaptive learning environments differs substantially from traditional instructor-oriented, classroom-based teaching. We present a Web-based e-learning environment that integrates knowledge learning and skills training. How such tools are used most effectively is still an open question. We propose knowledge-level interaction and adaptive feedback and guidance as central features. We discuss these features and evaluate the effectiveness of this Web-based environment, focusing on different aspects of learning behaviour and tool usage. Motivation, acceptance of the approach, learning organisation and actual tool usage are aspects of behaviour that require different evaluation techniques to be used.
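    To make "adaptive feedback and guidance" concrete, here is a minimal sketch of an escalating feedback rule; the thresholds and messages are our own illustrative assumptions, not the environment's actual logic.

        # Escalate from hints to worked guidance as failed attempts accumulate.
        def feedback(attempts_failed, misconception=None):
            if attempts_failed == 0:
                return "Correct - proceed to the next exercise."
            if misconception is not None:
                # Knowledge-level interaction: respond to *what* was
                # misunderstood, not merely that the answer was wrong.
                return "Your answer suggests a misconception about %s." % misconception
            if attempts_failed == 1:
                return "Not quite - re-check the second step of your working."
            return "Here is a worked example of a similar problem."

        print(feedback(1))                          # generic hint
        print(feedback(2, "operator precedence"))   # targeted guidance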

    Driving automation: Learning from aviation about design philosophies

    Full vehicle automation is predicted to be on British roads by 2030 (Walker et al., 2001). However, experience in aviation gives us some cause for concern for the 'drive-by-wire' car (Stanton and Marsden, 1996). Two different philosophies have emerged in aviation for dealing with the human factor: hard versus soft automation, depending on whether the computer or the pilot has ultimate authority (Hughes and Dornheim, 1995). This paper speculates whether hard or soft automation provides the better solution for road vehicles, and considers an alternative design philosophy for vehicles of the future based on coordination and cooperation.
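    The hard/soft distinction can be stated compactly in code. The sketch below is our reading of the two philosophies, with hypothetical command names: under hard automation the computer can veto the human outside a machine-defined safe envelope, while under soft automation the human's command always stands.

        # Hypothetical arbitration between human and computer commands.
        def resolve_command(human_cmd, computer_cmd, philosophy, within_safe_envelope):
            if philosophy == "hard":
                # Computer has ultimate authority: the human's input is
                # honoured only inside the machine-defined safe envelope.
                return human_cmd if within_safe_envelope else computer_cmd
            # "soft": human has ultimate authority; the computer advises
            # and warns, but the human's command always stands.
            return human_cmd

        print(resolve_command("steer_left", "hold_lane", "hard", False))  # hold_lane
        print(resolve_command("steer_left", "hold_lane", "soft", False))  # steer_left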

    Mixed Initiative Systems for Human-Swarm Interaction: Opportunities and Challenges

    Human-swarm interaction (HSI) involves a number of human factors that shape human behaviour throughout the interaction. As the technologies used within HSI advance, it becomes more tempting to increase the level of swarm autonomy in order to reduce the workload on humans. Yet the prospective negative effects of high levels of autonomy on human situational awareness can hinder this process. Flexible autonomy aims to trade off these effects by changing the level of autonomy within the interaction when required, with mixed initiative combining human preferences and automation recommendations to select an appropriate level of autonomy at a given point in time. However, the effective implementation of mixed-initiative systems raises fundamental questions: how to combine human preferences and automation recommendations, how to realise the selected level of autonomy, and what the future impacts on the human's cognitive states are. We explore open challenges that hamper the development of effective flexible autonomy. We then highlight the potential benefits of using system modelling techniques in HSI by illustrating how they give HSI designers an opportunity to evaluate different strategies for assessing the state of the mission and for adapting the level of autonomy within the interaction to maximise mission success metrics.
    Comment: Author version, accepted at the 2018 IEEE Annual Systems Modelling Conference, Canberra, Australia.
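    One way to picture a mixed-initiative rule is sketched below: blend the human's preferred level of autonomy (LOA) with the automation's recommendation, and bound the result when estimated situational awareness is low. The equal weighting and the cap are our assumptions, not the paper's design.

        # Hypothetical mixed-initiative selection of a level of autonomy (LOA).
        def select_loa(human_pref, automation_rec, situational_awareness, max_loa=5):
            blended = round(0.5 * human_pref + 0.5 * automation_rec)
            if situational_awareness < 0.4:
                # High autonomy erodes awareness further, so cap the LOA
                # when the operator's awareness estimate is already low.
                blended = min(blended, 2)
            return max(1, min(blended, max_loa))

        print(select_loa(4, 5, situational_awareness=0.8))  # 4
        print(select_loa(4, 5, situational_awareness=0.3))  # 2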

    A proposed psychological model of driving automation

    This paper considers psychological variables pertinent to driving automation. Driving with automated systems is likely to have a major impact on drivers, and a multiplicity of factors needs to be taken into account. A systems analysis of the driver, vehicle and automation served as the basis for eliciting psychological factors. The main variables considered were: feedback, locus of control, mental workload, driver stress, situational awareness and mental representations. Anticipating the effects on the driver brought about by vehicle automation could lead to improved design strategies. Based on research evidence in the literature, the psychological factors were assembled into a model for further investigation.
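    Purely to fix ideas, the model's variables can be written down as a record; the paper proposes a conceptual model rather than an implementation, so the types and scales below are our own illustrative choices.

        from dataclasses import dataclass

        @dataclass
        class DriverState:                 # illustrative rendering of the model
            feedback_quality: float        # how well automation state is conveyed
            locus_of_control: str          # "internal" or "external"
            mental_workload: float         # 0 (underload) .. 1 (overload)
            driver_stress: float
            situational_awareness: float
            mental_model_accuracy: float   # fit of the driver's mental representation

        print(DriverState(0.7, "internal", 0.4, 0.2, 0.8, 0.6))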

    Misplaced Trust: Measuring the Interference of Machine Learning in Human Decision-Making

    ML decision-aid systems are increasingly common on the web, but their successful integration relies on people trusting them appropriately: users should rely on the system to fill gaps in their own ability, while recognising signals that the system might be incorrect. We measured how people's trust in ML recommendations differs by expertise and with more system information, through a task-based study of 175 adults. We used two tasks that are difficult for humans: comparing large crowd sizes and identifying similar-looking animals. Our results provide three key insights: (1) people trust incorrect ML recommendations for tasks that they perform correctly the majority of the time, even if they have high prior knowledge about ML or are given information indicating the system is not confident in its prediction; (2) four different types of system information all increased people's trust in recommendations; and (3) math and logic skills may be as important as knowledge about ML for decision-makers working with ML recommendations.
    Comment: 10 pages.
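    The headline effect can be quantified in a few lines: the rate at which participants follow recommendations that are in fact wrong. The records below are toy data, and the field layout is our assumption rather than the study's.

        # Each trial: (followed the ML recommendation?, recommendation correct?)
        trials = [
            (True, False), (True, True), (False, False), (True, False), (True, True),
        ]

        followed_bad = sum(1 for followed, ok in trials if followed and not ok)
        bad_recs = sum(1 for _, ok in trials if not ok)
        print("Followed incorrect recommendations %.0f%% of the time"
              % (100 * followed_bad / bad_recs))  # 67%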