
    Absence of DOA Effect but No Proper Test of the Lumberjack Effect

    Objective: The aim was to evaluate the relevance of the critique offered by Jamieson and Skraaning (2019) regarding the applicability of the lumberjack effect of human–automation interaction to complex real-world settings. Background: The lumberjack effect, based upon a meta-analysis, identifies the consequences of a higher degree of automation: improved performance and reduced workload when automation functions as intended, but greater performance degradation, mediated by a loss of situation awareness (SA), when automation fails. Jamieson and Skraaning provide data from a process control scenario that they assert contradicts the effect. Approach: We analyzed key aspects of their simulation, measures, and results, which we argue limit the strength of their conclusion that the lumberjack effect is not applicable to complex real-world systems. Results: Our analysis revealed several limitations: an inappropriate choice of automation, the lack of a routine performance measure, subjective operator measures that in fact supported the lumberjack effect, an inappropriate assessment of SA, and possibly limited statistical power. Conclusion: We regard these limitations as reasons to temper the authors' strong conclusion that the lumberjack effect has no applicability to complex environments. Their findings should instead serve as an impetus for further research on human–automation interaction in these domains. Applications: The collective findings of both Jamieson and Skraaning and our study can inform system designers and users in deciding upon the appropriate level of automation to deploy. Peer Reviewed

    INDUSTRY 4.0: SOCIAL CHALLENGES AND RISKS

    Industry 4.0 is a term first introduced by the German government during the Hannover Messe fair in 2011, when it launched an initiative to support German industry in tackling future challenges. It refers to the 4th industrial revolution, in which disruptive digital technologies, such as the Internet of Things (IoT), Internet of Everything (IoE), robotics, virtual reality (VR), and artificial intelligence (AI), are impacting industrial production. The new industrial paradigms of Industry 4.0 demand a socio-technical evolution of the human role in production systems, in which all working activities of the value chain will be performed with smart approaches. However, the automation of processes can have unpredictable effects. Nowadays, in a smart factory, the role of human operators is often only to control and supervise the automated processes. This new condition of workers brings forth a paradox: malfunctions or irregularities in the automated production process are rare but challenging. This article discusses the challenges and risks that the 4th industrial revolution is bringing to society. It introduces the concept of the Irony of Automation, which propounds that the more reliable an automated system, the less human operators have to do and, consequently, the less attention they pay to the system while it is operating. The authors go on to discuss the human-centered approach to automation, whose purpose is not necessarily to automate previously manual functions but, rather, to enhance user effectiveness and reduce errors.

    Buzz or Beep? How Mode of Alert Influences Driver Takeover Following Automation Failure

    Highly automated vehicles require drivers to remain aware enough to take over during critical events. Driver distraction is a key factor that prevents drivers from reacting adequately, and thus there is a need for an alert that helps drivers regain situational awareness and act quickly and successfully should a critical event arise. This study examines two aspects of alerts that could help facilitate driver takeover: mode (auditory and tactile) and direction (towards and away). Auditory alerts appear to be somewhat more effective than tactile alerts, though both modes produce significantly faster reaction times than no alert. Alerts moving towards the driver also appear to be more effective than alerts moving away from the driver. Future research should examine how multimodal alerts differ from single-mode alerts, and whether higher-fidelity alerts influence takeover times. Masters Thesis, Human Systems Engineering, 201

    Assessing the Value of Transparency in Recommender Systems: An End-User Perspective

    Recommender systems, especially those built on machine learning, are increasing in popularity, as well as in complexity and scope. Systems that cannot explain their reasoning to end-users risk losing users' trust and failing to achieve acceptance. Users demand interfaces that afford them insight into the system's internal workings, allowing them to build appropriate mental models and calibrated trust. Building interfaces that provide this level of transparency, however, is a significant design challenge, with many competing design features and little empirical research to guide implementation. We investigated how end-users of recommender systems value different categories of information when deciding what to do with computer-generated recommendations in contexts involving high risk to themselves or others. The findings will inform the future design of decision support in high-criticality contexts.

    Testing the Lumberjack Analogy: Automation, Situational Awareness, and Mental Workload

    This study examines the effects of automation on the human user of that automation. Automation has been shown to produce a variety of benefits in terms of performance and reduced workload, but research in this area indicates that these benefits may come at the cost of situational awareness. This loss of situational awareness is thought to lead to "out-of-the-loop" performance effects. One way this set of effects has been explained is through the "lumberjack" analogy, which suggests these effects are related to the degree of automation and to automation failure. This study recreates the effects of automation on mental workload, performance, and situational awareness by altering the characteristics of automation in a UAV supervisory control environment; RESCHU was chosen because of its complexity and the ability to manipulate levels of control within the task. We then discuss whether the observed effects align with the predictions of the lumberjack analogy. Participants were assigned to one of two automation reliability groups, routine or failure, and all participants experienced all three degrees of automation: manual/low, medium, and high. Scores for mental workload, situational awareness, and performance were compared across groups and conditions. Results indicated differences in performance for both degree of automation and reliability, but no interaction. There was also a main effect of degree of automation on raw NASA-TLX scores, with a few main effects reported for individual subscales.

    Into the Black Box: Designing for Transparency in Artificial Intelligence

    Indiana University-Purdue University Indianapolis (IUPUI)
    The rapid infusion of artificial intelligence into everyday technologies means that consumers are likely to interact with intelligent systems that provide suggestions and recommendations on a daily basis in the very near future. While these technologies promise much, their current low transparency creates high potential to confuse end-users, limiting their market viability. While efforts are underway to make machine learning models more transparent, HCI currently lacks an understanding of how model-generated explanations should best translate into the practicalities of system design. To address this gap, my research took a pragmatic approach to improving system transparency for end-users. Through a series of three studies, I investigated the need for and value of transparency to end-users, and explored methods to improve system designs to achieve greater transparency in intelligent systems offering recommendations. My research produced a summarized taxonomy that outlines a variety of motivations for why users ask questions of intelligent systems, useful for considering the type and category of information users might appreciate when interacting with AI-based recommendations. I also developed a categorization of explanation types, known as explanation vectors, organized into groups that correspond to user knowledge goals. Explanation vectors give system designers options for delivering explanations of system processes beyond basic explainability. I developed a detailed user typology, a four-factor categorization of the predominant attitudes and opinion schemes of everyday users interacting with AI-based recommendations, useful for understanding the range of user sentiment towards AI-based recommender features and possibly for tailoring interface design by user type. Lastly, I developed and tested an evaluation method known as the System Transparency Evaluation Method (STEv), which allows real-world systems and prototypes to be evaluated and improved through a low-cost query method. The results of this dissertation offer concrete direction to interaction designers as to how these findings might manifest in the design of interfaces that are more transparent to end users. These studies provide a framework and methodology complementary to existing HCI evaluation methods, and lay the groundwork upon which other research into improving system transparency might build.

    Artificial intelligence (AI): multidisciplinary perspectives on emerging challenges, opportunities, and agenda for research and practice

    As far back as the industrial revolution, great leaps in technical innovation succeeded in transforming numerous manual tasks and processes that had been in existence for decades, where humans had reached the limits of physical capacity. Artificial Intelligence (AI) offers the same transformative potential for the augmentation, and potential replacement, of human tasks and activities within a wide range of industrial, intellectual, and social applications. The pace of change in this new AI technological age is staggering, with new breakthroughs in algorithmic machine learning and autonomous decision making engendering new opportunities for continued innovation. The impact of AI is significant, with industries ranging from finance, retail, and healthcare to manufacturing, supply chain, and logistics all set to be disrupted by the onset of AI technologies. The study brings together the collective insight of a number of leading expert contributors to highlight the significant opportunities, challenges, and potential research agenda posed by the rapid emergence of AI across a number of domains: technological, business and management, science and technology, and government and public sector. The research offers significant and timely insight into AI technology and its impact on the future of industry and society in general.