
    Active learning based laboratory towards engineering education 4.0

    Universities play an essential role in ensuring knowledge and the development of competencies during the current fourth industrial revolution, known as Industry 4.0. Industry 4.0 promotes a set of digital technologies that enable the convergence of information technology and operational technology towards smarter factories. Within this new framework, multiple initiatives are being carried out worldwide in response to this evolution, particularly from the engineering education point of view. In this regard, this paper introduces the initiative being carried out at the Technical University of Catalonia, Spain, called the Industry 4.0 Technologies Laboratory, I4Tech Lab. The I4Tech laboratory represents a technological environment for the academic, research and industrial promotion of related technologies. First, this work discusses some of the main aspects considered in the definition of so-called engineering education 4.0. Next, the proposed laboratory architecture, its objectives and the technologies considered are explained. Finally, the basis of the proposed academic method, supported by an active learning approach, is presented.

    Impersonal efficiency and the dangers of a fully automated securities exchange

    This report identifies impersonal efficiency as a driver of market automation during the past four decades, and speculates about the future problems it might pose. The ideology of impersonal efficiency is rooted in a mistrust of financial intermediaries such as floor brokers and specialists. Impersonal efficiency has guided the development of market automation towards transparency and impersonality, at the expense of human trading floors. The result has been an erosion of the informal norms and human judgment that characterize less anonymous markets. We call impersonal efficiency an ideology because we do not think that impersonal markets are always superior to markets built on social ties. This report traces the historical origins of this ideology, considers the problems it has already created in the Flash Crash of 2010, and asks what potential risks it might pose in the future.

    On Human Predictions with Explanations and Predictions of Machine Learning Models: A Case Study on Deception Detection

    Humans are the final decision makers in critical tasks that involve ethical and legal concerns, ranging from recidivism prediction, to medical diagnosis, to fighting against fake news. Although machine learning models can sometimes achieve impressive performance in these tasks, these tasks are not amenable to full automation. To realize the potential of machine learning for improving human decisions, it is important to understand how assistance from machine learning models affects human performance and human agency. In this paper, we use deception detection as a testbed and investigate how we can harness explanations and predictions of machine learning models to improve human performance while retaining human agency. We propose a spectrum between full human agency and full automation, and develop varying levels of machine assistance along the spectrum that gradually increase the influence of machine predictions. We find that without showing predicted labels, explanations alone slightly improve human performance in the end task. In comparison, human performance is greatly improved by showing predicted labels (>20% relative improvement) and can be further improved by explicitly suggesting strong machine performance. Interestingly, when predicted labels are shown, explanations of machine predictions induce a similar level of accuracy as an explicit statement of strong machine performance. Our results demonstrate a tradeoff between human performance and human agency and show that explanations of machine predictions can moderate this tradeoff.
    Comment: 17 pages, 19 figures, in Proceedings of ACM FAT* 2019, dataset & demo available at https://deception.machineintheloop.co
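
    To make the assistance spectrum described above concrete, the following minimal Python sketch shows one way machine influence could be increased in steps: from explanations alone, to predicted labels, to labels paired with a statement of model accuracy. It is an illustration only, not the authors' implementation; the Assistance levels, the compose_display helper, and all parameter values are hypothetical.

        from enum import IntEnum

        class Assistance(IntEnum):
            # Illustrative rungs between full human agency and full automation.
            HUMAN_ONLY = 0        # no machine input shown
            EXPLANATION = 1       # model-salient words highlighted, no label
            LABEL = 2             # predicted label also shown
            LABEL_PLUS_CLAIM = 3  # label plus an explicit accuracy statement

        def compose_display(level, text, label, salient_words, accuracy):
            # Build what the human reviewer sees at a given assistance level.
            display = text
            if level >= Assistance.EXPLANATION:
                for word in salient_words:  # crude highlight of model evidence
                    display = display.replace(word, word.upper())
            if level >= Assistance.LABEL:
                display += "\nModel prediction: " + label
            if level >= Assistance.LABEL_PLUS_CLAIM:
                display += "\nThis model is correct about {:.0%} of the time.".format(accuracy)
            return display

        # Example: show the predicted label but make no accuracy claim.
        print(compose_display(Assistance.LABEL, "i never left my hotel room",
                              "deceptive", ["never"], 0.87))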

    Misplaced Trust: Measuring the Interference of Machine Learning in Human Decision-Making

    ML decision-aid systems are increasingly common on the web, but their successful integration relies on people trusting them appropriately: they should use the system to fill in gaps in their ability, but recognize signals that the system might be incorrect. We measured how people's trust in ML recommendations differs by expertise and with more system information through a task-based study of 175 adults. We used two tasks that are difficult for humans: comparing large crowd sizes and identifying similar-looking animals. Our results provide three key insights: (1) people trust incorrect ML recommendations for tasks that they perform correctly the majority of the time, even if they have high prior knowledge about ML or are given information indicating the system is not confident in its prediction; (2) four different types of system information all increased people's trust in recommendations; and (3) math and logic skills may be as important as ML knowledge for decision-makers working with ML recommendations.
    Comment: 10 pages