
    Conversations on Empathy

    In the aftermath of a global pandemic, amidst new and ongoing wars, genocide, inequality, and staggering ecological collapse, some in the public and political arena have argued that we are in desperate need of greater empathy — be this with our neighbours, refugees, war victims, the vulnerable, or disappearing animal and plant species. This interdisciplinary volume asks the crucial questions: How does a better understanding of empathy contribute, if at all, to our understanding of others? How is it implicated in the ways we perceive, understand and constitute others as subjects? Conversations on Empathy examines how empathy might be enacted and experienced either as a way to highlight forms of otherness or, instead, to overcome what might otherwise appear to be irreducible differences. It explores the ways in which empathy enables us to understand, imagine and create sameness and otherness in our everyday intersubjective encounters, focusing on a varied range of "radical others" – others who are perceived as being dramatically different from oneself. With a focus on the importance of empathy for understanding difference, the book contends that the role of empathy is critical, now more than ever, for thinking about local and global challenges of interconnectedness, care and justice.

    Towards Explainable Visual Anomaly Detection

    Anomaly detection and localization of visual data, including images and videos, are of great significance in both machine learning academia and applied real-world scenarios. Despite the rapid development of visual anomaly detection techniques in recent years, interpretations of these black-box models and reasonable explanations of why anomalies can be distinguished remain scarce. This paper provides the first survey concentrated on explainable visual anomaly detection methods. We first introduce the basic background of image-level and video-level anomaly detection, followed by the current explainable approaches for visual anomaly detection. Then, as the main content of this survey, a comprehensive and exhaustive literature review of explainable anomaly detection methods for both images and videos is presented. Finally, we discuss several promising future directions and open problems to explore on the explainability of visual anomaly detection.
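One family of explainable approaches such a survey typically covers is reconstruction-based anomaly localization: a model trained only on normal data reconstructs anomalous regions poorly, so the per-pixel reconstruction error serves both as a detection score and as a visual explanation of *where* the anomaly is. A minimal sketch of that idea, with a synthetic image standing in for a trained model's reconstruction (all names and the threshold are illustrative assumptions, not from the paper):

```python
import numpy as np

def anomaly_heatmap(image, reconstruction, threshold=0.5):
    """Per-pixel reconstruction error as an explanation map.

    Regions the model fails to reconstruct are flagged as anomalous,
    which both detects (score) and localizes (mask) the anomaly."""
    error = np.abs(image - reconstruction)   # pixel-wise error heatmap
    mask = error > threshold                 # binary localization mask
    score = float(error.max())               # image-level anomaly score
    return error, mask, score

# Toy example: the "reconstruction" misses a bright 2x2 patch (the anomaly).
normal = np.zeros((8, 8))
anomalous = normal.copy()
anomalous[2:4, 2:4] = 1.0                    # injected anomaly
error, mask, score = anomaly_heatmap(anomalous, normal)
```

In a real system the heatmap `error` would be overlaid on the input image, giving an inspectable reason for the anomaly decision rather than a bare score.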

    Using a Novel Hybrid Krill Herd and Bat based Recurrent Replica to Estimate the Sentiment Values of Twitter based Political Data

    Big data now underpins many real-world applications. Twitter is a major social network and a rich source of big data reflecting political information. However, sentiment analysis of such big data for opinion mining is challenging because the information is complex. In this approach, Twitter-based political datasets are taken as input. Sentiment analysis of Twitter-based political multilingual datasets, such as Hindi and English, is particularly difficult because of the complicated data. Therefore, this paper introduces a novel Hybrid Krill Herd and Bat-based Recurrent Replica (HKHBRR) to evaluate the sentiment values of Twitter-based political data. Here, the fitness functions of the krill herd and bat optimization models are initialized in the dense layer to enhance accuracy, precision, etc., and also to reduce the error rate. The collected Twitter-based political datasets are used to train the proposed approach, and the deep learning technique is implemented in the Python framework. The outcomes of the developed model are compared with existing techniques and attain the best results: 98.68% accuracy and a 0.5% error rate.
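The abstract does not specify the hybrid optimizer's update equations, but the core idea it describes — a population-based metaheuristic whose fitness function scores classifier quality, used to tune the model's dense layer — can be sketched in plain NumPy. Everything below (the toy features, the linear scorer, the simplified move-toward-best update) is an illustrative stand-in for the actual krill herd/bat hybrid, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy lexicon-score features: "positive" tweets score high, "negative" low.
X = rng.normal(loc=np.repeat([1.0, -1.0], 50), scale=0.5).reshape(-1, 1)
y = np.repeat([1, 0], 50)

def fitness(w):
    """Classification accuracy of a linear scorer — the quantity a
    krill-herd/bat-style search would maximize for the dense layer."""
    preds = (X @ w[:1] + w[1] > 0).astype(int)
    return (preds == y).mean()

# Simplified population-based search (stand-in for the hybrid optimizer).
pop = rng.normal(size=(20, 2))               # 20 candidate (weight, bias) pairs
for _ in range(30):
    scores = np.array([fitness(w) for w in pop])
    best = pop[scores.argmax()]
    # Move individuals toward the best solution with random perturbation,
    # loosely mimicking krill induced motion / bat echolocation steps.
    pop = pop + 0.5 * (best - pop) + 0.1 * rng.normal(size=pop.shape)

best_acc = max(fitness(w) for w in pop)
```

The real system would replace the linear scorer with a recurrent network and the toy features with embedded multilingual tweets; the optimizer's role — searching weight/hyperparameter space by fitness — is the same.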

    Systematic Fault Injection Scenario Generation for the Safety Monitoring of the Autonomous Vehicle

    Department of Mechanical Engineering
    The Object and Event Detection and Response (OEDR) assessment of Automated Vehicles (AVs) must be thoroughly conducted on the entire Operational Design Domain (ODD) to prevent any potential safety risk caused by corner cases. In response to these challenges, AVs must be tested over hundreds of millions of kilometers before deployment to demonstrate their OEDR capabilities. However, claiming safety through years of testing on the entire ODD is not practically sound. Therefore, many studies have addressed this problem, focusing on efficiently and effectively finding corner cases within a high-fidelity simulation environment. In particular, one of the key OEDR functionalities is a collision risk assessment system that alarms the driver about an impending collision in advance. In the AV ODD context, collision risk assessment confronts challenging situations such as incorrect sensor information and unexpected algorithmic errors derived from uncertain environments (weather, traffic flow, road conditions, obstacles). Whereas the widely employed collision risk assessment methods rely on first principles, e.g., Time-To-Collision (TTC), the aforementioned situations cannot be properly assessed without appropriate scene understanding of each situation. To this end, AI-based research that leverages previous experience and sensor information (especially camera images) to assess collision risk through visual cues has developed in recent years. Inspired by these research trends, this paper aims to develop: 1) systematic corner case generation using scenario-based falsification simulation, and 2) an AI-based safety monitoring system applicable in complex driving scenarios. The implemented simulation is shown to competently find corner case scenarios, through which the developed system is validated as a usable alternative to an existing collision risk indicator in complex AV driving scenarios.
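The first-principles baseline the thesis contrasts with, Time-To-Collision, is simply the current gap to the lead vehicle divided by the closing speed, assuming constant speeds. A minimal sketch (function name and the infinity convention for non-closing cases are my own choices, not from the thesis):

```python
def time_to_collision(gap_m, ego_speed_mps, lead_speed_mps):
    """First-principles collision risk indicator: TTC = gap / closing speed.

    Returns float('inf') when the ego vehicle is not closing the gap,
    i.e. no collision is predicted under the constant-speed assumption."""
    closing_speed = ego_speed_mps - lead_speed_mps
    if closing_speed <= 0:
        return float("inf")
    return gap_m / closing_speed

# Ego at 20 m/s, lead at 10 m/s, 50 m apart: collision predicted in 5 s.
ttc = time_to_collision(50.0, 20.0, 10.0)
```

The limitation the thesis points to is visible here: TTC sees only kinematics, so scenarios involving degraded sensors, occlusions, or unusual road conditions look identical to benign ones — which is what motivates the learned, vision-based risk monitor.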

    Modular lifelong machine learning

    Deep learning has drastically improved the state-of-the-art in many important fields, including computer vision and natural language processing (LeCun et al., 2015). However, it is expensive to train a deep neural network on a machine learning problem. The overall training cost further increases when one wants to solve additional problems. Lifelong machine learning (LML) develops algorithms that aim to efficiently learn to solve a sequence of problems, which become available one at a time. New problems are solved with fewer resources by transferring previously learned knowledge. At the same time, an LML algorithm needs to retain good performance on all encountered problems, thus avoiding catastrophic forgetting. Current approaches do not possess all the desired properties of an LML algorithm. First, they primarily focus on preventing catastrophic forgetting (Diaz-Rodriguez et al., 2018; Delange et al., 2021). As a result, they neglect some knowledge transfer properties. Furthermore, they assume that all problems in a sequence share the same input space. Finally, scaling these methods to a large sequence of problems remains a challenge. Modular approaches to deep learning decompose a deep neural network into sub-networks, referred to as modules. Each module can then be trained to perform an atomic transformation, specialised in processing a distinct subset of inputs. This modular approach to storing knowledge makes it easy to reuse only the subset of modules which are useful for the task at hand. This thesis introduces a line of research which demonstrates the merits of a modular approach to lifelong machine learning, and its ability to address the aforementioned shortcomings of other methods. Compared to previous work, we show that a modular approach can be used to achieve more LML properties than previously demonstrated. Furthermore, we develop tools which allow modular LML algorithms to scale in order to retain said properties on longer sequences of problems.
    First, we introduce HOUDINI, a neurosymbolic framework for modular LML. HOUDINI represents modular deep neural networks as functional programs and accumulates a library of pre-trained modules over a sequence of problems. Given a new problem, we use program synthesis to select a suitable neural architecture, as well as a high-performing combination of pre-trained and new modules. We show that our approach has most of the properties desired of an LML algorithm. Notably, it can perform forward transfer, avoid negative transfer and prevent catastrophic forgetting, even across problems with disparate input domains and problems which require different neural architectures. Second, we produce a modular LML algorithm which retains the properties of HOUDINI but can also scale to longer sequences of problems. To this end, we fix the choice of a neural architecture and introduce a probabilistic search framework, PICLE, for searching through different module combinations. To apply PICLE, we introduce two probabilistic models over neural modules which allow us to efficiently identify promising module combinations. Third, we phrase the search over module combinations in modular LML as black-box optimisation, which allows one to make use of methods from the setting of hyperparameter optimisation (HPO). We then develop a new HPO method which marries a multi-fidelity approach with model-based optimisation. We demonstrate that this leads to improvement in anytime performance in the HPO setting and discuss how this can in turn be used to augment modular LML methods. Overall, this thesis identifies a number of important LML properties, which have not all been attained in past methods, and presents an LML algorithm which can achieve all of them, apart from backward transfer.
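The library-of-modules idea behind HOUDINI can be caricatured in a few lines: keep previously trained modules in a library, and for a new problem search over compositions of library entries until one fits. A deliberately tiny stand-in — plain functions instead of frozen sub-networks, brute-force enumeration instead of typed program synthesis; all names are illustrative:

```python
from itertools import product

# A library of pre-trained "modules" (stand-ins for frozen sub-networks).
library = {
    "double": lambda x: 2 * x,
    "inc":    lambda x: x + 1,
    "neg":    lambda x: -x,
}

def search_composition(examples, depth=2):
    """Brute-force stand-in for program synthesis: try every pipeline of
    `depth` library modules and return the first that fits the examples."""
    for names in product(library, repeat=depth):
        def pipeline(x, names=names):
            for n in names:
                x = library[n](x)
            return x
        if all(pipeline(x) == y for x, y in examples):
            return names
    return None

# New problem: f(x) = 2x + 1 — solvable by reusing existing modules only.
found = search_composition([(0, 1), (3, 7)])
```

The lifelong-learning benefit shows even in this toy: the new problem is solved without training anything new, and because reused modules are never modified, performance on earlier problems cannot degrade — the modular route around catastrophic forgetting.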

    Transfer Success on the Linda Problem: A Re-Examination Using Dual Process Theory, Learning Material Characteristics, and Individual Differences

    The Linda problem is an intensely studied task in the judgment literature, in which participants judge the probability of various options and frequently make biased judgements known as conjunction errors. Here, I conceptually replicated and extended the finding by Agnoli and Krantz (1989) that when participants are explicitly trained with Venn diagrams to inhibit their heuristics, successful transfer of learning is observed. I tested whether transfer success was maintained: (1) when the purpose of the training was obscured; (2) after controlling for individual differences; and (3) when learning materials did not include visual images. I successfully replicated their finding, identifying transfer success when the purpose of the training was masked and after controlling for individual differences. Furthermore, the effects of individual differences on transfer success depend on both the kind of learning material used and whether the purpose was masked. Hence, these findings support claims that education can inhibit biases.
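The conjunction error the training targets is a violation of a basic probability law: P(A and B) can never exceed P(A) or P(B), since the conjunction is a subset of each conjunct. A small numerical illustration with made-up probabilities (the 0.05 and 0.30 figures are assumptions for the example, not data from the study):

```python
# Conjunction rule: P(A and B) <= min(P(A), P(B)) — the law the
# Venn-diagram training is meant to make salient. Illustrative numbers:
p_bank_teller = 0.05                  # P(Linda is a bank teller)
p_feminist_given_teller = 0.30        # P(feminist | bank teller)

# P(teller and feminist) = P(teller) * P(feminist | teller)
p_teller_and_feminist = p_bank_teller * p_feminist_given_teller

# Judging "teller and feminist" as MORE likely than "teller" alone
# is the conjunction error.
assert p_teller_and_feminist <= p_bank_teller
```

However representative "feminist" feels of Linda's description, the conjunction is necessarily no more probable than "bank teller" alone — which is exactly the intuition the Venn-diagram training externalizes.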

    TeamSTEPPS and Organizational Culture

    Patient safety issues remain despite the several strategies developed for their prevention. While many safety initiatives bring about improvement, they are repeatedly unsustainable and short-lived. The index hospital's goal was to build an organizational culture on a foundation that improves teamwork and sustains healthcare team engagement. Teamwork influences the efficiency of patient care, patient safety, and clinical outcomes, as it has been identified as an approach for enhancing collaboration, decreasing medical errors, and building a culture of safety in healthcare. The facility implemented Team Strategies and Tools to Enhance Performance and Patient Safety (TeamSTEPPS), an evidence-based framework used for team training to produce valuable and needed changes, facilitating modification of organizational culture, increasing patient safety compliance, and solving particular issues. This study aimed to identify the correlation between TeamSTEPPS enactment and improved organizational culture in the ambulatory care nursing department of a New York City public hospital.

    Machine Learning Approaches for the Prioritisation of Cardiovascular Disease Genes Following Genome-wide Association Study

    Genome-wide association studies (GWAS) have revealed thousands of genetic loci, establishing themselves as a valuable method for unravelling the complex biology of many diseases. As GWAS have grown in size and improved in study design to detect effects, identifying real causal signals and disentangling them from highly correlated markers associated through linkage disequilibrium (LD) remains challenging. This has severely limited GWAS findings and brought the method's value into question. Although thousands of disease susceptibility loci have been reported, causal variants and genes at these loci remain elusive. Post-GWAS analysis aims to dissect the heterogeneity of variant and gene signals. In recent years, machine learning (ML) models have been developed for post-GWAS prioritisation. These models have ranged from logistic regression to more complex ensemble models such as random forests and gradient boosting, as well as deep learning models (i.e., neural networks). When combined with functional validation, these methods have shown important translational insights, providing a strong evidence-based approach to direct post-GWAS research. However, ML approaches are in their infancy across biological applications, and as they continue to evolve an evaluation of their robustness for GWAS prioritisation is needed. Here, I investigate the landscape of ML across selected models, input features, bias risk, and output model performance, with a focus on building a prioritisation framework that is applied to blood pressure GWAS results and tested on re-application to blood lipid traits.
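Framed concretely, post-GWAS gene prioritisation of the kind described is supervised classification: each candidate gene gets a feature vector of functional annotations, a model is trained on genes of known status, and the remaining genes are ranked by predicted causal probability. A minimal NumPy sketch at the simple end of the model range the text names (logistic regression); the features, labels, and hyperparameters are all hypothetical placeholders, not the thesis's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-gene features a post-GWAS classifier might use:
# [distance to lead SNP (scaled), eQTL evidence, pathway membership score]
X = rng.random((200, 3))
# Toy labels: "known causal" genes tend to have strong eQTL + pathway support.
y = ((X[:, 1] + X[:, 2]) > 1.0).astype(float)

# Minimal logistic-regression prioritiser trained by gradient descent —
# a stand-in for the LR-to-gradient-boosting model range in the text.
w, b = np.zeros(3), 0.0
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted causal probability
    w -= 0.5 * (X.T @ (p - y) / len(y))      # average gradient step on weights
    b -= 0.5 * (p - y).mean()                # and on the intercept

# Rank candidate genes by predicted causal probability, best first.
scores = 1.0 / (1.0 + np.exp(-(X @ w + b)))
ranking = np.argsort(-scores)
```

The ensemble and deep models the text mentions slot into the same frame — richer decision functions over the same gene-by-feature matrix — which is why evaluating input features and bias risk matters as much as the model choice itself.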