
    Learning programs by learning from failures

    We describe an inductive logic programming (ILP) approach called learning from failures. In this approach, an ILP system (the learner) decomposes the learning problem into three separate stages: generate, test, and constrain. In the generate stage, the learner generates a hypothesis (a logic program) that satisfies a set of hypothesis constraints (constraints on the syntactic form of hypotheses). In the test stage, the learner tests the hypothesis against training examples. A hypothesis fails when it does not entail all the positive examples or entails a negative example. If a hypothesis fails, then, in the constrain stage, the learner learns constraints from the failed hypothesis to prune the hypothesis space, i.e. to constrain subsequent hypothesis generation. For instance, if a hypothesis is too general (entails a negative example), the constraints prune generalisations of the hypothesis. If a hypothesis is too specific (does not entail all the positive examples), the constraints prune specialisations of the hypothesis. This loop repeats until either (i) the learner finds a hypothesis that entails all the positive and none of the negative examples, or (ii) there are no more hypotheses to test. We introduce Popper, an ILP system that implements this approach by combining answer set programming and Prolog. Popper supports infinite problem domains, reasoning about lists and numbers, learning textually minimal programs, and learning recursive programs. Our experimental results on three domains (toy game problems, robot strategies, and list transformations) show that (i) constraints drastically improve learning performance, and (ii) Popper can outperform existing ILP systems, both in terms of predictive accuracies and learning times. Comment: Accepted for the Machine Learning journal.
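    The generate-test-constrain loop described above can be sketched as follows. This is an illustrative stand-in, not Popper's actual implementation: hypotheses are modelled simply as the sets of examples they cover (so a superset of coverage is a generalisation and a subset is a specialisation), and `entails`, `learn`, and the list-of-candidates interface are hypothetical names.

```python
def entails(hypothesis, example):
    """Illustrative placeholder: a hypothesis 'entails' the examples it covers."""
    return example in hypothesis

def learn(candidates, positives, negatives):
    """Generate-test-constrain loop over an enumeration of candidate hypotheses."""
    constraints = []  # pruning constraints learned from failed hypotheses
    for hypothesis in candidates:
        # Generate stage: skip hypotheses already pruned by learned constraints.
        if any(c(hypothesis) for c in constraints):
            continue
        # Test stage: check the hypothesis against the training examples.
        too_specific = not all(entails(hypothesis, p) for p in positives)
        too_general = any(entails(hypothesis, n) for n in negatives)
        if not too_specific and not too_general:
            return hypothesis  # entails all positives and no negatives
        # Constrain stage: learn pruning constraints from the failure.
        if too_general:
            # Prune every generalisation (coverage superset) of the failure.
            constraints.append(lambda h, bad=hypothesis: bad <= h)
        if too_specific:
            # Prune every specialisation (coverage subset) of the failure.
            constraints.append(lambda h, bad=hypothesis: h <= bad)
    return None  # hypothesis space exhausted with no solution
```

    Note the `bad=hypothesis` default-argument idiom, which binds the failed hypothesis at constraint-creation time rather than sharing one late-bound variable across all constraints.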

    Goal-oriented Dialogue Policy Learning from Failures

    Reinforcement learning methods have been used for learning dialogue policies. However, learning an effective dialogue policy frequently requires prohibitively many conversations. This is partly because of the sparse rewards in dialogues and the very few successful dialogues in the early learning phase. Hindsight experience replay (HER) enables learning from failures, but vanilla HER is inapplicable to dialogue learning because dialogue goals are implicit. In this work, we develop two complex HER methods providing different trade-offs between complexity and performance, and, for the first time, enable HER-based dialogue policy learning. Experiments using a realistic user simulator show that our HER methods outperform existing experience replay methods (as applied to deep Q-networks) in learning rate.
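    The core relabelling step of vanilla HER, which the abstract builds on, can be sketched as below. The function names and the transition layout are hypothetical, not the paper's code: a failed episode is replayed as if the goal it actually reached had been the intended goal, turning a zero-reward trajectory into a useful training signal.

```python
def her_relabel(trajectory, achieved_goal, reward_fn):
    """Relabel a failed trajectory as if its achieved goal had been intended.

    trajectory: list of (state, action, next_state) transitions.
    achieved_goal: the goal actually reached at the end of the episode.
    reward_fn(next_state, goal): reward of a transition under a substituted goal.
    Returns goal-conditioned transitions ready for an experience replay buffer.
    """
    relabelled = []
    for state, action, next_state in trajectory:
        # Recompute the reward with the achieved goal substituted in.
        reward = reward_fn(next_state, achieved_goal)
        relabelled.append((state, action, reward, next_state, achieved_goal))
    return relabelled
```

    In a sparse-reward setting the original trajectory earns no reward at all, whereas the relabelled copy rewards the final transition that reached the substituted goal.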

    Robust Deep Multi-Modal Sensor Fusion using Fusion Weight Regularization and Target Learning

    Sensor fusion has wide applications in many domains, including health care and autonomous systems. While the advent of deep learning has enabled promising multi-modal fusion of high-level features and end-to-end sensor fusion solutions, existing deep-learning-based sensor fusion techniques, including deep gating architectures, are not always resilient, leading to the issue of fusion weight inconsistency. We propose deep multi-modal sensor fusion architectures with enhanced robustness, particularly in the presence of sensor failures. At the core of our gating architectures are fusion weight regularization and fusion target learning, operating on auxiliary unimodal sensing networks appended to the main fusion model. The proposed regularized gating architectures outperform existing deep learning architectures, with and without gating, under both clean and corrupted sensory inputs resulting from sensor failures. The demonstrated improvements are particularly pronounced when one or more sensory modalities are corrupted. Comment: 8 pages.
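    A minimal sketch of the general idea, under stated assumptions: gated fusion computes a softmax-weighted sum of per-modality features, and a regularizer penalizes fusion weights that drift from target weights (here assumed to come from the auxiliary unimodal networks the abstract mentions). The function names and the squared-error form of the penalty are illustrative, not the paper's architecture.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of gate logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def fuse(features, gate_logits):
    """Weighted sum of per-modality feature vectors using softmax gate weights."""
    weights = softmax(gate_logits)
    dim = len(features[0])
    return [sum(w * f[i] for w, f in zip(weights, features)) for i in range(dim)]

def weight_regularizer(gate_logits, target_weights, coeff=0.1):
    """Penalize fusion weights that deviate from target weights, discouraging
    the gate from over-trusting a single (possibly failed) modality."""
    weights = softmax(gate_logits)
    return coeff * sum((w - t) ** 2 for w, t in zip(weights, target_weights))
```

    The regularizer would be added to the main task loss during training, so the gate stays close to the targets unless the data strongly justifies deviating.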

    Towards Accountable AI: Hybrid Human-Machine Analyses for Characterizing System Failure

    As machine learning systems move from computer-science laboratories into the open world, their accountability becomes a high-priority problem. Accountability requires deep understanding of system behavior and its failures. Current evaluation methods such as single-score error metrics and confusion matrices provide aggregate views of system performance that hide important shortcomings. Understanding details about failures is important for identifying pathways for refinement, communicating the reliability of systems in different settings, and specifying appropriate human oversight and engagement. Characterization of failures and shortcomings is particularly complex for systems composed of multiple machine-learned components. For such systems, existing evaluation methods have limited expressiveness in describing and explaining the relationship among input content, the internal states of system components, and final output quality. We present Pandora, a set of hybrid human-machine methods and tools for describing and explaining system failures. Pandora leverages both human and system-generated observations to summarize conditions of system malfunction with respect to the input content and system architecture. We share results of a case study with a machine learning pipeline for image captioning that show how detailed performance views can be beneficial for analysis and debugging.

    SRLG inference in OSPF for improved reconvergence after failures

    The ECODE FP7 project researches cognitive routing functions in future networks. We demonstrate machine-learning-augmented OSPF routing, which infers SRLGs from network failure history. Inferred SRLGs are used to improve OSPF convergence and recovery times during subsequent (multiple) network failures.

    LEARNING FROM INTERORGANIZATIONAL PRODUCT FAILURE EXPERIENCE IN THE MEDICAL DEVICE INDUSTRY

    Much research examines the causes of product failures, such as the Ford Pinto gas tank design. Research also examines the consequences of product failures, such as new product introductions resulting from the need to improve failed products. However, little is known about how the causes and consequences of product failures interact across different firms, and generate inter-organizational learning, within the same industry. Specifically, limited research has examined whether a firm learns to reduce its own annual rate of product failures (e.g., experiences fewer product-related adverse events) by attending to the product failures and new product introductions of its competitors. In addition, we do not know (1) how delayed reporting of product failure influences inter-organizational learning, and (2) how the introduction of new products by one company impacts another firm's effort to learn from this competitor's product failures. To address these gaps, this dissertation develops and tests relationships between (1) inter-organizational learning from product failures, (2) product failure reporting delays, and (3) new product introductions. Regression analysis of 98,576 manufacturing firm-year observations from the medical device industry over a ten-year period (1998 to 2008) supports the proposed model. Specifically, the analysis supported two insights: (1) As expected, a competitor's reporting delays can inhibit learning from others' failures by increasing the chance of making poor inferences about the failure. Unexpectedly, however, delays can also improve inter-organizational learning because reports that have taken longer to file develop a clearer understanding of the failure's cause-effect relationships. (2) As expected, a competitor's new product introductions positively impact inter-organizational learning by transferring knowledge of product design between firms. Unexpectedly, a competitor's new product introductions can also negatively impact inter-organizational learning from product failure by distracting the observing firm's attention away from the competitor's failures. The thesis contributes to the inter-organizational learning literature by (1) modelling learning from others' product failures, (2) highlighting the effects of reporting delays, and (3) showing how others' new product introductions can distract. This thesis shows that learning from others' product failures and new product introductions has significant benefits because it prevents serious injury and death among device users.

    Death is not a success: reflections on business exit

    This article is a critical evaluation of claims that business exits should not be seen as failures, on the grounds that they may constitute voluntary liquidations or because they are learning opportunities. Such claims can be seen as further evidence of bias affecting entrepreneurship research, where failures are repackaged as successes. This article reiterates that the majority of business exits are unsuccessful. Drawing on ideas from the organisational life course, it suggests that business ‘death’ is a suitable term for describing business closure. Even cases of voluntary ‘harvest liquidation’, such as retirement, can be meaningfully described as business deaths.

    Learning from Failures in Venture Philanthropy and Social Investment

    EVPA (the European Venture Philanthropy Association) has collected the experiences and lessons of twelve of its member organisations with a view to sharing their lessons from failure with other practitioners. The report looks at strategies and investments that failed and the reasons they did so. Outlining risk mitigation strategies, the report focuses on overcoming risk both in the strategy of the venture philanthropy and/or social investment organisation and in the execution of specific investments.