82,273 research outputs found

    Learning Independent Causal Mechanisms

    Full text link
    Statistical learning relies upon data sampled from a distribution, and we usually do not care what actually generated it in the first place. From the point of view of causal modeling, the structure of each distribution is induced by physical mechanisms that give rise to dependences between observables. Mechanisms, however, can be meaningful autonomous modules of generative models that make sense beyond a particular entailed data distribution, lending themselves to transfer between problems. We develop an algorithm to recover a set of independent (inverse) mechanisms from a set of transformed data points. The approach is unsupervised and based on a set of experts that compete for data generated by the mechanisms, driving specialization. We analyze the proposed method in a series of experiments on image data. Each expert learns to map a subset of the transformed data back to a reference distribution. The learned mechanisms generalize to novel domains. We discuss implications for transfer learning and links to recent trends in generative modeling. Comment: ICML 2018
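
    The competition step described above can be sketched roughly as follows; this is a PyTorch illustration under assumed components (a list of experts acting as candidate inverse mechanisms, a discriminator scoring closeness to the reference distribution, and a joint optimizer over the experts), not the authors' implementation.

    import torch

    def competition_step(experts, discriminator, opt_experts, x_transformed):
        # Score every expert's output under the discriminator and let the
        # highest-scoring expert "win" each transformed sample.
        with torch.no_grad():
            scores = torch.stack([discriminator(e(x_transformed)).squeeze(-1)
                                  for e in experts])          # (n_experts, batch)
            winners = scores.argmax(dim=0)                    # winning expert per sample

        # Only winning experts receive gradients: each is pushed to map its
        # assigned samples toward the reference distribution.
        loss = 0.0
        for i, expert in enumerate(experts):
            mask = winners == i
            if mask.any():
                loss = loss - discriminator(expert(x_transformed[mask])).mean()

        opt_experts.zero_grad()
        loss.backward()
        opt_experts.step()
        return winners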

    Task Transfer by Preference-Based Cost Learning

    Full text link
    The goal of task transfer in reinforcement learning is to migrate an agent's action policy from the source task to the target task. Given their successes on robotic action planning, current methods mostly rely on two requirements: exactly-relevant expert demonstrations or an explicitly coded cost function for the target task, both of which, however, are inconvenient to obtain in practice. In this paper, we relax these two strong conditions by developing a novel task transfer framework in which expert preference is applied as guidance. In particular, we alternate the following two steps: first, experts apply pre-defined preference rules to select related expert demonstrations for the target task; second, based on the selection result, we learn the target cost function and trajectory distribution simultaneously via enhanced Adversarial MaxEnt IRL and generate more trajectories from the learned target distribution for the next preference selection. A theoretical analysis of the distribution learning and convergence of the proposed algorithm is provided. Extensive simulations on several benchmarks have been conducted to further verify the effectiveness of the proposed method. Comment: Accepted to AAAI 2019. Mingxuan Jing and Xiaojian Ma contributed equally to this work.
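
    The alternation can be outlined as below; the helper callables (preference_select, maxent_irl_update, sample_trajectories) are hypothetical placeholders for the preference rules, the enhanced Adversarial MaxEnt IRL update, and trajectory generation, so this is a structural sketch rather than the paper's algorithm.

    def preference_based_task_transfer(source_demos, preference_select,
                                       maxent_irl_update, sample_trajectories,
                                       n_rounds=10, n_new_trajectories=50):
        candidates = list(source_demos)
        cost_fn, policy = None, None
        for _ in range(n_rounds):
            # Step 1: expert preference rules pick demonstrations that are
            # relevant to the target task from the current candidate pool.
            selected = preference_select(candidates)
            # Step 2: jointly update the target cost function and trajectory
            # distribution (policy) via the adversarial MaxEnt IRL step.
            cost_fn, policy = maxent_irl_update(selected, cost_fn, policy)
            # Generate trajectories from the learned distribution and feed
            # them into the next round of preference selection.
            candidates = selected + sample_trajectories(policy, n_new_trajectories)
        return cost_fn, policy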

    Sample Efficient On-line Learning of Optimal Dialogue Policies with Kalman Temporal Differences

    No full text
    Designing dialog policies for voice-enabled interfaces is a tailoring job that is most often left to natural language processing experts. This job is generally redone for every new dialog task because cross-domain transfer is not possible. For this reason, machine learning methods for dialog policy optimization have been investigated during the last 15 years. In particular, reinforcement learning (RL) is now part of the state of the art in this domain. Standard RL methods require testing more or less random changes in the policy on users to assess them as improvements or degradations. This is called on-policy learning. Nevertheless, it can result in system behaviors that are not acceptable to users. Learning algorithms should ideally infer an optimal strategy by observing interactions generated by a non-optimal but acceptable strategy, that is, learning off-policy. In this contribution, a sample-efficient, online and off-policy reinforcement learning algorithm is proposed to learn an optimal policy from a few hundred dialogues generated with a very simple handcrafted policy.
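
    To make the off-policy idea concrete, here is a generic sketch of learning from logged dialogue transitions collected with a fixed handcrafted policy; it uses plain tabular Q-learning rather than the Kalman Temporal Differences algorithm the paper proposes, so treat it only as an illustration of off-policy learning from a small batch of dialogues.

    from collections import defaultdict

    def off_policy_q_learning(logged_transitions, actions, alpha=0.1,
                              gamma=0.95, n_passes=20):
        # logged_transitions: iterable of (state, action, reward, next_state, done)
        # tuples gathered while the handcrafted policy interacted with users.
        q = defaultdict(float)
        for _ in range(n_passes):
            for s, a, r, s_next, done in logged_transitions:
                target = r if done else r + gamma * max(q[(s_next, b)] for b in actions)
                q[(s, a)] += alpha * (target - q[(s, a)])
        # Return a greedy dialogue policy with respect to the learned Q-values.
        return lambda s: max(actions, key=lambda b: q[(s, b)])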

    Counterfactual inference to predict causal knowledge graph for relational transfer learning by assimilating expert knowledge --Relational feature transfer learning algorithm

    Get PDF
    Transfer learning (TL) is a machine learning (ML) method in which knowledge is transferred from existing models of related problems to the model for the problem at hand. Relational TL enables ML models to transfer relationship networks from one domain to another. However, it has two critical issues. One is determining the proper way of extracting and expressing relationships among data features in the source domain so that the relationships can be transferred to the target domain. The other is how to carry out the transfer procedure. Knowledge graphs (KGs) are knowledge bases that represent data and logic in graph-structured form; they are helpful tools for dealing with the first issue. The proposed relational feature transfer learning algorithm (RF-TL) embodies an extended structural equation modelling (SEM) as a method for constructing KGs. Additionally, in fields related to people's lives, property, safety, and security, such as medicine, economics, and law, the knowledge of domain experts is a gold standard. This paper introduces causal analysis and counterfactual inference into the TL domain to direct the transfer procedure. Different from traditional feature-based TL algorithms such as transfer component analysis (TCA) and CORrelation ALignment (CORAL), RF-TL not only considers relations between feature items but also utilizes causality knowledge, enabling it to perform well in practical cases. The algorithm was tested on two healthcare-related datasets, a sleep apnea questionnaire study and COVID-19 case data on ICU admission, and its performance was compared with TCA and CORAL. The experimental results show that RF-TL generates better transferred models that give more accurate predictions with fewer input features.
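
    For context, the CORAL baseline mentioned above admits a very short implementation: source features are whitened with their own covariance and re-colored with the target covariance so that second-order statistics match. The NumPy/SciPy sketch below illustrates that feature-level baseline, not the RF-TL algorithm itself.

    import numpy as np
    from scipy.linalg import fractional_matrix_power

    def coral(source_x, target_x, reg=1.0):
        """Align source features to the target covariance (CORAL)."""
        d = source_x.shape[1]
        c_s = np.cov(source_x, rowvar=False) + reg * np.eye(d)
        c_t = np.cov(target_x, rowvar=False) + reg * np.eye(d)
        whiten = fractional_matrix_power(c_s, -0.5)
        recolor = fractional_matrix_power(c_t, 0.5)
        # Discard negligible imaginary parts left over from the matrix powers.
        return np.real(source_x @ whiten @ recolor)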

    A foundational neural operator that continuously learns without forgetting

    Full text link
    Machine learning has witnessed substantial growth, leading to the development of advanced artificial intelligence models crafted to address a wide range of real-world challenges spanning various domains, such as computer vision, natural language processing, and scientific computing. Nevertheless, the creation of custom models for each new task remains a resource-intensive undertaking, demanding considerable computational time and memory resources. In this study, we introduce the concept of the Neural Combinatorial Wavelet Neural Operator (NCWNO) as a foundational model for scientific computing. This model is specifically designed to excel in learning from a diverse spectrum of physics and continuously adapt to the solution operators associated with parametric partial differential equations (PDEs). The NCWNO leverages a gated structure that employs local wavelet experts to acquire shared features across multiple physical systems, complemented by a memory-based ensembling approach among these local wavelet experts. This combination enables rapid adaptation to new challenges. The proposed foundational model offers two key advantages: (i) it can simultaneously learn solution operators for multiple parametric PDEs, and (ii) it can swiftly generalize to new parametric PDEs with minimal fine-tuning. The proposed NCWNO is the first foundational operator learning algorithm distinguished by (i) robustness against catastrophic forgetting, (ii) maintenance of positive transfer for new parametric PDEs, and (iii) facilitation of knowledge transfer across dissimilar tasks. Through an extensive set of benchmark examples, we demonstrate that the NCWNO can outperform task-specific baseline operator learning frameworks with minimal hyperparameter tuning at the prediction stage. We also show that, with minimal fine-tuning, the NCWNO performs accurate combinatorial learning of new parametric PDEs.
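
    The gating-over-local-experts idea can be illustrated with a minimal PyTorch module in which a learned gate blends the outputs of a bank of expert networks; the actual NCWNO uses wavelet-based expert operators and memory-based ensembling, which are not reproduced in this sketch.

    import torch
    import torch.nn as nn

    class GatedExperts(nn.Module):
        def __init__(self, dim, n_experts=4, hidden=64):
            super().__init__()
            self.experts = nn.ModuleList(
                nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))
                for _ in range(n_experts))
            self.gate = nn.Linear(dim, n_experts)

        def forward(self, x):
            # x: (batch, dim); the gate assigns soft weights to each expert
            # and the experts' outputs are blended accordingly.
            weights = torch.softmax(self.gate(x), dim=-1)               # (batch, n_experts)
            outputs = torch.stack([e(x) for e in self.experts], dim=1)  # (batch, n_experts, dim)
            return (weights.unsqueeze(-1) * outputs).sum(dim=1)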