
    Discovering linear causal model from incomplete data

    One common drawback of algorithms for learning linear causal models is that they cannot deal with incomplete data sets. This is unfortunate, since many real problems involve missing data or even hidden variables. In this paper, based on multiple imputation, we propose a three-step process to learn linear causal models from incomplete data sets. Experimental results indicate that this algorithm outperforms the single imputation method (EM algorithm) and the simple list deletion method, and that for lower missing rates it can even find models better than those produced by the greedy learning algorithm MLGS working on a complete data set. In addition, the method is amenable to parallel or distributed processing, which is an important characteristic for data mining in large data sets.
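
    The abstract names a three-step multiple-imputation process without spelling it out; below is a minimal sketch of that idea, assuming a toy correlation-threshold learner in place of the paper's actual linear-causal-model learner (the names learn_structure and multiple_imputation_learn are illustrative, not from the paper).

        # Hedged sketch of multiple imputation for structure learning
        # (illustrative only; not the paper's code).
        import numpy as np
        from sklearn.experimental import enable_iterative_imputer  # noqa: F401
        from sklearn.impute import IterativeImputer

        def learn_structure(data, threshold=0.3):
            # Toy stand-in for a linear causal model learner such as MLGS:
            # propose an edge (i, j) when |correlation| exceeds a threshold.
            corr = np.corrcoef(data, rowvar=False)
            d = corr.shape[0]
            return {(i, j) for i in range(d) for j in range(i + 1, d)
                    if abs(corr[i, j]) > threshold}

        def multiple_imputation_learn(incomplete, m=5):
            # Step 1: draw m completed datasets from the imputation model;
            # Step 2: learn a structure on each;
            # Step 3: pool the structures by majority vote over edges.
            votes = {}
            for seed in range(m):
                imputer = IterativeImputer(sample_posterior=True, random_state=seed)
                completed = imputer.fit_transform(incomplete)
                for edge in learn_structure(completed):
                    votes[edge] = votes.get(edge, 0) + 1
            return {edge for edge, v in votes.items() if v > m / 2}

    Because each imputed dataset is processed independently, the per-imputation learning step parallelizes naturally, which matches the abstract's remark about parallel or distributed processing.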

    Efficient computational strategies to learn the structure of probabilistic graphical models of cumulative phenomena

    Structural learning of Bayesian Networks (BNs) is an NP-hard problem, further complicated by many theoretical issues, such as I-equivalence among different structures. In this work, we focus on a specific subclass of BNs, named Suppes-Bayes Causal Networks (SBCNs), which include specific structural constraints based on Suppes' probabilistic causation to efficiently model cumulative phenomena. Here we compare the performance, via extensive simulations, of various state-of-the-art search strategies, such as local search techniques and Genetic Algorithms, as well as of distinct regularization methods. The assessment is performed on a large number of simulated datasets from topologies with distinct levels of complexity, various sample sizes, and different rates of errors in the data. Among the main results, we show that the introduction of Suppes' constraints dramatically improves inference accuracy by reducing the solution space and providing a temporal ordering on the variables. We also report on trade-offs among different search techniques that can be efficiently employed in distinct experimental settings. This manuscript is an extended version of the paper "Structural Learning of Probabilistic Graphical Models of Cumulative Phenomena" presented at the 2018 International Conference on Computational Science.
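
    As a rough illustration of how Suppes' constraints prune the search space, the sketch below admits an edge only if it satisfies temporal priority and probability raising; the interface (a binary data matrix plus per-variable observation times) is an assumption for the example, not the SBCN implementation from the paper.

        # Hedged sketch of Suppes' prima facie causation filter
        # (assumed interface; not the paper's code).
        import numpy as np

        def prima_facie(data, times, i, j):
            # Edge i -> j is admissible only if i precedes j in time and
            # raises its probability: P(x_j=1 | x_i=1) > P(x_j=1 | x_i=0).
            if times[i] >= times[j]:
                return False
            xi = data[:, i].astype(bool)
            if not xi.any() or xi.all():
                return False
            xj = data[:, j]
            return xj[xi].mean() > xj[~xi].mean()

        def admissible_edges(data, times):
            # The downstream search (local search, genetic algorithm, ...)
            # then only explores edges that survive this filter, which is
            # how the constraints shrink the solution space.
            d = data.shape[1]
            return [(i, j) for i in range(d) for j in range(d)
                    if i != j and prima_facie(data, times, i, j)]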

    Prediction and Topological Models in Neuroscience

    In the last two decades, philosophy of neuroscience has predominantly focused on explanation. Indeed, it has been argued that mechanistic models are the standards of explanatory success in neuroscience over, among other things, topological models. However, explanatory power is only one virtue of a scientific model. Another is its predictive power. Unfortunately, the notion of prediction has received comparatively little attention in the philosophy of neuroscience, in part because predictions seem disconnected from interventions. In contrast, we argue that topological predictions can and do guide interventions in science, both inside and outside of neuroscience. Topological models allow researchers to predict many phenomena, including diseases, treatment outcomes, aging, and cognition, among others. Moreover, we argue that these predictions also offer strategies for useful interventions. Topology-based predictions play this role regardless of whether they do or can receive a mechanistic interpretation. We conclude by making a case for philosophers to focus on prediction in neuroscience rather than on explanation alone.

    Graph Transformer for Recommendation

    This paper presents a novel approach to representation learning in recommender systems by integrating generative self-supervised learning with a graph transformer architecture. We highlight the importance of high-quality data augmentation with relevant self-supervised pretext tasks for improving performance. Towards this end, we propose a new approach that automates the self-supervision augmentation process through rationale-aware generative SSL that distills informative user-item interaction patterns. The proposed recommender, Graph TransFormer (GFormer), offers parameterized collaborative rationale discovery for selective augmentation while preserving global-aware user-item relationships. In GFormer, we allow the rationale-aware SSL to inspire graph collaborative filtering with task-adaptive invariant rationalization in the graph transformer. The experimental results reveal that our GFormer consistently improves performance over baselines on different datasets. Several in-depth experiments further investigate the invariant rationale-aware augmentation from various aspects. The source code for this work is publicly available at: https://github.com/HKUDS/GFormer.
    Comment: Accepted by SIGIR'202
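
    The actual implementation is in the linked repository; purely as a hedged illustration of the selective-augmentation idea, the sketch below scores user-item edges with their endpoint embeddings and keeps the highest-scoring ones as the augmented graph view (function and parameter names are invented for the example, not GFormer's API).

        # Illustrative sketch of rationale-style selective augmentation;
        # see https://github.com/HKUDS/GFormer for the real implementation.
        import torch

        def rationale_augment(edge_index, user_emb, item_emb, keep_ratio=0.8):
            # edge_index: (2, E) tensor of (user, item) interaction pairs.
            u, i = edge_index
            # Score each edge by the affinity of its endpoint embeddings,
            # a simple proxy for a learned "rationale" weight.
            scores = (user_emb[u] * item_emb[i]).sum(dim=-1)
            k = max(1, int(keep_ratio * scores.numel()))
            keep = torch.topk(scores, k).indices
            # The retained subgraph serves as the augmented view for the
            # self-supervised pretext task.
            return edge_index[:, keep]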

    Mechanistic unity of the predictive mind

    It is often recognized that cognitive science employs a diverse explanatory toolkit. It has also been argued that cognitive scientists should embrace this explanatory diversity rather than pursue a search for some grand unificatory framework or theory. This pluralist stance dovetails with the mechanistic view of cognitive-scientific explanation. However, one recently proposed theory – based on the idea that the brain is a predictive engine – opposes the spirit of pluralism by unapologetically wearing unificatory ambitions on its sleeve. In this paper, my aim is to investigate those pretensions to elucidate what sort of unification is on offer. I challenge the idea that explanatory unification of cognitive science follows from the Free Energy Principle. I claim that if the predictive story is to provide an explanatory unification, it is rather by proposing that many distinct cognitive mechanisms fall under the same functional schema, one that pertains to prediction error minimization. Seen this way, the brain is not simply a predictive mechanism – it is a collection of predictive mechanisms. I also pursue the more general aim of investigating the value of unificatory power for mechanistic explanations. I argue that even though unification is not an absolute evaluative criterion for mechanistic explanations, it may play an epistemic role in evaluating the credibility of an explanation relative to its direct competitors.

    Backwards is the way forward: feedback in the cortical hierarchy predicts the expected future

    Clark offers a powerful description of the brain as a prediction machine, one that makes progress on two distinct levels. First, on an abstract conceptual level, it provides a unifying framework for perception, action, and cognition (including subdivisions such as attention, expectation, and imagination). Second, hierarchical prediction offers progress on a concrete descriptive level for testing and constraining conceptual elements and mechanisms of predictive coding models (estimation of predictions, prediction errors, and internal models).