
    CUTS: Neural Causal Discovery from Irregular Time-Series Data

    Causal discovery from time-series data has been a central task in machine learning. Recently, Granger causality inference has been gaining momentum thanks to its good explainability and high compatibility with emerging deep neural networks. However, most existing methods assume structured input data and degrade greatly when encountering data with randomly missing entries or non-uniform sampling frequencies, which hampers their application in real scenarios. To address this issue, we present CUTS, a neural Granger causal discovery algorithm that jointly imputes unobserved data points and builds causal graphs by plugging two mutually boosting modules into an iterative framework: (i) a latent data prediction stage, in which a Delayed Supervision Graph Neural Network (DSGNN) hallucinates and registers unstructured data that may be high-dimensional and have a complex distribution; and (ii) a causal graph fitting stage, which builds a causal adjacency matrix from the imputed data under a sparsity penalty. Experiments show that CUTS effectively infers causal graphs from unstructured time-series data, with significantly superior performance to existing methods. Our approach constitutes a promising step towards applying causal discovery to real applications with non-ideal observations. (Comment: https://openreview.net/forum?id=UG8bQcD3Em)
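    The alternating two-stage loop described in the abstract can be sketched in a few lines. The toy below is not the authors' DSGNN: it uses a plain lag-1 linear predictor gated by a learnable adjacency, a hypothetical 20% missing-at-random mask, and a simple penalty on the gate, purely to illustrate how imputation and sparse graph fitting can boost each other inside one optimization loop.

    import torch

    torch.manual_seed(0)

    # Toy data: N series, T steps, a 0 -> 1 lag-1 coupling, and ~20% missing entries.
    N, T = 4, 400
    X = torch.randn(N, T)
    X[1, 1:] = 0.8 * X[0, :-1] + 0.1 * torch.randn(T - 1)
    mask = torch.rand(N, T) > 0.2
    X_obs = torch.where(mask, X, torch.zeros_like(X))

    A = torch.zeros(N, N, requires_grad=True)   # gate logits: A[i, j] ~ "j causes i"
    W = torch.zeros(N, N, requires_grad=True)   # lag-1 regression weights
    X_imp = X_obs.clone()                       # current imputation
    opt = torch.optim.Adam([A, W], lr=0.02)
    lam = 0.05                                  # sparsity strength

    for _ in range(400):
        gate = torch.sigmoid(A)                 # soft causal graph
        pred = (gate * W) @ X_imp[:, :-1]       # one-step-ahead forecasts, shape (N, T-1)
        err = (pred - X_imp[:, 1:]) ** 2
        # Causal graph fitting: supervise only on observed targets, penalize dense graphs.
        loss = (err * mask[:, 1:].float()).mean() + lam * gate.mean()
        opt.zero_grad(); loss.backward(); opt.step()
        # Latent data prediction: refill the unobserved entries with the model's forecasts.
        with torch.no_grad():
            X_imp[:, 1:] = torch.where(mask[:, 1:], X_obs[:, 1:], pred)

    print(torch.sigmoid(A).detach())            # the (1, 0) entry should stand out

    With this toy generator the gate entry for the true 0 -> 1 edge is pushed towards one by the prediction loss, while the penalty drives the spurious entries down.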

    A neural network and rule based system application in water demand forecasting

    This thesis was submitted for the degree of Doctor of Philosophy and was awarded by Brunel University. This thesis describes a short-term water demand forecasting application based upon a combination of a neural network forecast generator and a rule-based system that modifies the resulting forecasts. Conventionally, short-term forecasting of both water consumption and electrical load demand has been based upon mathematical models that aim either to extract the mathematical properties displayed by a time series of historical data, or to represent the causal relationships between the level of demand and the key factors that determine that demand. These conventional approaches have been able to achieve acceptable levels of prediction accuracy for those days where distorting, non-cyclic influences are not present to a significant degree. However, when such distortions are present, the resultant decrease in prediction accuracy has a detrimental effect upon the controlling systems that are attempting to optimise the operation of the water or electricity supply network. The abnormal, non-cyclic factors can be divided into those related to changes in the supply network itself, those related to particular dates or times of the year, and those related to the prevailing meteorological conditions. If a prediction system is to provide consistently accurate forecasts, then it has to be able to incorporate the effects of each of the factor types outlined above. The prediction system proposed in this thesis achieves this by using a neural network that, through the application of appropriately classified example sets, can track the varying relationship between the level of demand and key meteorological variables. The influence of supply network changes and calendar-related events is accounted for by a rule base of prediction-adjusting rules built up with reference to past occurrences of similar events. The resulting system is capable of eliminating a significant proportion of the large prediction errors that can lead to non-optimal supply network operation.
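    As a rough illustration of the forecaster-plus-rule-base architecture (not the thesis' actual models), the sketch below trains a small neural network on synthetic temperature/weekday data and then applies multiplicative corrections from an invented rule table for calendar or supply-network events.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(0)

    # Synthetic daily history: demand driven by temperature and weekday (illustrative only).
    days = np.arange(730)
    temp = 15 + 10 * np.sin(2 * np.pi * days / 365) + rng.normal(0, 2, days.size)
    weekday = days % 7
    demand = 100 + 1.5 * temp - 5 * (weekday >= 5) + rng.normal(0, 3, days.size)

    X = np.column_stack([temp, weekday])
    net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0).fit(X, demand)

    # Rule base: multiplicative corrections for events the network has not seen.
    # Event names and percentages are invented for illustration.
    rules = {"public_holiday": 0.92, "mains_burst_repair": 1.05}

    def forecast(temp_c, weekday_idx, events=()):
        """Neural forecast first, then apply every matching prediction-adjusting rule."""
        y = float(net.predict([[temp_c, weekday_idx]])[0])
        for ev in events:
            y *= rules.get(ev, 1.0)
        return y

    print(forecast(22.0, 2))                               # an ordinary Tuesday
    print(forecast(22.0, 2, events=["public_holiday"]))    # same day flagged as a holiday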

    Neural Networks with Non-Uniform Embedding and Explicit Validation Phase to Assess Granger Causality

    A challenging problem when studying a dynamical system is to find the interdependencies among its individual components. Several algorithms have been proposed to detect directed dynamical influences between time series. Two of the most used approaches are a model-free one (transfer entropy) and a model-based one (Granger causality). Several pitfalls are related to the presence or absence of assumptions in modeling the relevant features of the data. We tried to overcome those pitfalls using a neural network approach in which a model is built without any a priori assumptions. In this sense, this method can be seen as a bridge between model-free and model-based approaches. The experiments performed show that the method presented in this work can detect the correct dynamical information flows occurring in a system of time series. Additionally, we adopt a non-uniform embedding framework according to which only the past states that actually help the prediction enter the model, improving the prediction and avoiding the risk of overfitting. This method also leads to a further improvement with respect to traditional Granger causality approaches when redundant variables (i.e. variables sharing the same information about the future of the system) are involved. The neural networks are also able to recognize dynamics in data sets completely different from the ones used during the training phase.
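    The non-uniform embedding idea, admitting only those past states that demonstrably improve prediction on held-out data, can be sketched as a greedy forward selection around a small regressor. The coupled series, lag range, and MLP size below are arbitrary choices for illustration, not the paper's setup.

    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)

    # Two coupled series: y depends on its own past and on x lagged by 2 steps.
    T = 600
    x = rng.normal(size=T)
    y = np.zeros(T)
    for t in range(2, T):
        y[t] = 0.6 * y[t - 1] + 0.8 * x[t - 2] + 0.1 * rng.normal()

    max_lag = 4
    series = {"x": x, "y": y}
    candidates = [(name, lag) for name in series for lag in range(1, max_lag + 1)]
    target = y[max_lag:]
    split = len(target) * 2 // 3                       # explicit validation segment

    def design(selected):
        cols = [series[n][max_lag - l: T - l] for n, l in selected]
        return np.column_stack(cols)

    def val_error(selected):
        if not selected:                               # baseline: predict the training mean
            return np.mean((target[:split].mean() - target[split:]) ** 2)
        Z = design(selected)
        m = MLPRegressor(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
        m.fit(Z[:split], target[:split])
        return np.mean((m.predict(Z[split:]) - target[split:]) ** 2)

    # Non-uniform embedding: greedily add the single past state that helps validation most.
    selected, best = [], val_error([])
    while True:
        scores = {c: val_error(selected + [c]) for c in candidates if c not in selected}
        if not scores:
            break
        cand, err = min(scores.items(), key=lambda kv: kv[1])
        if err >= best:
            break
        selected.append(cand)
        best = err

    print("selected past states:", selected)
    print("x Granger-causes y:", any(n == "x" for n, _ in selected))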

    Locally embedded presages of global network bursts

    Spontaneous, synchronous bursting of neural populations is a widely observed phenomenon in nervous networks, which is considered important for both the functions and dysfunctions of the brain. However, how global synchrony across a large number of neurons emerges from an initially non-bursting network state is not fully understood. In this study, we develop a new state-space reconstruction method combined with high-resolution recordings of cultured neurons. This method extracts deterministic signatures of upcoming global bursts in the "local" dynamics of individual neurons during non-bursting periods. We find that local information within a single-cell time series can match or even outperform the global mean-field activity for predicting future global bursts. Moreover, the inter-cell variability in burst predictability is found to reflect the network structure realized in the non-bursting periods. These findings demonstrate the deterministic mechanisms underlying the locally concentrated early warnings of the global state transition in self-organized networks.
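    The core idea, delay-embedding a single cell's non-bursting activity and asking whether nearby states in that local reconstruction precede global bursts, can be imitated with a toy shared slow drive and a k-nearest-neighbour predictor. The drive model, embedding dimension, and thresholds below are invented for illustration and are not the paper's recording setup.

    import numpy as np

    rng = np.random.default_rng(2)

    # Toy recording: a slow shared drive, one cell's noisy view of it, and a flag that
    # marks whether a global burst occurs `horizon` steps ahead.
    T, dim, tau, horizon = 2000, 3, 2, 5
    drive = np.zeros(T)
    for t in range(1, T):
        drive[t] = 0.98 * drive[t - 1] + 0.2 * rng.normal()
    local = drive + 0.3 * rng.normal(size=T)
    burst = (np.roll(drive, -horizon) > 1.0).astype(float)

    # Delay embedding of the *local* signal: v_t = (x_t, x_{t-tau}, ..., x_{t-(dim-1)tau}).
    start = (dim - 1) * tau
    V = np.column_stack([local[start - k * tau: T - k * tau] for k in range(dim)])
    labels = burst[start:]

    # Split in time; predict each later state from its k nearest earlier embedded states.
    split, k = len(labels) // 2, 10
    d = np.linalg.norm(V[split:, None, :] - V[None, :split, :], axis=-1)
    nn_idx = np.argsort(d, axis=1)[:, :k]
    pred = labels[:split][nn_idx].mean(axis=1) > 0.5

    print("burst base rate:", labels[split:].mean())
    print("local-embedding prediction accuracy:", (pred == (labels[split:] > 0.5)).mean())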

    Causal connectivity of evolved neural networks during behavior

    To show how causal interactions in neural dynamics are modulated by behavior, it is valuable to analyze these interactions without perturbing or lesioning the neural mechanism. This paper proposes a method, based on a graph-theoretic extension of vector autoregressive modeling and 'Granger causality,' for characterizing causal interactions generated within intact neural mechanisms. This method, called 'causal connectivity analysis,' is illustrated via model neural networks optimized for controlling target fixation in a simulated head-eye system, in which the structure of the environment can be experimentally varied. Causal connectivity analysis of this model yields novel insights into the neural mechanisms underlying sensorimotor coordination. In contrast to networks supporting comparatively simple behavior, networks supporting rich adaptive behavior show a higher density of causal interactions, as well as a stronger causal flow from sensory inputs to motor outputs. They also show different arrangements of 'causal sources' and 'causal sinks': nodes that differentially affect, or are affected by, the remainder of the network. Finally, analysis of causal connectivity can predict the functional consequences of network lesions. These results suggest that causal connectivity analysis may have useful applications in the analysis of neural dynamics.
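    A minimal version of the underlying machinery, fitting vector autoregressions, testing whether dropping one node's past worsens prediction of another, and reading sources and sinks off the resulting directed graph, might look like the sketch below. The three-node chain and the VAR order are illustrative stand-ins, not the evolved networks of the paper.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(3)

    # Toy 3-node system with lag-1 couplings 0 -> 1 -> 2 (illustrative only).
    T, p, n = 1000, 2, 3
    X = np.zeros((n, T))
    for t in range(1, T):
        e = rng.normal(size=n)
        X[0, t] = 0.5 * X[0, t - 1] + e[0]
        X[1, t] = 0.5 * X[1, t - 1] + 0.6 * X[0, t - 1] + e[1]
        X[2, t] = 0.5 * X[2, t - 1] + 0.6 * X[1, t - 1] + e[2]

    def regressors(variables):
        """Constant plus p lags of each chosen variable, aligned with targets at p..T-1."""
        cols = [X[v, p - k: T - k] for v in variables for k in range(1, p + 1)]
        return np.column_stack([np.ones(T - p)] + cols)

    def rss(target, Z):
        beta, *_ = np.linalg.lstsq(Z, X[target, p:], rcond=None)
        resid = X[target, p:] - Z @ beta
        return float(resid @ resid), Z.shape[1]

    G = np.zeros((n, n), dtype=bool)                 # G[j, i]: j Granger-causes i
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            rss_full, k_full = rss(i, regressors(range(n)))
            rss_red, k_red = rss(i, regressors([v for v in range(n) if v != j]))
            # Standard F-test comparing the restricted and full autoregressions.
            F = ((rss_red - rss_full) / (k_full - k_red)) / (rss_full / (T - p - k_full))
            G[j, i] = stats.f.sf(F, k_full - k_red, T - p - k_full) < 0.01

    print("causal adjacency (rows cause columns):\n", G.astype(int))
    print("out-degree minus in-degree (sources > 0, sinks < 0):", G.sum(axis=1) - G.sum(axis=0))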

    Classification-based prediction of effective connectivity between timeseries with a realistic cortical network model

    Effective connectivity measures the pattern of causal interactions between brain regions. Traditionally, these patterns of causality are inferred from brain recordings using either non-parametric (model-free) or parametric (model-based) approaches. The latter, when based on biophysically plausible models, have the advantage that they may facilitate the interpretation of causality in terms of underlying neural mechanisms. Recent biophysically plausible neural network models of recurrent microcircuits have been shown to reproduce the characteristics of real neural activity well and can be applied to model interacting cortical circuits. However, it is challenging to invert these models in order to estimate effective connectivity from observed data. Here, we propose to use a classification-based method to approximate the result of such complex model inversion. The classifier predicts the pattern of causal interactions given a multivariate timeseries as input. It is trained on a large number of pairs, each consisting of a multivariate timeseries and the corresponding pattern of causal interactions, generated by simulation from the neural network model. In simulated experiments, we show that the proposed method is much more accurate in detecting the causal structure of timeseries than current best-practice methods. Additionally, we present further results characterizing the validity of the neural network model and the ability of the classifier to adapt to the generative model of the data.
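    The classification-based shortcut can be imitated without the biophysical microcircuit model: in the sketch below, a simple coupled-AR generator stands in for the network simulator, lagged cross-correlations stand in for richer features, and a random forest stands in for whatever classifier is actually used. All of these substitutions are assumptions made purely for illustration.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(4)

    def simulate(label, T=300):
        """Stand-in generative model: two AR(1) series with optional lag-1 coupling.
        label 0: no coupling, 1: x -> y, 2: y -> x."""
        x, y = np.zeros(T), np.zeros(T)
        for t in range(1, T):
            cx = 0.5 * y[t - 1] if label == 2 else 0.0
            cy = 0.5 * x[t - 1] if label == 1 else 0.0
            x[t] = 0.5 * x[t - 1] + cx + rng.normal()
            y[t] = 0.5 * y[t - 1] + cy + rng.normal()
        return x, y

    def features(x, y, max_lag=3):
        """Lagged cross-correlations in both directions as a cheap summary of the pair."""
        f = []
        for lag in range(1, max_lag + 1):
            f.append(np.corrcoef(x[:-lag], y[lag:])[0, 1])   # past x vs future y
            f.append(np.corrcoef(y[:-lag], x[lag:])[0, 1])   # past y vs future x
        return f

    # Train the classifier on simulated pairs with known causal pattern...
    labels = rng.integers(0, 3, size=600)
    feats = np.array([features(*simulate(int(c))) for c in labels])
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(feats, labels)

    # ...then read off the causal pattern of previously unseen recordings.
    test_labels = rng.integers(0, 3, size=100)
    test_feats = np.array([features(*simulate(int(c))) for c in test_labels])
    print("held-out accuracy:", clf.score(test_feats, test_labels))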