
    Impact of noise on a dynamical system: prediction and uncertainties from a swarm-optimized neural network

    In this study, an artificial neural network (ANN) based on particle swarm optimization (PSO) was developed for time series prediction. The hybrid ANN+PSO algorithm was applied to the Mackey--Glass chaotic time series for short-term prediction, x(t+6). The prediction performance was evaluated and compared with other studies available in the literature. We also presented properties of the dynamical system via the study of the chaotic behaviour obtained from the predicted time series. Next, the hybrid ANN+PSO algorithm was complemented with a Gaussian stochastic procedure (called the stochastic hybrid ANN+PSO) in order to obtain a new estimator of the predictions, which also allowed us to compute prediction uncertainties for noisy Mackey--Glass chaotic time series. Thus, we studied the impact of noise for several cases with a white-noise level σ_N ranging from 0.01 to 0.1.
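
    A minimal, hypothetical sketch of the hybrid ANN+PSO idea is given below: a Mackey--Glass series is generated by Euler integration, corrupted with white noise, and a small feed-forward network predicting x(t+6) is fitted by a plain global-best PSO. The network size, PSO constants, lag structure, and the noise level σ_N = 0.05 are illustrative assumptions, not the settings used in the paper.

```python
# Hypothetical sketch: PSO-trained feed-forward network predicting x(t+6)
# of a noisy Mackey--Glass series. Hyperparameters are illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def mackey_glass(n_steps=1500, tau=17, beta=0.2, gamma=0.1, n=10, dt=1.0):
    """Euler integration of the Mackey-Glass delay differential equation."""
    x = np.zeros(n_steps + tau)
    x[:tau] = 1.2                                  # constant history
    for t in range(tau, n_steps + tau - 1):
        x[t + 1] = x[t] + dt * (beta * x[t - tau] / (1 + x[t - tau] ** n) - gamma * x[t])
    return x[tau:]

def make_dataset(x, lags=(0, 6, 12, 18), horizon=6):
    """Inputs are lagged samples x(t-l); the target is x(t + horizon)."""
    start, stop = max(lags), len(x) - horizon
    X = np.stack([x[start - l:stop - l] for l in lags], axis=1)
    y = x[start + horizon:stop + horizon]
    return X, y

def predict(w, X, n_hidden=10):
    """One tanh hidden layer; w is a flat parameter vector."""
    n_in = X.shape[1]
    i = 0
    W1 = w[i:i + n_in * n_hidden].reshape(n_in, n_hidden); i += n_in * n_hidden
    b1 = w[i:i + n_hidden]; i += n_hidden
    W2 = w[i:i + n_hidden]; i += n_hidden
    b2 = w[i]
    return np.tanh(X @ W1 + b1) @ W2 + b2

def pso_fit(X, y, n_hidden=10, n_particles=30, iters=200, w_in=0.7, c1=1.5, c2=1.5):
    """Plain global-best PSO minimizing the mean squared prediction error."""
    dim = X.shape[1] * n_hidden + n_hidden + n_hidden + 1
    pos = rng.uniform(-1, 1, (n_particles, dim))
    vel = np.zeros_like(pos)
    cost = np.array([np.mean((predict(p, X, n_hidden) - y) ** 2) for p in pos])
    pbest, pbest_cost = pos.copy(), cost.copy()
    gbest = pbest[np.argmin(pbest_cost)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w_in * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        cost = np.array([np.mean((predict(p, X, n_hidden) - y) ** 2) for p in pos])
        better = cost < pbest_cost
        pbest[better], pbest_cost[better] = pos[better], cost[better]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return gbest

x = mackey_glass()
x_noisy = x + rng.normal(0.0, 0.05, size=x.shape)   # white noise, assumed sigma_N = 0.05
X, y = make_dataset(x_noisy)
split = int(0.8 * len(y))
w_best = pso_fit(X[:split], y[:split])
rmse = np.sqrt(np.mean((predict(w_best, X[split:]) - y[split:]) ** 2))
print(f"test RMSE on noisy series: {rmse:.4f}")
```

    Repeating such a fit over many noise realizations, in the spirit of the stochastic variant described above, would yield a distribution of predictions from which uncertainties could be estimated.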

    Learning to Discover Sparse Graphical Models

    We consider structure discovery of undirected graphical models from observational data. Inferring likely structures from few examples is a complex task that often requires the formulation of priors and sophisticated inference procedures. Popular methods rely on estimating a penalized maximum likelihood of the precision matrix. However, in these approaches structure recovery is an indirect consequence of the data-fit term, the penalty can be difficult to adapt for domain-specific knowledge, and the inference is computationally demanding. By contrast, it may be easier to generate training samples of data that arise from graphs with the desired structural properties. We propose here to leverage this latter source of information as training data to learn a function, parametrized by a neural network, that maps empirical covariance matrices to estimated graph structures. Learning this function brings two benefits: it implicitly models the desired structure or sparsity properties to form suitable priors, and it can be tailored to the specific problem of edge structure discovery, rather than maximizing data likelihood. Applying this framework, we find that our learnable graph-discovery method trained on synthetic data generalizes well, identifying relevant edges in both synthetic and real data that were completely unknown at training time. On genetics, brain imaging, and simulation data, we obtain performance generally superior to analytical methods.
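
    As a minimal, hypothetical sketch of this idea (not the paper's actual architecture), the snippet below trains a small multilayer perceptron on synthetic sparse Gaussian graphical models so that it maps a vectorized empirical correlation matrix to per-edge probabilities; the graph generator, network shape, dimensions, and training length are all illustrative assumptions.

```python
# Hypothetical sketch: learn a mapping from empirical correlation matrices to
# edge probabilities, trained purely on synthetic sparse graphical models.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)
D, N_SAMPLES = 8, 50                      # variables per graph, samples per dataset
IU = np.triu_indices(D, k=1)              # candidate edge (i, j) index pairs

def sample_problem(edge_prob=0.2):
    """Draw a sparse precision matrix, sample data, return (empirical corr, edge labels)."""
    adj = np.triu((rng.random((D, D)) < edge_prob).astype(float), 1)
    weights = adj * rng.uniform(0.5, 1.0, (D, D)) * rng.choice([-1, 1], (D, D))
    prec = np.eye(D) + weights + weights.T
    prec += np.diag(np.abs(prec).sum(1))  # diagonal dominance -> positive definite
    cov = np.linalg.inv(prec)
    X = rng.multivariate_normal(np.zeros(D), cov, size=N_SAMPLES)
    emp_corr = np.corrcoef(X, rowvar=False)
    return emp_corr[IU], adj[IU]          # per-edge features and 0/1 labels

class EdgeNet(nn.Module):
    """MLP from the vectorized empirical correlation matrix to per-edge logits."""
    def __init__(self, n_pairs):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_pairs, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_pairs),
        )
    def forward(self, x):
        return self.net(x)

n_pairs = len(IU[0])
model = EdgeNet(n_pairs)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(500):                   # each step draws a fresh synthetic batch
    feats, labels = zip(*[sample_problem() for _ in range(32)])
    x = torch.tensor(np.stack(feats), dtype=torch.float32)
    y = torch.tensor(np.stack(labels), dtype=torch.float32)
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

# at test time, threshold the predicted edge probabilities to recover a graph estimate
feats, labels = sample_problem()
probs = torch.sigmoid(model(torch.tensor(feats, dtype=torch.float32))).detach().numpy()
print("recovered edges:", (probs > 0.5).astype(int))
print("true edges:     ", labels.astype(int))
```

    Because every training example is synthesized, arbitrarily many (covariance, graph) pairs can be generated with the desired sparsity properties, which is the source of information the abstract proposes to exploit.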