
    Approximating predictive probabilities of Gibbs-type priors

    Gibbs-type random probability measures, or Gibbs-type priors, are arguably the most "natural" generalization of the celebrated Dirichlet prior. Among them, the two-parameter Poisson-Dirichlet prior certainly stands out for the mathematical tractability and interpretability of its predictive probabilities, which have made it the natural candidate in several applications. Given a sample of size n, in this paper we show that the predictive probabilities of any Gibbs-type prior admit a large-n approximation, with an error term vanishing as o(1/n), which maintains the same desirable features as the predictive probabilities of the two-parameter Poisson-Dirichlet prior.
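    The two-parameter Poisson-Dirichlet predictive rule the abstract refers to can be sketched directly. The function name and parameter values below are illustrative assumptions, and the formula is the standard Pitman-Yor predictive rule, not anything specific to this paper's approximation.

    ```python
    # Standard two-parameter Poisson-Dirichlet (Pitman-Yor) predictive rule:
    # given n observations with k distinct values having counts n_j, a new
    # observation takes a previously unseen value with probability
    # (theta + sigma*k)/(theta + n), and equals seen value j with probability
    # (n_j - sigma)/(theta + n). Names and parameter values are illustrative.
    def py_predictive(counts, sigma=0.5, theta=1.0):
        """Return (p_new, [p_j for each seen value]) under a PY(sigma, theta) prior."""
        n = sum(counts)
        k = len(counts)
        p_new = (theta + sigma * k) / (theta + n)
        p_old = [(c - sigma) / (theta + n) for c in counts]
        return p_new, p_old

    # Example: three distinct values observed with counts 3, 1, 1 (n = 5).
    p_new, p_old = py_predictive([3, 1, 1], sigma=0.25, theta=1.0)
    # The new-value and seen-value probabilities form a proper distribution.
    assert abs(p_new + sum(p_old) - 1.0) < 1e-12
    ```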

    Prediction of disease progression, treatment response and dropout in chronic obstructive pulmonary disease (COPD).

    Drug development in chronic obstructive pulmonary disease (COPD) has been characterised by unacceptably high failure rates. In addition to the poor sensitivity of forced expiratory volume in one second (FEV1), numerous causes are known to contribute to this phenomenon; they can be clustered into drug-, disease- and design-related factors. Here we present a model-based approach to describing disease progression, treatment response and dropout in clinical trials with COPD patients.

    The Neural Particle Filter

    The robust estimation of dynamically changing features, such as the position of prey, is one of the hallmarks of perception. On an abstract, algorithmic level, nonlinear Bayesian filtering, i.e. the estimation of temporally changing signals based on the history of observations, provides a mathematical framework for dynamic perception in real time. Since the general, nonlinear filtering problem is analytically intractable, particle filters are considered among the most powerful approaches to approximating the solution numerically. Yet, these algorithms prevalently rely on importance weights, and thus it remains an unresolved question how the brain could implement such an inference strategy with a neuronal population. Here, we propose the Neural Particle Filter (NPF), a weight-less particle filter that can be interpreted as the neuronal dynamics of a recurrently connected neural network that receives feed-forward input from sensory neurons and represents the posterior probability distribution in terms of samples. Specifically, this algorithm bridges the gap between the computational task of online state estimation and an implementation that allows networks of neurons in the brain to perform nonlinear Bayesian filtering. The model captures not only the properties of temporal and multisensory integration according to Bayesian statistics, but also allows online learning with a maximum likelihood approach. With an example from multisensory integration, we demonstrate that the numerical performance of the model is adequate to account for both filtering and identification problems. Due to the weight-less approach, our algorithm alleviates the 'curse of dimensionality' and thus outperforms conventional, weighted particle filters in higher dimensions for a limited number of particles.
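    As a rough illustration of the weight-less idea (not the paper's exact NPF equations), a one-dimensional, discrete-time caricature replaces importance weights with a feedback term that pulls every equally weighted particle toward the current observation; the fixed gain, the dynamics, and all names below are assumptions for the sketch.

    ```python
    import random

    # Hedged sketch: each particle follows the (assumed linear) signal dynamics
    # plus a feedback term toward the observation; no importance weights are
    # ever computed, so the posterior mean is a plain average over particles.
    # The gain value and all names are illustrative, not the paper's NPF.
    def weightless_filter_step(particles, y, a=0.9, gain=0.3, noise_std=0.1):
        """One update of every (equally weighted) particle given observation y."""
        return [a * x + gain * (y - a * x) + random.gauss(0.0, noise_std)
                for x in particles]

    random.seed(0)
    particles = [random.gauss(0.0, 1.0) for _ in range(500)]
    for y in [1.0, 1.1, 0.9, 1.0, 1.05]:       # noisy observations of a level near 1
        particles = weightless_filter_step(particles, y)
    estimate = sum(particles) / len(particles)  # posterior mean = plain average
    ```

    Because the particles stay unweighted, the ensemble never degenerates onto a few high-weight samples, which is the mechanism behind the abstract's claim about the curse of dimensionality.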

    Estimating Discrete Markov Models From Various Incomplete Data Schemes

    The parameters of a discrete stationary Markov model are transition probabilities between states. Traditionally, data consist of sequences of observed states for a given number of individuals over the whole observation period. In such a case, the estimation of transition probabilities is straightforwardly made by counting one-step moves from a given state to another. In many real-life problems, however, the inference is much more difficult as state sequences are not fully observed, namely the state of each individual is known only for some given values of the time variable. A review of the problem is given, focusing on Markov chain Monte Carlo (MCMC) algorithms to perform Bayesian inference and evaluate posterior distributions of the transition probabilities in this missing-data framework. Leaning on the dependence between the rows of the transition matrix, an adaptive MCMC mechanism accelerating the classical Metropolis-Hastings algorithm is then proposed and empirically studied. Accepted 20 February 2012 for publication in Computational Statistics and Data Analysis.
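    The fully observed case described above, where estimation reduces to counting one-step moves, can be sketched as follows (function and variable names are illustrative):

    ```python
    # Estimate a transition matrix from fully observed state sequences by
    # counting one-step moves and row-normalizing. States are integers 0..n-1.
    def estimate_transition_matrix(sequences, n_states):
        """Row-normalized one-step move counts; never-visited rows stay all zero."""
        counts = [[0] * n_states for _ in range(n_states)]
        for seq in sequences:
            for a, b in zip(seq, seq[1:]):
                counts[a][b] += 1
        P = []
        for row in counts:
            total = sum(row)
            P.append([c / total for c in row] if total else [0.0] * n_states)
        return P

    # Two short observed sequences over states {0, 1}.
    P = estimate_transition_matrix([[0, 1, 1, 0, 1], [1, 0, 0, 1]], n_states=2)
    ```

    The missing-data setting the abstract actually targets is exactly the case where this direct count is unavailable, which is why MCMC over the unobserved intermediate states is needed.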

    A statistical analysis of multiple temperature proxies: Are reconstructions of surface temperatures over the last 1000 years reliable?

    Predicting historic temperatures based on tree rings, ice cores, and other natural proxies is a difficult endeavor. The relationship between proxies and temperature is weak and the number of proxies is far larger than the number of target data points. Furthermore, the data contain complex spatial and temporal dependence structures which are not easily captured with simple models. In this paper, we assess the reliability of such reconstructions and their statistical significance against various null models. We find that the proxies do not predict temperature significantly better than random series generated independently of temperature. Furthermore, various model specifications that perform similarly at predicting temperature produce extremely different historical backcasts. Finally, the proxies seem unable to forecast the high levels of and sharp run-up in temperature in the 1990s either in-sample or from contiguous holdout blocks, thus casting doubt on their ability to predict such phenomena if in fact they occurred several hundred years ago. We propose our own reconstruction of Northern Hemisphere average annual land temperature over the last millennium, assess its reliability, and compare it to those from the climate science literature. Our model provides a similar reconstruction but has much wider standard errors, reflecting the weak signal and large uncertainty encountered in this setting. Published in the Annals of Applied Statistics (http://dx.doi.org/10.1214/10-AOAS398) by the Institute of Mathematical Statistics.
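    The null-model comparison described above can be illustrated with a toy stand-in: fit a one-variable least-squares predictor from a synthetic "proxy" and from an independent random series, then compare errors on a contiguous holdout block. Everything below (the data, the names, the one-predictor model) is an illustrative assumption, far simpler than the paper's actual models and proxy networks.

    ```python
    import random

    # Ordinary least squares for a single predictor (intercept, slope).
    def ols_fit(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
        return my - b * mx, b

    # Fit on the first `split` points, report RMSE on the contiguous holdout.
    def holdout_rmse(x, y, split):
        a, b = ols_fit(x[:split], y[:split])
        errs = [(a + b * xi - yi) ** 2 for xi, yi in zip(x[split:], y[split:])]
        return (sum(errs) / len(errs)) ** 0.5

    random.seed(1)
    temp = [0.01 * t + random.gauss(0, 0.2) for t in range(200)]  # synthetic "temperature"
    proxy = [ti + random.gauss(0, 0.3) for ti in temp]            # noisy but informative proxy
    noise = [random.gauss(0, 1) for _ in temp]                    # series independent of temp
    rmse_proxy = holdout_rmse(proxy, temp, split=150)
    rmse_null = holdout_rmse(noise, temp, split=150)
    ```

    In this toy setup the informative proxy beats the independent series out of sample; the paper's finding is that for the real proxy data the analogous comparison fails.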

    Predictive information and error processing: the role of medial-frontal cortex during motor control

    We have recently provided evidence that an error-related negativity (ERN), an ERP component generated within medial-frontal cortex, is elicited by errors made during the performance of a continuous tracking task (O.E. Krigolson & C.B. Holroyd, 2006). In the present study we conducted two experiments to investigate the ability of the medial-frontal error system to evaluate predictive error information. In both experiments participants used a joystick to perform a computer-based continuous tracking task in which some tracking errors were inevitable, and half of these errors were preceded by a predictive cue. The results of both experiments indicated that an ERN-like waveform was elicited by tracking errors. Furthermore, in both experiments the predicted error waveforms had an earlier peak latency than the unpredicted error waveforms. These results demonstrate that the medial-frontal error system can evaluate predictive error information.

    Principal component analysis for second-order stationary vector time series

    We extend principal component analysis (PCA) to second-order stationary vector time series in the sense that we seek a contemporaneous linear transformation for a p-variate time series such that the transformed series is segmented into several lower-dimensional subseries, and those subseries are uncorrelated with each other both contemporaneously and serially. Therefore those lower-dimensional series can be analysed separately as far as the linear dynamic structure is concerned. Technically it boils down to an eigenanalysis for a positive definite matrix. When p is large, an additional step is required to perform a permutation in terms of either maximum cross-correlations or FDR based on multiple tests. The asymptotic theory is established for both fixed p and diverging p when the sample size n tends to infinity. Numerical experiments with both simulated and real data sets indicate that the proposed method is an effective initial step in analysing multiple time series data, which leads to substantial dimension reduction in modelling and forecasting high-dimensional linear dynamical structures. Unlike PCA for independent data, there is no guarantee that the required linear transformation exists. When it does not, the proposed method provides an approximate segmentation which leads to advantages in, for example, forecasting future values. The method can also be adapted to segment multiple volatility processes. The original title (October 2014) was "Segmenting Multiple Time Series by Contemporaneous Linear Transformation: PCA for Time Series".
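    The eigenanalysis step described above can be sketched as follows, assuming the positive semi-definite matrix is accumulated from lagged sample autocovariances; the lag range and all names are illustrative, and the paper's additional permutation step for large p is omitted.

    ```python
    import numpy as np

    # Hedged sketch: build a positive semi-definite matrix W from lagged sample
    # autocovariances S_k (each S_k @ S_k.T is PSD), then use the eigenvectors
    # of W as the contemporaneous linear transformation. Lag range and names
    # are illustrative assumptions.
    def tspca_transform(X, max_lag=5):
        """X: (n, p) array; returns the (n, p) transformed series and the p x p loadings."""
        Xc = X - X.mean(axis=0)
        n, p = Xc.shape
        W = np.zeros((p, p))
        for k in range(max_lag + 1):
            S_k = Xc[k:].T @ Xc[:n - k] / n   # lag-k sample autocovariance (p x p)
            W += S_k @ S_k.T                  # accumulate a PSD matrix
        _, vecs = np.linalg.eigh(W)           # orthonormal eigenvectors of symmetric W
        B = vecs.T                            # rows are the transformation directions
        return Xc @ B.T, B

    rng = np.random.default_rng(0)
    X = rng.standard_normal((300, 4))         # toy 4-variate series, n = 300
    Y, B = tspca_transform(X)
    ```

    Because B comes from an eigendecomposition of a symmetric matrix, it is orthogonal, so the transformation is invertible and no information is lost before the segmentation step.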