14 research outputs found

    Advances in point process modeling: feature selection, goodness-of-fit and novel applications

    Full text link
    The research contained in this thesis extends multivariate marked point process modeling methods for neuroscience, generalizes goodness-of-fit techniques to the class of marked point processes, and introduces a general history-dependent point process model to the domain of sleep apnea. Our first project further develops a modeling tool for spiking data from neural populations based on the theory of marked point processes. This marked point process model uses features of spike waveforms as marks in order to estimate a state variable of interest. We examine the informational content of geometric waveform features as well as principal components of the waveforms of hippocampal place cells by comparing decoding accuracies for a rat's position along a track, and find that there is additional information available beyond that contained in the geometric features traditionally used for decoding. The expanded use of this marked point process model in neuroscience necessitates corresponding goodness-of-fit protocols for the marked case. In our second project, we develop a generalized time-rescaling method for marked point processes that produces uniformly distributed rescaled event times under a properly specified model. Once rescaled, the ground process behaves as a Poisson process and can be analyzed using traditional point process goodness-of-fit methods. We demonstrate the method's ability to detect the quality and manner of fit through both simulation and analysis of real neural data. In the final project, we introduce history-dependent point process modeling as a method for characterizing severe sleep apnea that improves on the current clinical standard, the apnea-hypopnea index (AHI). We analyze model fits using combinations of clinical covariates and functions of the history of the apnea events themselves. Apnea onset times are consistently estimated with significantly higher accuracy when history is incorporated alongside sleep stage. We present this method to the clinical audience as a means to gain detailed information on patterns of apnea and to support more customized diagnoses and treatment prescriptions. These separate yet complementary projects extend existing point process modeling methods and further demonstrate their value in the neurosciences, sleep sciences, and beyond.
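
    As a minimal illustration of the time-rescaling idea underlying the second project, the sketch below applies the classical (unmarked) time-rescaling theorem: integrating the conditional intensity between successive events yields Exponential(1) intervals under a correct model, which can then be checked with a Kolmogorov-Smirnov test. This is not the thesis's marked-process generalization; the intensity, simulated event times, and function names are illustrative placeholders.

```python
import numpy as np
from scipy.stats import kstest

def rescaled_intervals(event_times, intensity_fn, t0=0.0, dt=1e-3):
    """Integrate the conditional intensity between successive events.

    Time-rescaling theorem: under a correctly specified model, the integrals
    z_k = Lambda(t_k) - Lambda(t_{k-1}) are i.i.d. Exponential(1).
    """
    grid = np.arange(t0, event_times[-1] + dt, dt)
    cum = np.cumsum(intensity_fn(grid)) * dt        # approximate Lambda(t) on a grid
    Lambda = np.interp(event_times, grid, cum)      # Lambda evaluated at each event
    return np.diff(np.concatenate(([0.0], Lambda)))

# Events simulated from a homogeneous Poisson process with rate 5 Hz, then tested
# against the true intensity: the KS statistic should be small (no lack of fit).
rng = np.random.default_rng(0)
events = np.cumsum(rng.exponential(1 / 5.0, size=200))
z = rescaled_intervals(events, lambda t: np.full_like(t, 5.0))
u = 1.0 - np.exp(-z)                                # Exp(1) intervals -> Uniform(0, 1)
print(kstest(u, "uniform"))
```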

    Probabilistic machine learning and artificial intelligence.

    Get PDF
    How can a machine learn from experience? Probabilistic modelling provides a framework for understanding what learning is, and has therefore emerged as one of the principal theoretical and practical approaches for designing machines that learn from data acquired through experience. The probabilistic framework, which describes how to represent and manipulate uncertainty about models and predictions, has a central role in scientific data analysis, machine learning, robotics, cognitive science and artificial intelligence. This Review provides an introduction to this framework, and discusses some of the state-of-the-art advances in the field, namely, probabilistic programming, Bayesian optimization, data compression and automatic model discovery.
    The author acknowledges an EPSRC grant EP/I036575/1, the DARPA PPAML programme, a Google Focused Research Award for the Automatic Statistician and support from Microsoft Research. This is the author accepted manuscript. The final version is available from NPG at http://www.nature.com/nature/journal/v521/n7553/full/nature14541.html#abstract
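
    As a toy illustration of the probabilistic framework the Review describes, the sketch below represents uncertainty about an unknown quantity as a distribution and updates it with data. The conjugate Beta-Bernoulli model and all numbers are illustrative choices, not taken from the Review.

```python
import numpy as np

rng = np.random.default_rng(1)
true_rate = 0.3                        # unknown probability of "success"
alpha, beta = 1.0, 1.0                 # uniform Beta(1, 1) prior

for n in (10, 100, 1000):
    data = rng.random(n) < true_rate   # n Bernoulli observations
    a = alpha + data.sum()             # posterior is Beta(a, b) by conjugacy
    b = beta + n - data.sum()
    mean = a / (a + b)
    sd = np.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
    print(f"n={n:4d}  posterior mean={mean:.3f}  posterior sd={sd:.3f}")

# The posterior standard deviation shrinks as more data arrive, while the mean
# concentrates around the true rate: learning from experience, with calibrated
# uncertainty at every stage.
```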

    Probabilistic models for neural populations that naturally capture global coupling and criticality

    Get PDF
    Advances in multi-unit recordings pave the way for statistical modeling of activity patterns in large neural populations. Recent studies have shown that the summed activity of all neurons strongly shapes the population response. A separate recent finding has been that neural populations also exhibit criticality, an anomalously large dynamic range for the probabilities of different population activity patterns. Motivated by these two observations, we introduce a class of probabilistic models which takes into account the prior knowledge that the neural population could be globally coupled and close to critical. These models consist of an energy function which parametrizes interactions between small groups of neurons, and an arbitrary positive, strictly increasing, and twice differentiable function which maps the energy of a population pattern to its probability. We show that: 1) augmenting a pairwise Ising model with a nonlinearity yields an accurate description of the activity of retinal ganglion cells that outperforms previous models based on the summed activity of neurons; 2) prior knowledge that the population is critical translates to prior expectations about the shape of the nonlinearity; 3) the nonlinearity admits an interpretation in terms of a continuous latent variable globally coupling the system, whose distribution we can infer from data. Our method is independent of the underlying system’s state space; hence, it can be applied to other systems such as natural scenes or amino acid sequences of proteins, which are also known to exhibit criticality.
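
    The sketch below is a minimal, illustrative instance of the model class described above: a pairwise Ising energy passed through a monotone nonlinearity before exponentiation, with the identity nonlinearity recovering the standard pairwise Ising model. The parameters, the particular nonlinearity, and the population size are placeholders, not fitted values from the paper.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
N = 8                                    # small population so all 2^N patterns can be enumerated
h = rng.normal(0.0, 0.5, size=N)         # single-neuron biases
J = np.triu(rng.normal(0.0, 0.2, size=(N, N)), 1)   # pairwise couplings (i < j)

def energy(x):
    return -(h @ x) - x @ J @ x

def V(E):
    # Illustrative smooth, strictly increasing nonlinearity (V'(E) = 1 + tanh(E) > 0);
    # the identity V(E) = E recovers the standard pairwise Ising model.
    return E + np.log(np.cosh(E))

patterns = np.array(list(itertools.product([0, 1], repeat=N)))
logp = np.array([-V(energy(x)) for x in patterns])
p = np.exp(logp - logp.max())
p /= p.sum()                             # normalize over all 2^N patterns

# Distribution of the summed population activity K = sum_i x_i under the model,
# the quantity that globally coupled descriptions of population activity emphasize.
K = patterns.sum(axis=1)
print(np.round([p[K == k].sum() for k in range(N + 1)], 4))
```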

    The statistical physics of discovering exogenous and endogenous factors in a chain of events

    Full text link
    Event occurrence is not only subject to environmental changes but is also facilitated by events that have previously occurred in the system. Here, we develop a method for estimating such exogenous and endogenous factors from a single series of event-occurrence times. The analysis is performed using a model that combines the inhomogeneous Poisson process and the Hawkes process, which represent exogenous fluctuations and endogenous chain-reaction mechanisms, respectively. The model is fitted to a given dataset by minimizing the free energy, for which statistical physics and a path-integral method are utilized. Because the process of event occurrence is stochastic, parameter estimation is inevitably accompanied by errors, and in some cases the exogenous and endogenous factors cannot be recovered even with the best estimator. We identify four regimes, categorized according to whether each factor is detected. By applying the method to real time series of debate on a social-networking service, we observe that the estimated exogenous and endogenous factors correspond closely to the initial comments and the follow-up comments, respectively. The method is general and applicable to a variety of data, and we provide an application program with which anyone can analyze a series of event times.
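
    A minimal sketch of the combined model described above, assuming an exponential self-excitation kernel: the intensity is the sum of an exogenous inhomogeneous Poisson rate nu(t) and an endogenous Hawkes term driven by past events, simulated here with Ogata's thinning algorithm. The rate function, kernel, and parameter values are illustrative, not those used in the paper.

```python
import numpy as np

def intensity(t, events, nu, alpha, beta):
    """Exogenous rate nu(t) plus endogenous self-excitation from past events."""
    past = events[events < t]
    return nu(t) + alpha * np.sum(np.exp(-beta * (t - past)))

def simulate(T, nu, nu_max, alpha, beta, seed=0):
    """Ogata thinning: propose with an upper bound on the intensity, then accept/reject."""
    rng = np.random.default_rng(seed)
    events, t = np.array([]), 0.0
    while t < T:
        # The intensity can only decay until the next event, so this bounds it from above.
        lam_bar = nu_max + alpha * np.sum(np.exp(-beta * (t - events)))
        t += rng.exponential(1.0 / lam_bar)
        if t < T and rng.random() < intensity(t, events, nu, alpha, beta) / lam_bar:
            events = np.append(events, t)
    return events

# Exogenous part: a slow oscillation in the background rate. Endogenous part: each
# event transiently raises the rate, producing chain reactions; alpha / beta < 1
# keeps the cascade subcritical.
nu = lambda t: 0.5 + 0.4 * np.sin(2 * np.pi * t / 50.0)
events = simulate(T=200.0, nu=nu, nu_max=0.9, alpha=0.6, beta=1.0)
print(f"{len(events)} events, branching ratio alpha/beta = {0.6 / 1.0:.1f}")
```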

    A new Mathematical Framework to Understand Single Neuron Computations

    Get PDF
    An important feature of the nervous system is its ability to adapt to new stimuli. This adaptation allows for optimal encoding of incoming information by dynamically changing the coding strategy based on the inputs the neuron receives. At the level of single cells, this widespread phenomenon is often referred to as spike-frequency adaptation, since it manifests as a history-dependent modulation of the neuron's firing frequency. In this thesis I focus on how a neuron is able to adapt its activity to a specific input, as well as on the function of such adaptive mechanisms. To study these adaptive processes, different approaches have been used, from empirical observations of neural activity to detailed modeling of single cells. Here, I approach these problems using simplified threshold models. In particular, I introduce a new generalization of the integrate-and-fire model (GIF) along with a convex fitting method that allows efficient estimation of the model parameters. Despite its relative simplicity, I show that this neuron model is able to reproduce neuronal behavior with a high degree of accuracy. Moreover, using this method I show that cortical neurons are equipped with two distinct adaptation mechanisms. The first is a spike-triggered current that captures the complex influx of ions generated after the emission of a spike. The second is a movement of the firing threshold, which possibly reflects the slow inactivation of sodium channels induced by spiking activity. The precise dynamics of these adaptation processes are cell-type specific, explaining the differences in firing activity reported across neuron types; consequently, neuronal types can be classified based on the model parameters. In pyramidal neurons, spike-dependent adaptation lasts for seconds and follows a scale-free dynamics that is optimally tuned to encode the natural inputs pyramidal neurons receive in vivo. Finally, using an extended version of the GIF model, I show that adaptation is not only a spike-dependent phenomenon but also acts at the subthreshold level. In pyramidal neurons, the dynamics of the firing threshold is influenced by the subthreshold membrane potential. Spike-dependent and voltage-dependent adaptation interact in an activity-dependent way to ultimately shape the filtering properties of the membrane as a function of the input statistics. Equipped with such a mechanism, pyramidal neurons behave as integrators at low inputs and as coincidence detectors at high inputs, maintaining sensitivity to input fluctuations across all regimes.
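
    In the spirit of the generalized integrate-and-fire picture described above, the sketch below simulates a leaky integrate-and-fire neuron with the two adaptation mechanisms discussed: a spike-triggered adaptation current and a spike-triggered movement of the firing threshold. It is a generic illustration, not the thesis's fitted GIF model; all parameter values are placeholders.

```python
import numpy as np

def simulate_adaptive_lif(I, dt=0.1, tau_m=20.0, R=1.0, E_L=-70.0, V_reset=-65.0,
                          VT0=-50.0, tau_w=100.0, b_w=0.5, tau_theta=200.0, b_theta=0.5):
    """Leaky integrate-and-fire with a spike-triggered current w and a moving threshold theta."""
    V, w, theta = E_L, 0.0, VT0
    spike_times = []
    for k, I_k in enumerate(I):
        V += dt / tau_m * (E_L - V + R * (I_k - w))   # membrane: leak + input - adaptation current
        w += dt * (-w / tau_w)                        # adaptation current decays between spikes
        theta += dt * (-(theta - VT0) / tau_theta)    # threshold relaxes back to its baseline
        if V >= theta:                                # spike: reset and strengthen both mechanisms
            spike_times.append(k * dt)
            V = V_reset
            w += b_w
            theta += b_theta
    return np.array(spike_times)

# A constant step current: the inter-spike intervals lengthen as w and theta build up,
# which is the signature of spike-frequency adaptation described above.
I = np.full(20000, 30.0)                              # 2 s of constant input at dt = 0.1 ms
isi = np.diff(simulate_adaptive_lif(I))
print("first ISIs (ms):", np.round(isi[:3], 1))
print("last ISIs (ms): ", np.round(isi[-3:], 1))
```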

    Event impact analysis for time series

    Get PDF
    Time series arise in a variety of application domains, wherever data points are recorded over time and stored for subsequent analysis. A critical question is whether the occurrence of events like natural disasters, technical faults, or political interventions leads to changes in a time series, for example, temporary deviations from its typical behavior. The vast majority of existing research on this topic focuses on the specific impact of a single event on a time series, while methods to generically capture the impact of a recurring event are scarce. In this thesis, we fill this gap by introducing a novel framework for event impact analysis in the case of randomly recurring events. We develop a statistical perspective on the problem and provide a generic notion of event impacts based on a statistical independence relation. The main problem we address is that of establishing the presence of event impacts in stationary time series using statistical independence tests. Tests for event impacts should be generic, powerful, and computationally efficient. We develop two algorithmic test strategies for event impacts that satisfy these properties: the first is based on coincidences between events and peaks in the time series, while the second is based on multiple marginal associations. We also discuss a selection of follow-up questions, including ways to measure, model, and visualize event impacts, and the relationship between event impact analysis and anomaly detection in time series. Finally, we provide a first method to study event impacts in nonstationary time series. We evaluate our methodological contributions on several real-world datasets and study their performance within large-scale simulation studies.
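
    The sketch below illustrates the coincidence-based idea in a generic form: count how often an event is followed, within a short window, by a peak in the series, and compare that count with what random event placement would produce via a Monte-Carlo test. This is an illustrative stand-in, not the thesis's exact procedure; the peak definition, window length, and names are assumptions.

```python
import numpy as np

def coincidence_test(series, event_idx, window=5, n_sim=2000, seed=0):
    """Monte-Carlo test: are events followed by peaks more often than random placement?"""
    rng = np.random.default_rng(seed)
    # Peaks: local maxima above the 90th percentile of the series.
    thr = np.quantile(series, 0.9)
    peaks = np.where((series[1:-1] > series[:-2]) &
                     (series[1:-1] > series[2:]) &
                     (series[1:-1] > thr))[0] + 1

    def coincidences(ev):
        return sum(np.any((peaks >= e) & (peaks < e + window)) for e in ev)

    observed = coincidences(event_idx)
    null = [coincidences(rng.choice(len(series) - window, size=len(event_idx), replace=False))
            for _ in range(n_sim)]
    p_value = (1 + np.sum(np.array(null) >= observed)) / (1 + n_sim)
    return observed, p_value

# Synthetic example: every event produces a short bump in the series two steps later.
rng = np.random.default_rng(1)
x = rng.normal(size=2000)
events = np.sort(rng.choice(1900, size=40, replace=False))
x[events + 2] += 3.0                   # the event "impact"
print(coincidence_test(x, events))     # a small p-value indicates detected event impacts
```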