31 research outputs found

    Functional MRI data analysis: Detection, estimation and modelling

    Ph.D. (Doctor of Philosophy)

    A new class of wavelet networks for nonlinear system identification

    A new class of wavelet networks (WNs) is proposed for nonlinear system identification. In the new networks, the model structure for a high-dimensional system is chosen to be a superposition of a number of functions with fewer variables. By expanding each function using truncated wavelet decompositions, the multivariate nonlinear networks can be converted into linear-in-the-parameters regressions, which can be solved using least-squares-type methods. An efficient model term selection approach based upon a forward orthogonal least squares (OLS) algorithm and the error reduction ratio (ERR) is applied to solve the linear-in-the-parameters problem in the present study. The main advantage of the new WN is that it exploits the attractive features of multiscale wavelet decompositions and the capability of traditional neural networks. By adopting the analysis of variance (ANOVA) expansion, WNs can now handle nonlinear identification problems in high dimensions.
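
    For concreteness, here is a minimal sketch of forward OLS term selection driven by the error reduction ratio, applied to a generic linear-in-the-parameters regression. The function name forward_ols_err, the candidate matrix Phi, and all numerical settings are illustrative assumptions, not the paper's implementation.

    ```python
    import numpy as np

    def forward_ols_err(Phi, y, max_terms=10, err_tol=1e-4):
        """Greedy forward term selection by the error reduction ratio (ERR).
        Phi: (N, M) matrix of candidate regressors; y: (N,) target output."""
        _, M = Phi.shape
        selected, errs, Q = [], [], []
        sigma = float(y @ y)                       # total output energy
        for _ in range(min(max_terms, M)):
            best_err, best_j, best_w = 0.0, None, None
            for j in range(M):
                if j in selected:
                    continue
                w = Phi[:, j].astype(float).copy()
                for q in Q:                        # Gram-Schmidt against chosen terms
                    w -= (q @ Phi[:, j]) / (q @ q) * q
                denom = w @ w
                if denom < 1e-12:
                    continue
                g = (w @ y) / denom
                err = g * g * denom / sigma        # fraction of output energy explained
                if err > best_err:
                    best_err, best_j, best_w = err, j, w
            if best_j is None or best_err < err_tol:
                break
            selected.append(best_j)
            errs.append(best_err)
            Q.append(best_w)
        return selected, errs

    # toy check: the target depends on only two of six candidate terms
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 6))
    y = 2.0 * X[:, 1] - 0.5 * X[:, 4] + 0.01 * rng.standard_normal(200)
    print(forward_ols_err(X, y, max_terms=3))
    ```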

    Neural activity inspired asymmetric basis function TV-NARX model for the identification of time-varying dynamic systems

    Inspired by unique neuronal activities, a new time-varying nonlinear autoregressive with exogenous input (TV-NARX) model is proposed for modelling nonstationary processes. The NARX nonlinear process mimics the action potential initiation, and the time-varying parameters are approximated with a series of postsynaptic-current-like asymmetric basis functions to mimic the ion channels of inter-neuron propagation. In the model, the time-varying parameters of the process terms are sparsely represented as the superposition of a series of asymmetric alpha basis functions in an over-complete frame. Combining the alpha basis functions with the model process terms, the identification of the TV-NARX model from observed input and output data can equivalently be treated as the identification of a corresponding time-invariant system. The locally regularised orthogonal forward regression (LROFR) algorithm is then employed to detect the sparse model structure and estimate the associated coefficients. Excellent performance in both numerical studies and the modelling of real physiological signals shows that the TV-NARX model with asymmetric basis functions is more powerful and efficient than its symmetric counterparts in tracking smooth trends and capturing abrupt changes in the time-varying parameters.
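
    To illustrate the basis-expansion idea of turning time-varying identification into a time-invariant regression, here is a toy sketch using alpha-shaped basis functions. The simple TV-AR(1) system, the basis grid, and the use of ordinary least squares in place of LROFR are assumptions made for brevity.

    ```python
    import numpy as np

    def alpha_basis(t, tau, T):
        """Postsynaptic-current-like (alpha) basis function: zero before onset tau,
        then rising and decaying with time constant T."""
        s = (t - tau) / T
        return np.where(s > 0, s * np.exp(1.0 - s), 0.0)

    # toy time-varying AR(1) process: y(k) = a(k) y(k-1) + e(k)
    N = 500
    t = np.arange(N)
    rng = np.random.default_rng(1)
    a_true = 0.5 + 0.3 * np.sin(2 * np.pi * t / N)
    y = np.zeros(N)
    for k in range(1, N):
        y[k] = a_true[k] * y[k - 1] + 0.05 * rng.standard_normal()

    # over-complete frame of alpha functions with different onsets and time constants
    B = np.column_stack([alpha_basis(t, tau, T)
                         for tau in np.linspace(-50, N, 30)
                         for T in (20.0, 60.0, 150.0)])

    # expanding a(k) over the frame, y(k) = sum_i c_i [alpha_i(k) y(k-1)] + e(k),
    # so the time-varying problem becomes a time-invariant regression in c
    Phi = B[1:] * y[:-1, None]
    c, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
    a_hat = B @ c
    print("max |a_hat - a_true| after transient:", np.abs(a_hat[50:] - a_true[50:]).max())
    ```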

    Scaling Multidimensional Inference for Big Structured Data

    In information technology, big data is a collection of data sets so large and complex that it becomes difficult to process using traditional data processing applications [151]. In a world of increasing sensor modalities, cheaper storage, and more data-oriented questions, we are quickly passing the limits of tractable computation using traditional statistical analysis methods. Methods that often show great results on simple data have difficulty processing complicated multidimensional data. Accuracy alone can no longer justify unwarranted memory use and computational complexity. Improving the scaling properties of these methods for multidimensional data is the only way to keep them relevant. In this work we explore methods for improving the scaling properties of parametric and nonparametric models. Namely, we focus on the structure of the data to lower the complexity of a specific family of problems. The two types of structure considered in this work are distributed optimization with separable constraints (Chapters 2-3) and scaling Gaussian processes for multidimensional lattice input (Chapters 4-5). By improving the scaling of these methods, we can expand their use to a wide range of applications that were previously intractable and open the door to new research questions.
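
    One concrete instance of exploiting lattice structure is the Kronecker trick for Gaussian process regression on a Cartesian grid, where the full covariance factorises into small per-axis kernels. The sketch below is a generic illustration under that assumption (kernel choice, grid sizes, and noise level are made up), not code from the thesis.

    ```python
    import numpy as np

    def rbf(x, z, ell):
        """1-D squared-exponential kernel matrix."""
        d = x[:, None] - z[None, :]
        return np.exp(-0.5 * (d / ell) ** 2)

    # grid inputs: for a product kernel on an n1 x n2 grid, the full covariance is a
    # Kronecker product, so only the small per-axis kernels are ever formed
    x1, x2 = np.linspace(0, 1, 50), np.linspace(0, 1, 60)
    K1, K2 = rbf(x1, x1, 0.2), rbf(x2, x2, 0.1)
    lam1, Q1 = np.linalg.eigh(K1)          # per-axis eigendecompositions, O(n1^3 + n2^3)
    lam2, Q2 = np.linalg.eigh(K2)

    rng = np.random.default_rng(2)
    Y = rng.standard_normal((50, 60))      # observations on the grid
    noise = 1e-2

    # (K + noise*I)^{-1} y computed through the Kronecker eigenstructure,
    # without ever building the 3000 x 3000 covariance matrix
    A = Q1.T @ Y @ Q2
    A = A / (np.outer(lam1, lam2) + noise)
    alpha = Q1 @ A @ Q2.T                  # GP "weights", reshaped onto the grid
    print(alpha.shape)
    ```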

    Selective attention and speech processing in the cortex

    In noisy and complex environments, human listeners must segregate the mixture of sound sources arriving at their ears and selectively attend to a single source, thereby solving a computationally difficult problem known as the cocktail party problem. However, the neural mechanisms underlying these computations are still largely a mystery. Oscillatory synchronization of neuronal activity between cortical areas is thought to play a crucial role in facilitating information transmission between spatially separated populations of neurons, enabling the formation of functional networks. In this thesis, we seek to analyze and model the functional neuronal networks underlying attention to speech stimuli, and find that the Frontal Eye Fields play a central 'hub' role in the auditory spatial attention network in a cocktail party experiment. We use magnetoencephalography (MEG) to measure neural signals with high temporal precision while sampling from the whole cortex. However, several methodological issues arise when undertaking functional connectivity analysis with MEG data. Specifically, volume conduction of electrical and magnetic fields in the brain complicates interpretation of results. We compare several approaches through simulations and analyze the trade-offs among various measures of neural phase-locking in the presence of volume conduction. We use these insights to study functional networks in a cocktail party experiment. We then construct a linear dynamical system model of neural responses to ongoing speech. Using this model, we are able to correctly predict which of two speakers is being attended by a listener. We then apply this model to data from a task where people attended to stories with synchronous and scrambled videos of the speakers' faces, to explore how the presence of visual information modifies the underlying neuronal mechanisms of speech perception. This model allows us to probe neural processes as subjects listen to long stimuli, without the need for a trial-based experimental design. We model the neural activity with latent states, and model the neural noise spectrum and functional connectivity with multivariate autoregressive dynamics, along with impulse responses for external stimulus processing. We also develop a new regularized Expectation-Maximization (EM) algorithm to fit this model to electroencephalography (EEG) data.
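
    To make the state-space idea concrete, here is a toy linear dynamical system with a stimulus input and a Kalman-filter log-likelihood used to decide which of two candidate speech envelopes better explains a simulated recording. The model order, parameter values, and likelihood-comparison decoding rule are illustrative assumptions, not the thesis's fitted MEG/EEG model.

    ```python
    import numpy as np

    def kalman_loglik(y, u, A, B, C, Q, R, x0, P0):
        """Log-likelihood of scalar observations y under the linear dynamical system
        x_{t+1} = A x_t + B u_t + w_t,  y_t = C x_t + v_t  (w ~ N(0,Q), v ~ N(0,R))."""
        x, P = x0.copy(), P0.copy()
        loglik = 0.0
        for t in range(len(y)):
            x = A @ x + B * u[t]            # predict, using the known stimulus u
            P = A @ P @ A.T + Q
            S = C @ P @ C + R               # innovation variance (scalar observation)
            innov = y[t] - C @ x
            loglik += -0.5 * (np.log(2 * np.pi * S) + innov ** 2 / S)
            K = P @ C / S
            x = x + K * innov
            P = P - np.outer(K, C @ P)
        return float(loglik)

    # toy "attention decoding": simulate a response driven by envelope u1, then ask
    # which of two candidate envelopes explains the recording better
    rng = np.random.default_rng(3)
    T = 400
    u1, u2 = rng.standard_normal(T), rng.standard_normal(T)
    A = np.array([[0.9, 0.1], [0.0, 0.8]])
    B = np.array([1.0, 0.5])
    C = np.array([1.0, -1.0])
    Q, R = 0.05 * np.eye(2), 0.1
    x = np.zeros(2)
    y = np.zeros(T)
    for t in range(T):
        x = A @ x + B * u1[t] + rng.multivariate_normal(np.zeros(2), Q)
        y[t] = C @ x + np.sqrt(R) * rng.standard_normal()

    x0, P0 = np.zeros(2), np.eye(2)
    ll1 = kalman_loglik(y, u1, A, B, C, Q, R, x0, P0)
    ll2 = kalman_loglik(y, u2, A, B, C, Q, R, x0, P0)
    print("attended speaker:", 1 if ll1 > ll2 else 2)
    ```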

    Learning Robust Sequence Features via Dynamic Temporal Pattern Discovery

    As a major type of data, time series possess invaluable latent knowledge for describing the real world and human society. In order to improve the ability of intelligent systems to understand the world and people, it is critical to design sophisticated machine learning algorithms for extracting robust time series features from such latent knowledge. Motivated by the successful applications of deep learning in computer vision, more and more machine learning researchers have turned their attention to applying deep learning techniques to time series data. However, directly employing current deep models in most time series domains can be problematic. A major reason is that the temporal pattern types current deep models target are very limited and cannot meet the requirement of modeling the different underlying patterns of data coming from various sources. In this study we address this problem by designing different network structures explicitly based on specific domain knowledge, so that we can extract features via the most salient temporal patterns. More specifically, we mainly focus on two types of temporal patterns: order patterns and frequency patterns. For order patterns, which are usually related to brain and human activities, we design a hashing-based neural network layer to globally encode the ordinal pattern information into the resultant features. It is further generalized into a specially designed Recurrent Neural Network (RNN) cell which can learn order patterns in an online fashion. On the other hand, we believe audio-related data such as music and speech can benefit from modeling frequency patterns, and we do so by developing two types of RNN cells. The first type tries to directly learn long-term dependencies in the frequency domain rather than the time domain. The second aims to dynamically filter out noise frequencies based on temporal context. By proposing various deep models based on different domain knowledge and evaluating them on extensive time series tasks, we hope this work can provide inspiration for others and increase the community's interest in applying deep learning techniques to more time series tasks.
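
    As a simplified illustration of what encoding order patterns means, the snippet below builds a histogram of ordinal (permutation) patterns over sliding windows, hashing each permutation to an integer bucket via its Lehmer code. This is a hand-crafted stand-in for intuition only, not the thesis's hashing-based neural network layer or RNN cells.

    ```python
    import numpy as np
    from math import factorial

    def ordinal_pattern_histogram(x, order=3):
        """Histogram of ordinal (permutation) patterns over sliding windows,
        a simplified stand-in for encoding order information as features."""
        counts = np.zeros(factorial(order))
        for i in range(len(x) - order + 1):
            perm = tuple(np.argsort(x[i:i + order]))   # ordinal pattern of the window
            # hash the permutation to an integer bucket (Lehmer-code style)
            idx = 0
            for j, p in enumerate(perm):
                idx += sum(1 for q in perm[j + 1:] if q < p) * factorial(order - 1 - j)
            counts[idx] += 1
        return counts / counts.sum()

    rng = np.random.default_rng(4)
    signal = np.cumsum(rng.standard_normal(1000))      # toy slowly-drifting signal
    print(ordinal_pattern_histogram(signal, order=3))
    ```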

    Closed-Loop Brain-Computer Interfaces for Memory Restoration Using Deep Brain Stimulation

    The past two decades have witnessed the rapid growth of therapeutic brain-computer interfaces (BCI) targeting a diversity of brain dysfunctions. Among many neurosurgical procedures, deep brain stimulation (DBS) with neuromodulation techniques has emerged as a fruitful treatment for neurodegenerative disorders such as epilepsy, Parkinson's disease, post-traumatic amnesia, and Alzheimer's disease, as well as neuropsychiatric disorders such as depression, obsessive-compulsive disorder, and schizophrenia. In parallel to the open-loop neuromodulation strategies for neuromotor disorders, recent investigations have demonstrated the superior performance of closed-loop neuromodulation systems for memory-relevant disorders, owing to the more sophisticated brain circuitry engaged during cognitive processes. Our efforts are focused on discovering unique neurophysiological patterns associated with episodic memories and then applying control-theoretic principles to achieve closed-loop neuromodulation of such memory-relevant oscillatory activity, especially theta and gamma oscillations. First, we use a unique dataset with intracranial electrodes inserted simultaneously into the hippocampus and seven cortical regions across 40 human subjects to test for a pattern in which the phase of the hippocampal theta oscillation modulates gamma oscillations in the cortex, termed cross-regional phase-amplitude coupling (xPAC), a key neurophysiological mechanism that promotes the temporal organization of interregional oscillatory activities and had not previously been observed in human subjects. We then establish that the magnitude of xPAC predicts memory encoding success, along with other properties of xPAC. We find that strong functional xPAC occurs principally between the hippocampus and other mesial temporal structures, namely the entorhinal and parahippocampal cortices, and that xPAC is overall stronger for posterior hippocampal connections. Next, we focus on hippocampal gamma power as a 'biomarker' and use a novel dataset in which open-loop DBS was applied to the posterior cingulate cortex (PCC) during the encoding of episodic memories. We evaluate the feasibility of modulating hippocampal power by precise control of stimulation via a linear quadratic integral (LQI) controller based on autoregressive with exogenous input (ARX) modeling for in-vivo use. In a simulation framework, we demonstrate that the proposed BCI system achieves effective control of hippocampal gamma power in 15 out of 17 human subjects, and we show that our DBS pattern is physiologically safe with realistic time scales. Last, we further develop the PCC-applied binary-noise (BN) DBS paradigm targeting the neuromodulation of both hippocampal theta and gamma oscillatory power in 12 human subjects. We utilize a novel nonlinear autoregressive with exogenous input neural network (NARXNN) as the plant, paired with a proportional-integral-derivative (PID) controller (NARXNN-PID), for delivering a precise stimulation pattern to achieve a desired oscillatory power level. Compared to a benchmark consisting of a linear state-space model (LSSM) with a PID controller, we not only demonstrate the superior performance of our NARXNN plant model but also show the greater capacity of the NARXNN-PID architecture in controlling both hippocampal theta and gamma power. We outline further experimentation to test our BCI system and compare our findings to emerging closed-loop neuromodulation strategies.
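
    For intuition about the closed-loop control component, here is a toy simulation in which a discrete PID controller drives a first-order ARX "plant" (a crude stand-in for hippocampal gamma-band power responding to stimulation) toward a target level. The plant coefficients, controller gains, and actuation bounds are invented for the example and are not the LQI or NARXNN-PID designs described above.

    ```python
    import numpy as np

    def simulate_closed_loop(target, steps=200):
        """Toy closed loop: first-order ARX plant regulated by a discrete PID controller."""
        # assumed plant: p(k) = 0.85 p(k-1) + 0.3 u(k-1) + noise  (illustrative numbers)
        a, b = 0.85, 0.3
        kp, ki, kd = 1.2, 0.4, 0.05            # hand-tuned gains, not from the thesis
        p, u = 0.0, 0.0
        integral, prev_err = 0.0, 0.0
        rng = np.random.default_rng(5)
        history = []
        for _ in range(steps):
            p = a * p + b * u + 0.01 * rng.standard_normal()   # plant responds to last input
            err = target - p
            integral += err
            deriv = err - prev_err
            u = np.clip(kp * err + ki * integral + kd * deriv, 0.0, 5.0)  # bounded "stimulation"
            prev_err = err
            history.append(p)
        return np.array(history)

    power = simulate_closed_loop(target=1.0)
    print("final tracking error:", abs(1.0 - power[-50:].mean()))
    ```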

    Review of rational (total) nonlinear dynamic system modelling, identification, and control

    © 2013 Taylor & Francis. This paper is a summary of the research developments in rational (total) nonlinear dynamic modelling over the last two decades. Total nonlinear dynamic systems are defined as those in which both the model parameters and the input (controller output) are nonlinearly related to the output. Previously, this class of models has been known as rational models, which can be considered a subset of the nonlinear autoregressive moving average with exogenous input (NARMAX) models and an extension of the well-known polynomial NARMAX model. The justification for using the rational model is that it provides a very concise and parsimonious representation for highly complex nonlinear dynamic systems and has excellent interpolatory and extrapolatory properties. However, model identification and controller design are much more challenging than for polynomial models. This has been a new and fascinating research trend in the area of mathematical modelling, control, and applications, but still within a limited research community. This paper brings together several representative algorithms, developed by the authors and their colleagues, to form an easily referenced archive for promoting awareness, tutorials, applications, and further research expansion.
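
    For readers unfamiliar with the model class, a generic rational NARX predictor takes the form below, i.e. a ratio of two polynomial expansions in lagged inputs and outputs. The notation is generic rather than copied from the paper.

    ```latex
    y(k) = \frac{\sum_{j=1}^{n_{\mathrm{num}}} \theta_j \, p_j\!\big(y(k-1),\dots,y(k-n_y),\,u(k-1),\dots,u(k-n_u)\big)}
                {1 + \sum_{i=1}^{n_{\mathrm{den}}} \phi_i \, q_i\!\big(y(k-1),\dots,y(k-n_y),\,u(k-1),\dots,u(k-n_u)\big)} + e(k)
    ```

    Here each p_j and q_i is a monomial in the lagged variables; the denominator is what makes identification harder than in the polynomial NARMAX case, since the noise term no longer enters the model linearly.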

    Time-varying nonlinear causality detection using regularized orthogonal least squares and multi-wavelets with applications to EEG

    A new transient Granger causality detection method is proposed based on a time-varying parametric modelling framework, and is applied to real EEG signals to reveal the causal information flow during motor imagery (MI) tasks. The time-varying parametric modelling approach employs a nonlinear autoregressive with exogenous input (NARX) model, whose parameters are approximated by a set of multiwavelet basis functions. A regularized orthogonal least squares (ROLS) algorithm is then used to produce a parsimonious or sparse regression model and to estimate the associated model parameters. The time-varying Granger causality between nonstationary signals can be detected accurately by making use of both the good approximation properties of multiwavelets and the good generalization performance of the ROLS in the presence of high-level noise. Two simulation examples are presented to demonstrate the effectiveness of the proposed method for linear and nonlinear causality detection, respectively. The proposed method is then applied to real EEG signals from MI tasks, and the transient causal information flow between various sensorimotor-related channels is successfully revealed over the whole course of the reaction process. Experimental results from these case studies confirm the applicability of the proposed scheme and show its utility for understanding the associated neural mechanisms and its potential significance for developing MI-based brain-computer interface (BCI) systems.
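
    As a rough, windowed analogue of the time-varying causality estimate (ordinary least squares in sliding windows standing in for the multiwavelet expansion and ROLS), the sketch below scores Granger-style influence from one channel to another over time. The window length, lags, and log-variance-ratio score are illustrative choices.

    ```python
    import numpy as np

    def tv_granger(x, y, lag=2, win=100, step=25):
        """Sliding-window Granger-style score for x -> y: compare the residual
        variance of y's AR model with and without lagged x as extra regressors."""
        centres, scores = [], []
        for s in range(0, len(y) - win, step):
            ys, xs = y[s:s + win], x[s:s + win]
            target = ys[lag:]
            A_self = np.column_stack([ys[lag - i - 1: win - i - 1] for i in range(lag)])
            A_full = np.column_stack([A_self] +
                                     [xs[lag - i - 1: win - i - 1] for i in range(lag)])
            r_self = target - A_self @ np.linalg.lstsq(A_self, target, rcond=None)[0]
            r_full = target - A_full @ np.linalg.lstsq(A_full, target, rcond=None)[0]
            scores.append(np.log(r_self.var() / r_full.var()))   # > 0: x helps predict y
            centres.append(s + win // 2)
        return np.array(centres), np.array(scores)

    # toy example: coupling from x to y switches on halfway through the record
    rng = np.random.default_rng(6)
    n = 1000
    x = rng.standard_normal(n)
    y = np.zeros(n)
    for t in range(2, n):
        c = 0.8 if t > n // 2 else 0.0
        y[t] = 0.5 * y[t - 1] + c * x[t - 1] + 0.1 * rng.standard_normal()
    centres, scores = tv_granger(x, y)
    print(scores[:3].round(3), scores[-3:].round(3))
    ```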