
    Laplace deconvolution and its application to Dynamic Contrast Enhanced imaging

    In the present paper we consider the problem of Laplace deconvolution with noisy discrete observations. The study is motivated by Dynamic Contrast Enhanced imaging using a bolus of contrast agent, a procedure which allows considerable improvement in evaluating the quality of a vascular network and its permeability and is widely used in medical assessment of brain flows or cancerous tumors. Although the study is motivated by a medical imaging application, we obtain a solution of a general problem of Laplace deconvolution based on noisy data which appears in many different contexts. We propose a new method for Laplace deconvolution which is based on expansions of the convolution kernel, the unknown function and the observed signal over the Laguerre function basis. The expansion results in a small system of linear equations with the matrix of the system being triangular and Toeplitz. The number m of terms in the expansion of the estimator is controlled via a complexity penalty. The advantage of this methodology is that it leads to very fast computations, does not require exact knowledge of the kernel and produces no boundary effects due to extension at zero and cut-off at T. The technique leads to an estimator with the risk within a logarithmic factor of m of the oracle risk under no assumptions on the model, and within a constant factor of the oracle risk under mild assumptions. The methodology is illustrated by a finite sample simulation study which includes an example of the kernel obtained in real-life DCE experiments. Simulations confirm that the proposed technique is fast, efficient, accurate, usable from a practical point of view and competitive.
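    As a rough numerical illustration of the Laguerre-expansion approach sketched in this abstract, the snippet below regresses the observed signal on the kernel convolved with the first m Laguerre functions and picks m by a penalized least-squares criterion. It is a minimal sketch, not the authors' implementation: the scale parameter a, the grid, the noise level sigma2 and the penalty constant are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): Laplace deconvolution by regressing the
# observed signal on the kernel convolved with Laguerre functions; the number of
# terms m is chosen by an illustrative penalized least-squares criterion.
import numpy as np
from scipy.special import eval_laguerre

def laguerre_functions(t, m, a=1.0):
    """phi_k(t) = sqrt(2a) * L_k(2at) * exp(-at), k = 0..m-1; shape (m, len(t))."""
    return np.array([np.sqrt(2 * a) * eval_laguerre(k, 2 * a * t) * np.exp(-a * t)
                     for k in range(m)])

def deconvolve(t, q, g, m_max=20, sigma2=1e-3, a=1.0):
    dt = t[1] - t[0]
    phi = laguerre_functions(t, m_max, a)
    # design matrix: kernel g convolved with each Laguerre function on the grid
    G = np.array([np.convolve(g, p)[:len(t)] * dt for p in phi]).T
    best = None
    for m in range(1, m_max + 1):
        coef, *_ = np.linalg.lstsq(G[:, :m], q, rcond=None)
        rss = np.sum((q - G[:, :m] @ coef) ** 2)
        crit = rss + 2 * sigma2 * m * np.log(m_max)      # complexity penalty (illustrative)
        if best is None or crit < best[0]:
            best = (crit, coef)
    return best[1] @ laguerre_functions(t, len(best[1]), a)   # estimate of f on the grid
```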

    Laplace deconvolution on the basis of time domain data and its application to Dynamic Contrast Enhanced imaging

    In the present paper we consider the problem of Laplace deconvolution with noisy discrete non-equally spaced observations on a finite time interval. We propose a new method for Laplace deconvolution which is based on expansions of the convolution kernel, the unknown function and the observed signal over the Laguerre function basis (which acts as a surrogate eigenfunction basis of the Laplace convolution operator) in a regression setting. The expansion results in a small system of linear equations with the matrix of the system being triangular and Toeplitz. Due to this triangular structure, there is a common number m of terms in the function expansions to control, which is realized via a complexity penalty. The advantage of this methodology is that it leads to very fast computations, produces no boundary effects due to extension at zero and cut-off at T, and provides an estimator with the risk within a logarithmic factor of the oracle risk. We emphasize that, in the present paper, we consider the true observational model with possibly non-equispaced observations which are available on a finite interval of length T, a setting which appears in many different contexts, and we account for the bias associated with this model (which is not present when T → ∞). The study is motivated by perfusion imaging using a short injection of contrast agent, a procedure which is applied for medical assessment of micro-circulation within tissues such as cancerous tumors. The presence of a tuning parameter a allows one to choose the most advantageous time units, so that both the kernel and the unknown right-hand side of the equation are well represented for the deconvolution. The methodology is illustrated by an extensive simulation study and a real data example, which confirm that the proposed technique is fast, efficient, accurate, usable from a practical point of view and very competitive.
    Comment: 36 pages, 9 figures. arXiv admin note: substantial text overlap with arXiv:1207.223
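    The non-equispaced regression step can be pictured with the short sketch below: the Laguerre coefficients of the observed curve are obtained by ordinary least squares on the Laguerre functions evaluated at the irregular observation times. The sample times, noise level and scale a are made-up illustrations, not the paper's settings.

```python
# Minimal sketch: least-squares fit of Laguerre coefficients from noisy,
# non-equispaced observations on [0, T] (all numbers below are illustrative).
import numpy as np
from scipy.special import eval_laguerre

def laguerre_design(t_obs, m, a=1.0):
    return np.column_stack([np.sqrt(2 * a) * eval_laguerre(k, 2 * a * t_obs) * np.exp(-a * t_obs)
                            for k in range(m)])

def fit_coefficients(t_obs, y_obs, m, a=1.0):
    X = laguerre_design(t_obs, m, a)
    coef, *_ = np.linalg.lstsq(X, y_obs, rcond=None)
    return coef

rng = np.random.default_rng(0)
t_obs = np.sort(rng.uniform(0.0, 10.0, size=200))          # irregular sample times
y_obs = np.exp(-t_obs) + 0.05 * rng.standard_normal(200)   # noisy observed curve
q_hat = fit_coefficients(t_obs, y_obs, m=8)
```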

    A simple algorithm for stable order reduction of z-domain Laguerre models

    Discrete-time Laguerre series are a well known and efficient tool in system identification and modeling. This paper presents a simple solution for stable and accurate order reduction of systems described by a Laguerre model.

    Control-Relevant System Identification using Nonlinear Volterra and Volterra-Laguerre Models

    One of the key impediments to the wide-spread use of nonlinear control in industry is the availability of suitable nonlinear models. Empirical models, which are obtained from only the process input-output data, present a convenient alternative to the more involved fundamental models. An important advantage of the empirical models is that their structure can be chosen so as to facilitate the controller design problem. Many of the widely used empirical model structures are linear, and in some cases this basic model formulation may not be able to adequately capture the nonlinear process dynamics. One of the commonly used nonlinear dynamic empirical model structures is the Volterra model, and this work develops a systematic approach to the identification of third-order Volterra and Volterra-Laguerre models from process input-output data.

    First, plant-friendly input sequences are designed that exploit the Volterra model structure and use the prediction error variance (PEV) expression as a metric of model fidelity. Second, explicit estimator equations are derived for the linear, nonlinear diagonal, and higher-order sub-diagonal kernels using the tailored input sequences. Improvements in the sequence design are also presented which lead to a significant reduction in the amount of data required for identification. Finally, the third-order off-diagonal kernels are estimated using a cross-correlation approach. As an application of this technique, an isothermal polymerization reactor case study is considered.

    In order to overcome the noise sensitivity and highly parameterized nature of Volterra models, they are projected onto an orthonormal Laguerre basis. Two important variables that need to be selected for the projection are the Laguerre pole and the number of Laguerre filters. The Akaike Information Criterion (AIC) is used as a criterion to determine projected model quality. AIC includes contributions from both model size and model quality, with the latter characterized by the sum-squared error between the Volterra and the Volterra-Laguerre model outputs. Reduced Volterra-Laguerre models were also identified, and the control-relevance of identified Volterra-Laguerre models was evaluated in closed-loop using the model predictive control framework. Thus, this work presents a complete treatment of the problem of identifying nonlinear control-relevant Volterra and Volterra-Laguerre models from input-output data.
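    The projection-plus-AIC step described above can be summarized in the following sketch: the input is passed through a discrete Laguerre filter bank, a second-order Volterra-Laguerre model is fit by least squares, and the pole and filter count minimizing AIC are kept. The pole grid, filter counts, AIC form and synthetic data are illustrative assumptions, not the procedure of the thesis.

```python
# Illustrative sketch of Volterra-Laguerre projection with AIC-based selection
# of the Laguerre pole and the number of filters (all settings are assumptions).
import numpy as np
from scipy.signal import lfilter

def laguerre_outputs(u, n_filters, pole):
    """Pass input u through a discrete orthonormal Laguerre filter bank."""
    x = lfilter([np.sqrt(1 - pole**2)], [1.0, -pole], u)     # first Laguerre filter
    outs = [x]
    for _ in range(1, n_filters):
        x = lfilter([-pole, 1.0], [1.0, -pole], x)            # all-pass cascade
        outs.append(x)
    return np.column_stack(outs)

def fit_second_order(u, y, n_filters, pole):
    """Least-squares fit of a second-order Volterra-Laguerre model; returns (AIC, theta)."""
    L = laguerre_outputs(u, n_filters, pole)
    quad = np.column_stack([L[:, i] * L[:, j]
                            for i in range(n_filters) for j in range(i, n_filters)])
    X = np.column_stack([np.ones(len(u)), L, quad])
    theta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ theta) ** 2)
    n, k = len(y), X.shape[1]
    return n * np.log(rss / n) + 2 * k, theta

rng = np.random.default_rng(1)
u = rng.standard_normal(500)
lin = lfilter([0.2], [1.0, -0.8], u)                          # synthetic plant: linear part
y = lin + 0.3 * lin**2 + 0.05 * rng.standard_normal(500)      # mild nonlinearity plus noise
best_aic, best_nf, best_pole = min((fit_second_order(u, y, nf, p)[0], nf, p)
                                   for nf in (2, 3, 4) for p in (0.5, 0.7, 0.9))
```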

    Assessment of spontaneous cardiovascular oscillations in Parkinson's disease

    Parkinson's disease (PD) has been reported to involve postganglionic sympathetic failure and a wide spectrum of autonomic dysfunctions including cardiovascular, sexual, bladder, gastrointestinal and sudomotor abnormalities. While these symptoms may have a significant impact on daily activities, as well as quality of life, the evaluation of autonomic nervous system (ANS) dysfunctions relies on a large and expensive battery of autonomic tests only accessible in highly specialized laboratories. In this paper we aim to devise a comprehensive computational assessment of disease-related heartbeat dynamics based on instantaneous, time-varying estimates of spontaneous (resting state) cardiovascular oscillations in PD. To this end, we combine standard ANS-related heart rate variability (HRV) metrics with measures of instantaneous complexity (dominant Lyapunov exponent and entropy) and higher-order statistics (bispectra). Such measures are computed over 600-s recordings acquired at rest in 29 healthy subjects and 30 PD patients. The only significant group-wise differences were found in the variability of the dominant Lyapunov exponent. Also, the best PD vs. healthy controls classification performance (balanced accuracy: 73.47%) was achieved only when retaining the time-varying, non-stationary structure of the dynamical features, whereas classification performance dropped significantly (balanced accuracy: 61.91%) when excluding variability-related features. Additionally, both linear and nonlinear model features correlated with both clinical and neuropsychological assessments of the considered patient population. Our results demonstrate the added value and potential of instantaneous measures of heartbeat dynamics and their variability in characterizing PD-related disabilities in the motor and cognitive domains.

    Estimation of instantaneous complex dynamics through Lyapunov exponents: a study on heartbeat dynamics

    Measures of nonlinearity and complexity, and in particular the study of Lyapunov exponents, have been increasingly used to characterize dynamical properties of a wide range of biological nonlinear systems, including cardiovascular control. In this work, we present a novel methodology able to effectively estimate the Lyapunov spectrum of a series of stochastic events in an instantaneous fashion. The paradigm relies on a novel point-process high-order nonlinear model of the event series dynamics. The long-term information is taken into account by expanding the linear, quadratic, and cubic Wiener-Volterra kernels with the orthonormal Laguerre basis functions. Applications to synthetic data such as the Hénon map and Rössler attractor, as well as two experimental heartbeat interval datasets (i.e., healthy subjects undergoing postural changes and patients with severe heart failure), focus on estimation and tracking of the Instantaneous Dominant Lyapunov Exponent (IDLE). The novel cardiovascular assessment demonstrates that our method is able to effectively and instantaneously track the nonlinear autonomic control dynamics, allowing for complexity variability estimations.
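    For the Hénon map mentioned as a synthetic benchmark, the Lyapunov spectrum can be checked with the standard QR iteration shown below. This is a textbook sketch for a known map, not the point-process estimator developed in the paper.

```python
# Textbook QR-method sketch: Lyapunov spectrum of the Henon map
# (expected values are roughly +0.42 and -1.62 for a=1.4, b=0.3).
import numpy as np

def henon_lyapunov(n_iter=20000, a=1.4, b=0.3):
    x, y = 0.1, 0.1
    Q = np.eye(2)
    log_sums = np.zeros(2)
    for _ in range(n_iter):
        J = np.array([[-2.0 * a * x, 1.0],
                      [b, 0.0]])                   # Jacobian at the current point
        x, y = 1.0 - a * x * x + y, b * x           # Henon map step
        Q, R = np.linalg.qr(J @ Q)
        log_sums += np.log(np.abs(np.diag(R)))
    return log_sums / n_iter

print(henon_lyapunov())
```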

    Robust Expansion of Uncertain Volterra Kernels into Orthonormal Series

    This paper is concerned with the computation of uncertainty bounds for the expansion of uncertain Volterra models into an orthonormal basis of functions, such as the Laguerre or Kautz bases. This problem has already been addressed in the context of linear systems by means of an approach in which the uncertainty bounds of the expansion coefficients are estimated from a structured set of impulse responses describing a linear uncertain process. This approach is extended here towards nonlinear Volterra models through the computation of the uncertainty bounds of the expansion coefficients from a structured set of uncertain Volterra kernels. The proposed formulation ensures that the resulting model is able to represent all the original uncertainties with minimum intervals for the expansion coefficients. An example is presented to illustrate the effectiveness of the proposed formulation.
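    A much-simplified sketch of the coefficient-interval idea follows: every kernel in a structured set of sampled first-order kernels is projected onto a discrete Laguerre basis, and the coordinate-wise extremes of the projections are taken as coefficient intervals. The grid representation, the pole and the interval rule are illustrative assumptions rather than the formulation of the paper.

```python
# Simplified sketch: interval bounds on Laguerre expansion coefficients computed
# from a set of sampled uncertain (first-order) Volterra kernels.
import numpy as np
from scipy.signal import lfilter

def laguerre_basis(n_samples, order, pole=0.7):
    """Discrete Laguerre functions as impulse responses; shape (order, n_samples)."""
    impulse = np.zeros(n_samples)
    impulse[0] = 1.0
    basis = [lfilter([np.sqrt(1 - pole**2)], [1.0, -pole], impulse)]
    for _ in range(1, order):
        basis.append(lfilter([-pole, 1.0], [1.0, -pole], basis[-1]))
    return np.array(basis)

def coefficient_intervals(kernel_set, order, pole=0.7):
    """kernel_set: (n_kernels, n_samples) array of sampled uncertain kernels."""
    B = laguerre_basis(kernel_set.shape[1], order, pole)
    coeffs = kernel_set @ B.T                      # projections onto the (near-)orthonormal basis
    return coeffs.min(axis=0), coeffs.max(axis=0)  # coordinate-wise interval bounds
```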

    Solution of linear ill-posed problems using overcomplete dictionaries

    In the present paper we consider the application of overcomplete dictionaries to the solution of general ill-posed linear inverse problems. Construction of an adaptive optimal solution for such problems usually relies either on a singular value decomposition or on representation of the solution via an orthonormal basis. The shortcoming of both approaches lies in the fact that, in many situations, neither the eigenbasis of the linear operator nor a standard orthonormal basis constitutes an appropriate collection of functions for sparse representation of the unknown function. In the context of regression problems, there has been an enormous amount of effort to recover an unknown function using an overcomplete dictionary. One of the most popular methods, Lasso, is based on minimizing the empirical likelihood and requires stringent assumptions on the dictionary, the so-called compatibility conditions. While these conditions may be satisfied for the original dictionary functions, they usually do not hold for their images due to the contraction imposed by the linear operator. In what follows, we bypass this difficulty by a novel approach which is based on inverting each of the dictionary functions and matching the resulting expansion to the true function, thus avoiding unrealistic assumptions on the dictionary and using Lasso in a predictive setting. We examine both the white noise and the observational model formulations and also discuss how exact inverse images of the dictionary functions can be replaced by their approximate counterparts. Furthermore, we show how the suggested methodology can be extended to the problem of estimation of a mixing density in a continuous mixture. For all the situations listed above, we provide the oracle inequalities for the risk in a finite sample setting. Simulation studies confirm good computational properties of the Lasso-based technique.
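    For orientation, the snippet below is a generic dictionary-plus-Lasso baseline for a discretized ill-posed problem, written in the predictive form min ||y - K Phi theta||^2 + lambda ||theta||_1 with f_hat = Phi theta. It does not implement the paper's inversion of the dictionary functions; the operator, the dictionary and the penalty level are illustrative.

```python
# Generic baseline sketch (not the paper's construction): Lasso over an
# overcomplete dictionary for a discretized ill-posed linear inverse problem.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n = 200
t = np.linspace(0.0, 1.0, n)
K = np.tril(np.ones((n, n))) / n                               # discretized integration operator
Phi = np.column_stack([np.exp(-(t - c) ** 2 / 0.002) for c in np.linspace(0, 1, 80)]
                      + [np.sin(np.pi * k * t) for k in range(1, 21)])  # overcomplete dictionary
f_true = np.exp(-(t - 0.3) ** 2 / 0.002) + 0.5 * np.sin(3 * np.pi * t)
y = K @ f_true + 1e-3 * rng.standard_normal(n)

lasso = Lasso(alpha=1e-4, fit_intercept=False, max_iter=100000).fit(K @ Phi, y)
f_hat = Phi @ lasso.coef_                                      # reconstructed function
```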

    Sparse Volterra and Polynomial Regression Models: Recoverability and Estimation

    Volterra and polynomial regression models play a major role in nonlinear system identification and inference tasks. Exciting applications ranging from neuroscience to genome-wide association analysis build on these models with the additional requirement of parsimony. This requirement has high interpretative value, but unfortunately cannot be met by least-squares-based or kernel regression methods. To this end, compressed sampling (CS) approaches, already successful in linear regression settings, can offer a viable alternative. The viability of CS for sparse Volterra and polynomial models is the core theme of this work. A common sparse regression task is initially posed for the two models. Building on (weighted) Lasso-based schemes, an adaptive RLS-type algorithm is developed for sparse polynomial regressions. The identifiability of polynomial models is critically challenged by dimensionality. However, following the CS principle, when these models are sparse, they could be recovered by far fewer measurements. To quantify the sufficient number of measurements for a given level of sparsity, restricted isometry properties (RIP) are investigated in commonly met polynomial regression settings, generalizing known results for their linear counterparts. The merits of the novel (weighted) adaptive CS algorithms for sparse polynomial modeling are verified through synthetic as well as real data tests for genotype-phenotype analysis.
    Comment: 20 pages, to appear in IEEE Trans. on Signal Processing
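    A minimal sparse-recovery sketch in the spirit of the Lasso-based scheme above: linear and second-order lagged regressors are built from the input, and a sparse second-order Volterra kernel is recovered with an l1 penalty. The memory length, the sparsity pattern and the penalty are illustrative, synthetic choices.

```python
# Minimal sketch: sparse second-order Volterra kernel recovery with Lasso
# on synthetic data (memory length, sparsity pattern and penalty are illustrative).
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(3)
N, M = 400, 8                                                      # samples, memory length
u = rng.standard_normal(N)
U = np.column_stack([np.roll(u, k) for k in range(M)])[M:]         # lagged inputs u[t-k]
quad = np.column_stack([U[:, i] * U[:, j]
                        for i in range(M) for j in range(i, M)])   # second-order regressors
X = np.column_stack([U, quad])

h_true = np.zeros(X.shape[1])                                      # sparse ground-truth kernel
h_true[1], h_true[M + 3] = 0.8, -0.5
y = X @ h_true + 0.01 * rng.standard_normal(X.shape[0])

h_hat = Lasso(alpha=0.01, fit_intercept=False).fit(X, y).coef_     # sparse recovery
```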