Refined instrumental variable estimation: maximum likelihood optimization of a unified Box–Jenkins model
For many years, various methods for the identification and estimation of parameters in linear, discrete-time
transfer functions have been available and implemented in widely available toolboxes for Matlab™.
This paper considers a unified Refined Instrumental Variable (RIV) approach to the estimation of discrete
and continuous-time transfer functions characterized by a unified operator that can be interpreted in
terms of backward shift, derivative or delta operators. The estimation is based on the formulation of a
pseudo-linear regression relationship involving optimal prefilters that is derived from an appropriately
unified Box–Jenkins transfer function model. The paper shows that, contrary to apparently widely held
beliefs, the iterative RIV algorithm provides a reliable solution to the maximum likelihood optimization
equations for this class of Box–Jenkins transfer function models, and so its en bloc or recursive parameter estimates are optimal in maximum likelihood, prediction error minimization and instrumental variable terms.
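As a loose illustration of the iterative instrumental-variable idea described in the abstract above (not the full RIV algorithm: no optimal prefiltering and no Box–Jenkins noise model; the first-order structure and all variable names are illustrative assumptions), a minimal sketch might look like:

```python
import numpy as np

def iv_estimate_tf(u, y, n_iter=10):
    """Iterative instrumental-variable estimate of a first-order
    discrete-time transfer function y[k] = -a1*y[k-1] + b1*u[k-1].
    Simplified sketch only: the instruments are built from a
    noise-free auxiliary-model simulation, as in IV identification,
    but without the RIV prefilters or noise-model estimation."""
    N = len(y)
    # initial least-squares (ARX) estimate
    Phi = np.column_stack([-y[:-1], u[:-1]])
    theta = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]
    for _ in range(n_iter):
        a1, b1 = theta
        # auxiliary model output, used as the instrument
        x = np.zeros(N)
        for k in range(1, N):
            x[k] = -a1 * x[k-1] + b1 * u[k-1]
        Z = np.column_stack([-x[:-1], u[:-1]])
        theta = np.linalg.solve(Z.T @ Phi, Z.T @ y[1:])
    return theta
```

The auxiliary-model output serves as an instrument because it is correlated with the regressors but, by construction, not with the output noise.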
Learning Transfer Operators by Kernel Density Estimation
Inference of transfer operators from data is often formulated as a classical
problem that hinges on the Ulam method. The usual description, which we will
call the Ulam-Galerkin method, is in terms of projection onto basis functions
that are characteristic functions supported over a fine grid of rectangles. In
these terms, the usual Ulam-Galerkin approach can be understood as density
estimation by the histogram method. Here we show that the problem can be recast
in statistical density estimation formalism. This recasting of the classical
problem is a perspective that allows for an explicit and rigorous analysis of
bias and variance, and thereby a discussion of the mean square error.
Keywords: Transfer Operators; Frobenius-Perron operator; probability density
estimation; Ulam-Galerkin method; Kernel Density Estimation
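The histogram (Ulam-Galerkin) view of transfer-operator estimation described above can be sketched concretely: partition the state space, count transitions between cells, and row-normalize. The logistic map and the bin count below are illustrative choices, not taken from the paper:

```python
import numpy as np

def ulam_matrix(x, n_bins=50):
    """Estimate the Frobenius-Perron (transfer) operator of a 1-D map
    on [0, 1] by the Ulam-Galerkin / histogram method: count
    transitions between cells of a uniform partition, row-normalize."""
    bins = np.minimum((x * n_bins).astype(int), n_bins - 1)
    P = np.zeros((n_bins, n_bins))
    for i, j in zip(bins[:-1], bins[1:]):
        P[i, j] += 1.0
    row = P.sum(axis=1, keepdims=True)
    return np.divide(P, row, out=np.zeros_like(P), where=row > 0)

# orbit of the fully chaotic logistic map x -> 4x(1-x)
rng = np.random.default_rng(0)
x = np.empty(10000)
x[0] = rng.random()
for k in range(1, len(x)):
    x[k] = 4.0 * x[k-1] * (1.0 - x[k-1])

P = ulam_matrix(x)
# stationary density estimate: leading left eigenvector (eigenvalue 1)
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = np.abs(pi) / np.abs(pi).sum()
```

Each row of `P` is exactly a normalized histogram of where points in that cell go next, which is the density-estimation reading of the Ulam-Galerkin method that the abstract emphasizes.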
Identifiability and parameter estimation of the single particle lithium-ion battery model
This paper investigates the identifiability and estimation of the parameters
of the single particle model (SPM) for lithium-ion battery simulation.
Identifiability is addressed both in principle and in practice. The approach
begins by grouping parameters and partially non-dimensionalising the SPM to
determine the maximum expected degrees of freedom in the problem. We discover
that, excluding open circuit voltage, there are only six independent
parameters. We then examine the structural identifiability by considering
whether the transfer function of the linearised SPM is unique. It is found that
the model is unique provided that the electrode open circuit voltage functions
have a known non-zero gradient, the parameters are ordered, and the electrode
kinetics are lumped into a single charge transfer resistance parameter. We then
demonstrate the practical estimation of model parameters from measured
frequency-domain experimental electrochemical impedance spectroscopy (EIS)
data, and show additionally that the parametrised model provides good
predictive capabilities in the time domain, exhibiting a maximum voltage error
of 20 mV between model and experiment over a 10 minute dynamic discharge.
Comment: 16 pages, 9 figures, pre-print submitted to the IEEE Transactions on Control Systems Technology
Deconvolution from Wavefront Sensing Using Optimal Wavefront Estimators
A cost-effective method to improve the space surveillance mission performance of United States Air Force (USAF) ground-based telescopes is investigated and improved. A minimum variance wavefront estimation technique is used to improve Deconvolution from Wavefront Sensing (DWFS), a method to mitigate the effects of atmospheric turbulence on imaging systems that does not require expensive adaptive optics. Both least-squares and minimum variance wavefront phase estimation techniques are investigated, using both Gaussian and Zernike polynomial elementary functions. Imaging simulations and established performance metrics are used to evaluate these wavefront estimation techniques for a one-meter optical telescope. Performance metrics include the average pupil-averaged mean square phase error of the residual wavefront, the average system transfer function, the signal-to-noise ratio (SNR) of the system transfer function, and the optical transfer function correlation. Results show that the minimum variance estimation technique that employs Zernike polynomial elementary functions offers improvements over all other estimation techniques in each of the performance metrics. Extended object simulations are also conducted which demonstrate the improvements in image quality and resolution that result from the modifications to the DWFS method. Implementation of the DWFS method into USAF space surveillance telescopes is investigated.
A study of parameter identification
A set of definitions for deterministic parameter identifiability is proposed. Deterministic parameter identifiability properties are presented based on four system characteristics: direct parameter recoverability, properties of the system transfer function, properties of output distinguishability, and uniqueness properties of a quadratic cost functional. Stochastic parameter identifiability is defined in terms of the existence of an estimation sequence for the unknown parameters that is consistent in probability. Stochastic parameter identifiability properties are presented based on the following characteristics: convergence properties of the maximum likelihood estimate, properties of the joint probability density functions of the observations, and properties of the information matrix.
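One common way to make such identifiability checks concrete, via the information matrix mentioned above, is a numerical local test: build the output sensitivity matrix by finite differences and examine its rank. This generic sketch (model functions, parameter values and threshold are all illustrative assumptions, not this study's formalism) flags a model in which two parameters appear only as a product:

```python
import numpy as np

def local_identifiability_rank(model, theta, t, eps=1e-6):
    """Numerical local identifiability check: build the sensitivity
    matrix S[i, j] = d y(t_i) / d theta_j by finite differences and
    count the non-negligible singular values (the rank of the
    information matrix S^T S). A rank below len(theta) signals
    locally non-identifiable parameter combinations."""
    theta = np.asarray(theta, float)
    y0 = model(theta, t)
    S = np.empty((len(t), len(theta)))
    for j in range(len(theta)):
        tp = theta.copy()
        tp[j] += eps
        S[:, j] = (model(tp, t) - y0) / eps
    s = np.linalg.svd(S, compute_uv=False)
    # generous relative threshold absorbs finite-difference noise
    return int(np.sum(s > 1e-6 * s[0]))

t = np.linspace(0.0, 5.0, 100)
# a and b enter only through the product a*b: not separately identifiable
bad = lambda th, t: th[0] * th[1] * np.exp(-th[2] * t)
# amplitude, rate and offset are all identifiable here
good = lambda th, t: th[0] * np.exp(-th[1] * t) + th[2]
```

A rank-deficient information matrix corresponds to the grouped-parameter situation the battery abstract above also describes: only certain parameter combinations, not the raw parameters, can be recovered from the data.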
Efficient transfer entropy analysis of non-stationary neural time series
Information theory allows us to investigate information processing in neural
systems in terms of information transfer, storage and modification. Especially
the measure of information transfer, transfer entropy, has seen a dramatic
surge of interest in neuroscience. Estimating transfer entropy from two
processes requires the observation of multiple realizations of these processes
to estimate associated probability density functions. To obtain these
observations, available estimators assume stationarity of processes to allow
pooling of observations over time. This assumption, however, is a major obstacle
to the application of these estimators in neuroscience as observed processes
are often non-stationary. As a solution, Gomez-Herrero and colleagues
theoretically showed that the stationarity assumption may be avoided by
estimating transfer entropy from an ensemble of realizations. Such an ensemble
is often readily available in neuroscience experiments in the form of
experimental trials. Thus, in this work we combine the ensemble method with a
recently proposed transfer entropy estimator to make transfer entropy
estimation applicable to non-stationary time series. We present an efficient
implementation of the approach that deals with the increased computational
demand of the ensemble method's practical application. In particular, we use a
massively parallel implementation for a graphics processing unit to handle the
computationally most heavy aspects of the ensemble method. We test the
performance and robustness of our implementation on data from simulated
stochastic processes and demonstrate the method's applicability to
magnetoencephalographic data. While we mainly evaluate the proposed method for
neuroscientific data, we expect it to be applicable in a variety of fields that
are concerned with the analysis of information transfer in complex biological,
social, and artificial systems.
Comment: 27 pages, 7 figures, submitted to PLOS ONE
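The ensemble idea above, pooling observations across trials at a fixed time index instead of across time, can be sketched with a simple plug-in estimator. This binned version is illustrative only (the paper combines the ensemble method with a nearest-neighbour estimator, and all names below are assumptions):

```python
import numpy as np

def transfer_entropy_ensemble(x, y, t, n_bins=2):
    """Binned transfer entropy TE(X -> Y) at time t, estimated across
    an ensemble of trials (axis 0) rather than over time, so no
    stationarity assumption is needed. x, y have shape
    (n_trials, n_samples). Plug-in sketch, not the paper's estimator."""
    def digitize(v):
        edges = np.quantile(v, np.linspace(0, 1, n_bins + 1)[1:-1])
        return np.digitize(v, edges)
    xt, yt, y1 = digitize(x[:, t]), digitize(y[:, t]), digitize(y[:, t + 1])
    # joint histogram over (y_{t+1}, y_t, x_t) from the trial ensemble
    joint = np.zeros((n_bins, n_bins, n_bins))
    for a, b, c in zip(y1, yt, xt):
        joint[a, b, c] += 1.0
    p = joint / joint.sum()
    p_bc = p.sum(axis=0, keepdims=True)      # p(y_t, x_t)
    p_ab = p.sum(axis=2, keepdims=True)      # p(y_{t+1}, y_t)
    p_b = p.sum(axis=(0, 2), keepdims=True)  # p(y_t)
    # TE = sum p log [ p(y_{t+1}|y_t, x_t) / p(y_{t+1}|y_t) ]
    with np.errstate(divide='ignore', invalid='ignore'):
        ratio = p * p_b / (p_ab * p_bc)
        return float(np.nansum(p * np.log(ratio)))
```

Because all probabilities are estimated from the trial dimension at one time point, the estimate remains valid when the processes are non-stationary in time, which is exactly the obstacle the ensemble method removes.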