Bibliographic Review on Distributed Kalman Filtering
In recent years, a compelling need has arisen to understand the effects of distributed information structures on estimation and filtering. In this paper, a bibliographic review of distributed Kalman filtering (DKF) is provided.
The paper contains a classification of the different approaches and methods involved in DKF. The applications of DKF are also discussed and explained separately, and a comparison of the different approaches is briefly carried out. The focus of contemporary research is also addressed, with emphasis on the practical applications of the techniques. An exhaustive list of publications, linked directly or indirectly to DKF in the open literature, is compiled to provide an overall picture of the different developing aspects of this area.
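As a concrete illustration of the kind of algorithm such a review surveys (our sketch, not taken from the paper), a minimal consensus-style distributed Kalman filter for a shared scalar state, in which each node fuses its local measurement and then averages toward its neighbors' estimates, might look like:

```python
import numpy as np

def consensus_kalman_step(x_hats, P, z, A, C, Q, R, W, eps=0.1):
    """One step of a simple consensus-based distributed Kalman filter.

    x_hats : current state estimates, one per node (shape: n_nodes)
    z      : local measurements, one per node
    W      : adjacency matrix of the communication graph
    All nodes share the same scalar model x' = A x + w,  z = C x + v,
    with process noise variance Q and measurement noise variance R.
    """
    n = len(x_hats)
    new = np.empty(n)
    K = P * C / (C * P * C + R)          # common scalar Kalman gain
    for i in range(n):
        # local measurement update
        xi = x_hats[i] + K * (z[i] - C * x_hats[i])
        # consensus term: pull toward neighbors' prior estimates
        xi += eps * sum(W[i, j] * (x_hats[j] - x_hats[i]) for j in range(n))
        new[i] = A * xi                   # time update
    P_new = A * (1 - K * C) * P * A + Q  # common covariance recursion
    return new, P_new
```

With identical nodes and measurements the estimates stay synchronized; with differing measurements the consensus term drives them together over repeated steps.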
Functional Regression
Functional data analysis (FDA) involves the analysis of data whose ideal
units of observation are functions defined on some continuous domain, and the
observed data consist of a sample of functions taken from some population,
sampled on a discrete grid. Ramsay and Silverman's 1997 textbook sparked the
development of this field, which has accelerated in the past 10 years to become
one of the fastest growing areas of statistics, fueled by the growing number of
applications yielding this type of data. One unique characteristic of FDA is
the need to combine information both across and within functions, which Ramsay
and Silverman called replication and regularization, respectively. This article
will focus on functional regression, the area of FDA that has received the most
attention in applications and methodological development. First will be an
introduction to basis functions, key building blocks for regularization in
functional regression methods, followed by an overview of functional regression
methods, split into three types: [1] functional predictor regression
(scalar-on-function), [2] functional response regression (function-on-scalar)
and [3] function-on-function regression. For each, the role of replication and
regularization will be discussed and the methodological development described
in a roughly chronological manner, at times deviating from the historical
timeline to group together similar methods. The primary focus is on modeling
and methodology, highlighting the modeling structures that have been developed
and the various regularization approaches employed. At the end is a brief
discussion describing potential areas of future development in this field.
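To make the basis-function idea concrete, here is a minimal sketch (ours, not from the article) of penalized scalar-on-function regression: the coefficient function is expanded in a Fourier basis and regularized with a ridge penalty on the basis coefficients. All names and tuning choices are illustrative.

```python
import numpy as np

def scalar_on_function_fit(X, y, t, n_basis=7, lam=1e-2):
    """Penalized scalar-on-function regression sketch.

    Model: y_i = alpha + integral of x_i(t) * beta(t) dt + e_i,
    with beta(t) expanded in a Fourier basis; regularization is a
    ridge penalty on the basis coefficients (illustrative choice).
    X : (n_samples, n_grid) functions sampled on the grid t
    """
    T = t[-1] - t[0]
    cols = [np.ones_like(t)]                        # constant basis function
    for k in range(1, (n_basis - 1) // 2 + 1):
        cols.append(np.sin(2 * np.pi * k * (t - t[0]) / T))
        cols.append(np.cos(2 * np.pi * k * (t - t[0]) / T))
    B = np.column_stack(cols[:n_basis])             # (n_grid, n_basis)
    dt = t[1] - t[0]
    Z = X @ B * dt                                  # numerical integral -> design matrix
    Z = np.column_stack([np.ones(len(y)), Z])       # intercept column
    pen = lam * np.eye(Z.shape[1])
    pen[0, 0] = 0.0                                 # do not penalize the intercept
    coef = np.linalg.solve(Z.T @ Z + pen, Z.T @ y)
    beta_hat = B @ coef[1:]                         # recovered beta(t) on the grid
    return coef[0], beta_hat
```

Replication enters through pooling the n samples into one least-squares fit; regularization enters through the basis truncation and the ridge penalty.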
An informational approach to the global optimization of expensive-to-evaluate functions
In many global optimization problems motivated by engineering applications,
the number of function evaluations is severely limited by time or cost. To
ensure that each evaluation contributes to the localization of good candidates
for the role of global minimizer, a sequential choice of evaluation points is
usually carried out. In particular, when Kriging is used to interpolate past
evaluations, the uncertainty associated with the lack of information on the
function can be expressed and used to compute a number of criteria accounting
for the interest of an additional evaluation at any given point. This paper
introduces minimizer entropy as a new Kriging-based criterion for the
sequential choice of points at which the function should be evaluated. Based on
\emph{stepwise uncertainty reduction}, it accounts for the informational gain
on the minimizer expected from a new evaluation. The criterion is approximated
using conditional simulations of the Gaussian process model behind Kriging, and
then inserted into an algorithm similar in spirit to the \emph{Efficient Global
Optimization} (EGO) algorithm. An empirical comparison is carried out between
our criterion and \emph{expected improvement}, one of the reference criteria in
the literature. Experimental results indicate major evaluation savings over
EGO. Finally, the method, which we call IAGO (for Informational Approach to
Global Optimization) is extended to robust optimization problems, where both
the factors to be tuned and the function evaluations are corrupted by noise.
Comment: Accepted for publication in the Journal of Global Optimization (this
is the revised version, with additional details on computational problems and
some grammatical changes).
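For reference, the expected-improvement criterion that IAGO is compared against can be sketched as follows. The formula is standard; this implementation and its names are ours, assuming a Kriging posterior mean `mu` and standard deviation `sigma` at the candidate points, for minimization.

```python
import numpy as np
from math import erf, sqrt, pi

def expected_improvement(mu, sigma, f_best):
    """Expected improvement (for minimization) at candidate points.

    mu, sigma : Kriging posterior mean and standard deviation
    f_best    : best observed function value so far
    EI = (f_best - mu) * Phi(u) + sigma * phi(u),  u = (f_best - mu) / sigma,
    with Phi and phi the standard normal CDF and density.
    """
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    ei = np.zeros_like(mu)
    pos = sigma > 0                      # EI is zero where there is no uncertainty
    u = (f_best - mu[pos]) / sigma[pos]
    Phi = 0.5 * (1.0 + np.array([erf(v / sqrt(2)) for v in u]))
    phi = np.exp(-0.5 * u ** 2) / sqrt(2 * pi)
    ei[pos] = (f_best - mu[pos]) * Phi + sigma[pos] * phi
    return np.maximum(ei, 0.0)
```

EGO evaluates the function next wherever this criterion is largest; the minimizer-entropy criterion of the paper instead measures the expected information gain about the location of the global minimizer.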
Optimal control of partially observable linear quadratic systems with asymmetric observation errors
This paper deals with the optimal quadratic control problem for non-Gaussian discrete-time stochastic systems. Our main result gives explicit solutions to the optimal quadratic control problem for partially observable dynamic linear systems with asymmetric observation errors. For this purpose, an asymmetric version of the Kalman filter, based on asymmetric least squares estimation, is used. We illustrate the applicability of our approach with numerical results.
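As an illustration of the asymmetric least squares idea underlying such a filter, here is a sketch (ours, not the paper's filter) of expectile estimation for a scalar location, which weights positive and negative errors asymmetrically:

```python
import numpy as np

def expectile(y, tau=0.5, n_iter=50):
    """Asymmetric least squares (expectile) location estimate.

    Minimizes sum_i w_i * (y_i - m)^2 with w_i = tau if y_i > m,
    else 1 - tau, via iteratively reweighted averaging.
    tau = 0.5 recovers the ordinary mean; other tau values tilt the
    estimate toward the upper or lower tail (illustrative sketch).
    """
    y = np.asarray(y, float)
    m = y.mean()
    for _ in range(n_iter):
        w = np.where(y > m, tau, 1.0 - tau)   # asymmetric squared-error weights
        m = np.average(y, weights=w)          # fixed point is the tau-expectile
    return m
```

Replacing the symmetric least-squares step inside a Kalman filter with such an asymmetric criterion is the general idea; the paper's construction is more involved.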
Optimal LQG Control Across a Packet-Dropping Link
We examine optimal Linear Quadratic Gaussian control for a system in which communication between the sensor (the output of the plant) and the controller occurs across a packet-dropping link. We extend the familiar LQG separation principle to this problem, which allows us to solve it using a standard LQR state-feedback design, along with an optimal algorithm for propagating and using the information across the unreliable link. We present one such optimal algorithm, which consists of a Kalman filter at the sensor side of the link and a switched linear filter at the controller side. Our design does not assume any statistical model of the packet-drop events, and is thus optimal for an arbitrary packet-drop pattern. Further, the solution is appealing from a practical point of view because it can be implemented as a small modification of an existing LQG control design.
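The switched linear filter on the controller side can be sketched as follows (illustrative; the variable names and interface are our assumptions, not the paper's). When a packet arrives it carries the sensor-side Kalman estimate; when it is dropped, the controller propagates its previous estimate open-loop through the plant model:

```python
import numpy as np

def controller_estimate(x_hat_prev, u_prev, packet, A, B):
    """Controller-side switched estimator for an unreliable sensor link.

    packet : the sensor-side Kalman estimate if received, else None
    A, B   : plant dynamics x' = A x + B u
    """
    if packet is not None:
        return packet                      # use the received KF estimate
    return A @ x_hat_prev + B @ u_prev     # propagate across the dropped packet
```

Because propagation across drops uses only the known plant model and the applied input, no statistical model of the drop process is needed, matching the arbitrary-drop-pattern setting of the abstract.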
Multivariate Covariance Generalized Linear Models
We propose a general framework for non-normal multivariate data analysis
called multivariate covariance generalized linear models (McGLMs), designed to
handle multivariate response variables, along with a wide range of temporal and
spatial correlation structures defined in terms of a covariance link function
combined with a matrix linear predictor involving known matrices. The method is
motivated by three data examples that are not easily handled by existing
methods. The first example concerns multivariate count data, the second
involves response variables of mixed types, combined with repeated measures and
longitudinal structures, and the third involves a spatio-temporal analysis of
rainfall data. The models take non-normality into account in the conventional
way by means of a variance function, and the mean structure is modelled by
means of a link function and a linear predictor. The models are fitted using an
efficient Newton scoring algorithm based on quasi-likelihood and Pearson
estimating functions, using only second-moment assumptions. This provides a
unified approach to a wide variety of different types of response variables and
covariance structures, including multivariate extensions of repeated measures,
time series, longitudinal, spatial and spatio-temporal structures.
Comment: 21 pages, 5 figures.
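As a simplified illustration of Newton scoring with only second-moment assumptions, here is a univariate quasi-Poisson sketch (far short of the full McGLM framework; the code and names are ours): the mean is modelled through a log link, the variance function is V(mu) = mu, and the dispersion is estimated from Pearson residuals.

```python
import numpy as np

def newton_scoring_poisson(X, y, n_iter=25):
    """Newton (Fisher) scoring for a log-link count regression using only
    second-moment assumptions: mean mu = exp(X b), variance V(mu) = mu."""
    b = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ b)
        score = X.T @ (y - mu)             # quasi-score function
        info = X.T @ (mu[:, None] * X)     # expected information (log link)
        b = b + np.linalg.solve(info, score)
    # Pearson estimate of the dispersion parameter
    mu = np.exp(X @ b)
    phi = np.sum((y - mu) ** 2 / mu) / (len(y) - X.shape[1])
    return b, phi
```

The McGLM framework generalizes this pattern to multivariate responses, a covariance link function, and a matrix linear predictor, but the score/information update has the same structure.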
The generation of a dual-wavelength pulse fiber laser using a fiber Bragg grating
A stable, simple generation of a dual-wavelength pulsed fiber laser is proposed and experimentally demonstrated using a figure-eight cavity configuration. The dual-wavelength pulsed output is obtained using fiber Bragg gratings (FBGs) with two different central wavelengths, 1550 nm and 1560 nm. At a laser-diode drive current of 600 mA (27.78 dBm), stable dual-wavelength pulsed operation appears at 1550 nm and 1560 nm, with respective peak powers of -54.03 dBm and -58.00 dBm. The wavelength spacing of the spectrum is about 10 nm, while the signal-to-noise ratios (SNRs) of the two peaks are about 8.23 dB and 9.67 dB. In addition, a repetition rate of 2.878 MHz, with a corresponding pulse spacing of about 0.5 μs, is recorded.
Models for Paired Comparison Data: A Review with Emphasis on Dependent Data
Thurstonian and Bradley-Terry models are the most commonly applied models in
the analysis of paired comparison data. Since their introduction, numerous
developments have been proposed in different areas. This paper provides an
updated overview of these extensions, including how to account for object- and
subject-specific covariates and how to deal with ordinal paired comparison
data. Special emphasis is given to models for dependent comparisons. Although
these models are more realistic, their use is complicated by numerical
difficulties. We therefore concentrate on implementation issues. In particular,
a pairwise likelihood approach is explored for models for dependent paired
comparison data, and a simulation study is carried out to compare the
performance of maximum pairwise likelihood with other limited information
estimation methods. The methodology is illustrated throughout using a real data
set about university paired comparisons performed by students.
Comment: Published in Statistical Science (http://www.imstat.org/sts/) by the
Institute of Mathematical Statistics (http://www.imstat.org), at
http://dx.doi.org/10.1214/12-STS396.
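For background, the basic Bradley-Terry model for independent comparisons can be fitted with the classical MM (Zermelo/Ford) iteration, sketched below; this is ours for illustration, and covers only the independent-comparisons baseline, not the dependent-data extensions the paper focuses on.

```python
import numpy as np

def bradley_terry(wins, n_iter=100):
    """Bradley-Terry ability estimation via the classical MM iteration.

    wins[i, j] : number of times object i beat object j.
    Under the model, P(i beats j) = p_i / (p_i + p_j); the iteration
    below is the standard majorize-minimize update for the p_i.
    """
    n = wins.shape[0]
    p = np.ones(n)
    for _ in range(n_iter):
        new = np.empty(n)
        for i in range(n):
            num = wins[i].sum()                       # total wins of object i
            den = sum((wins[i, j] + wins[j, i]) / (p[i] + p[j])
                      for j in range(n) if j != i)
            new[i] = num / den if den > 0 else p[i]
        p = new / new.sum()                           # normalize for identifiability
    return p
```

Pairwise likelihood methods for dependent comparisons replace the independence assumption in this likelihood with a product of bivariate margins, which is what the simulation study in the paper evaluates.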