The theory of linear prediction
Linear prediction theory has had a profound impact in the field of digital signal processing. Although the theory dates back to the early 1940s, its influence can still be seen in applications today. The theory is based on very elegant mathematics and leads to many beautiful insights into statistical signal processing. Although prediction is only a part of the more general topics of linear estimation, filtering, and smoothing, this book focuses on linear prediction. This has enabled detailed discussion of a number of issues that are normally not found in texts. For example, the theory of vector linear prediction is explained in considerable detail and so is the theory of line spectral processes. This focus and its small size make the book different from many excellent texts which cover the topic, including a few that are actually dedicated to linear prediction. There are several examples and computer-based demonstrations of the theory. Applications are mentioned wherever appropriate, but the focus is not on the detailed development of these applications.
The writing style is meant to be suitable for self-study as well as for classroom use at the senior and first-year graduate levels. The text is self-contained for readers with introductory exposure to signal processing, random processes, and the theory of matrices; a historical perspective and a detailed outline are given in the first chapter.
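The core idea the book develops, predicting a sample from a linear combination of its past values, can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the order-2 predictor, the cosine test signal, and the direct normal-equation solve (the book covers far more general and efficient machinery, such as the Levinson recursion).

```python
import math

def fit_order2_predictor(x):
    """Least-squares coefficients (a1, a2) for the one-step
    linear predictor x[n] ~ a1*x[n-1] + a2*x[n-2]."""
    # Accumulate the 2x2 normal-equation sums over n = 2..N-1.
    s11 = s12 = s22 = b1 = b2 = 0.0
    for n in range(2, len(x)):
        s11 += x[n-1] * x[n-1]
        s12 += x[n-1] * x[n-2]
        s22 += x[n-2] * x[n-2]
        b1  += x[n]   * x[n-1]
        b2  += x[n]   * x[n-2]
    det = s11 * s22 - s12 * s12
    a1 = (b1 * s22 - b2 * s12) / det
    a2 = (s11 * b2 - s12 * b1) / det
    return a1, a2

# A sampled cosine satisfies x[n] = 2*cos(w)*x[n-1] - x[n-2] exactly,
# so it is perfectly predictable from two past samples.
signal = [math.cos(0.3 * n) for n in range(200)]
a1, a2 = fit_order2_predictor(signal)
pred = a1 * signal[-2] + a2 * signal[-3]   # predict the final sample
```

On this noiseless signal the fit recovers a2 = -1 and the prediction error is at the level of floating-point round-off, a toy instance of the line-spectral processes the book treats in detail.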
Simultaneous Prediction of Actual and Average Values of Study Variable Using Stein-rule Estimators
The simultaneous prediction of the average and actual values of the study variable in a linear regression model is considered in this paper. Generally, either the ordinary least squares estimator or a Stein-rule estimator is employed to construct predictors for simultaneous prediction. In this paper, a linear combination of the ordinary least squares and Stein-rule predictors is used to construct improved predictors. Their efficiency properties are derived using small-disturbance asymptotic theory, and dominance conditions for the superiority of the predictors over one another are analyzed.
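The ingredients of such a combined predictor can be sketched as follows. The shrinkage form `(1 - k*RSS/(b'X'Xb)) * b_OLS`, the constant `k`, the weight `w`, and the data are all illustrative assumptions; the paper derives the actual predictors and their optimal dominance conditions.

```python
def ols_2param(X, y):
    """Solve the 2x2 normal equations X'X b = X'y for a two-regressor model."""
    s11 = sum(r[0]*r[0] for r in X); s12 = sum(r[0]*r[1] for r in X)
    s22 = sum(r[1]*r[1] for r in X)
    t1 = sum(r[0]*yi for r, yi in zip(X, y))
    t2 = sum(r[1]*yi for r, yi in zip(X, y))
    det = s11*s22 - s12*s12
    return [(t1*s22 - t2*s12)/det, (s11*t2 - s12*t1)/det]

# Toy design: intercept column plus one regressor.
X = [[1.0, 0.5], [1.0, 1.5], [1.0, 2.5], [1.0, 3.5], [1.0, 4.5]]
y = [1.1, 2.0, 2.9, 4.2, 4.8]

b = ols_2param(X, y)                                   # OLS estimator
fitted = [r[0]*b[0] + r[1]*b[1] for r in X]
rss = sum((yi - fi)**2 for yi, fi in zip(y, fitted))   # residual sum of squares
btxtxb = sum(fi*fi for fi in fitted)                   # b'X'Xb = ||Xb||^2
k = 0.1                                                # illustrative shrinkage constant
b_stein = [(1 - k*rss/btxtxb) * bj for bj in b]        # Stein-rule shrinkage of OLS

w = 0.5                                                # illustrative combination weight
x_new = [1.0, 5.5]
p_ols   = sum(xj*bj for xj, bj in zip(x_new, b))
p_stein = sum(xj*bj for xj, bj in zip(x_new, b_stein))
p_comb  = w*p_ols + (1 - w)*p_stein                    # combined predictor
```

The combined prediction always lies between the OLS and Stein-rule predictions; which weight is optimal is exactly what the small-disturbance analysis in the paper settles.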
Estimating Residual Faults from Code Coverage
Many reliability prediction techniques require an estimate of the number of residual faults. In this paper, a new theory is developed for using test coverage to estimate the number of residual faults. The theory is applied to a specific example with known faults, and the results agree well with the theory. The theory also justifies the use of linear extrapolation to estimate residual faults. It is further shown that establishing the amount of unreachable code is important for making a realistic residual fault estimate.
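The extrapolation idea can be sketched in a few lines: fit a line to cumulative faults found versus coverage, then extrapolate not to 100% but to the maximum coverage that reachable code permits. The data points and the 8% unreachable-code figure are invented for illustration, not taken from the paper's example.

```python
def fit_line(xs, ys):
    """Least-squares slope m and intercept c for y ~ m*x + c."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx)**2 for x in xs)
    sxy = sum((x - mx)*(y - my) for x, y in zip(xs, ys))
    m = sxy / sxx
    return m, my - m*mx

coverage = [0.40, 0.50, 0.60, 0.70, 0.80]   # fraction of code covered so far
faults   = [10, 13, 16, 19, 22]             # cumulative faults found (illustrative)
m, c = fit_line(coverage, faults)

unreachable  = 0.08                          # assumed unreachable-code fraction
max_coverage = 1.0 - unreachable             # coverage can never exceed this
total_est = m * max_coverage + c             # extrapolated total faults
residual  = total_est - faults[-1]           # estimated faults still in the code
```

Note how the unreachable fraction matters: extrapolating to 100% coverage instead of 92% would overstate the total and hence the residual fault count, which is the point the abstract makes about realistic estimates.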
Applications of a finite-dimensional duality principle to some prediction problems
Some of the most important results in prediction theory and time series analysis, for the case when finitely many values are removed from or added to the infinite past of a process, have been obtained using difficult and diverse techniques, ranging from duality in Hilbert spaces of analytic functions (Nakazi, 1984) to linear regression in statistics (Box and Tiao, 1975). We unify these results via a finite-dimensional duality lemma and elementary ideas from linear algebra. The approach reveals the inherent finite-dimensional character of many difficult prediction problems and the role of duality and biorthogonality for a finite set of random variables. The lemma is particularly useful when the number of missing values is small, such as one or two, as in the Kolmogorov and Nakazi prediction problems. Stationarity of the underlying process is not a requirement, which opens up the possibility of extending such results to nonstationary processes.
On-line predictive linear regression
We consider the on-line predictive version of the standard problem of linear regression; the goal is to predict each consecutive response given the corresponding explanatory variables and all the previous observations. We are mainly interested in prediction intervals rather than point predictions. The standard treatment of prediction intervals in linear regression analysis has two drawbacks: (1) the classical prediction intervals guarantee that the probability of error equals the nominal significance level epsilon, but this property per se does not imply that the long-run frequency of error is close to epsilon; (2) it is not suitable for the prediction of complex systems, since it assumes that the number of observations exceeds the number of parameters. We state a general result showing that in the on-line protocol the frequency of error for the classical prediction intervals does equal the nominal significance level, up to statistical fluctuations. We also describe alternative regression models in which informative prediction intervals can be found before the number of observations exceeds the number of parameters. One of these models, which assumes only that the observations are independent and identically distributed, is popular in machine learning but greatly underused in the statistical theory of regression.
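The on-line protocol and the long-run frequency property can be illustrated with a deliberately simplified sketch (not the paper's construction): at each round, predict the next i.i.d. observation with a running-mean point forecast and an interval whose half-width is the (1 - epsilon) empirical quantile of past absolute residuals, then check how often the observation falls outside. All the specifics, the Gaussian data, the warm-up length, and the quantile rule, are illustrative assumptions.

```python
import random

random.seed(0)
epsilon = 0.1
history, abs_residuals = [], []
errors, rounds = 0, 0

for t in range(2000):
    y = random.gauss(0.0, 1.0)                  # i.i.d. observations
    if len(abs_residuals) >= 20:                # warm-up before scoring intervals
        center = sum(history) / len(history)    # running-mean point prediction
        # (1 - epsilon) empirical quantile of past |residual| values
        q = sorted(abs_residuals)[int((1 - epsilon) * len(abs_residuals))]
        rounds += 1
        if abs(y - center) > q:                 # y fell outside the interval
            errors += 1
    if history:                                 # record residual for future quantiles
        abs_residuals.append(abs(y - sum(history) / len(history)))
    history.append(y)

error_rate = errors / rounds                    # long-run frequency of error
```

Over many rounds the empirical error rate hovers near the nominal level epsilon, which is the kind of frequency-of-error guarantee, rather than a mere per-round probability statement, that the abstract is concerned with.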