
    Sparse Volterra and Polynomial Regression Models: Recoverability and Estimation

    Volterra and polynomial regression models play a major role in nonlinear system identification and inference tasks. Exciting applications ranging from neuroscience to genome-wide association analysis build on these models with the additional requirement of parsimony. This requirement has high interpretative value, but unfortunately cannot be met by least-squares-based or kernel regression methods. To this end, compressed sampling (CS) approaches, already successful in linear regression settings, can offer a viable alternative. The viability of CS for sparse Volterra and polynomial models is the core theme of this work. A common sparse regression task is initially posed for the two models. Building on (weighted) Lasso-based schemes, an adaptive RLS-type algorithm is developed for sparse polynomial regressions. The identifiability of polynomial models is critically challenged by dimensionality. However, following the CS principle, when these models are sparse they can be recovered from far fewer measurements. To quantify the number of measurements sufficient for a given level of sparsity, restricted isometry properties (RIP) are investigated in commonly met polynomial regression settings, generalizing known results for their linear counterparts. The merits of the novel (weighted) adaptive CS algorithms for sparse polynomial modeling are verified through synthetic as well as real data tests for genotype-phenotype analysis. Comment: 20 pages, to appear in IEEE Trans. on Signal Processing.
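
    To make the sparse regression task above concrete, the sketch below fits a sparse second-order polynomial (Volterra-style) model with an ordinary Lasso from scikit-learn. It is not the paper's adaptive RLS-type or weighted algorithm; the dimensions, sparsity pattern and regularization weight are illustrative assumptions.

    # Minimal sketch (not the paper's adaptive RLS-type algorithm): a sparse
    # second-order polynomial regression recovered with an ordinary Lasso.
    import numpy as np
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n, p = 60, 10                        # measurements and inputs (assumed)
    X = rng.standard_normal((n, p))

    # Expand inputs into all monomials up to degree 2 (the polynomial model);
    # this gives 65 candidate terms, i.e. more terms than measurements.
    Phi = PolynomialFeatures(degree=2, include_bias=False).fit_transform(X)

    # Ground truth: only a handful of the expanded terms are active (sparsity).
    beta = np.zeros(Phi.shape[1])
    beta[[0, 3, 15, 40]] = [1.5, -2.0, 0.8, 1.2]
    y = Phi @ beta + 0.1 * rng.standard_normal(n)

    # The l1 penalty promotes parsimony; when the coefficient vector is sparse,
    # far fewer measurements than expanded terms can suffice (the CS viewpoint).
    model = Lasso(alpha=0.05).fit(Phi, y)
    print("nonzero terms recovered:", np.flatnonzero(model.coef_))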

    An alternative solution to the model structure selection problem

    An alternative solution to the model structure selection problem is introduced: a forward search is first conducted through the many possible candidate model terms, and an exhaustive all-subset model selection is then performed on the resulting model. An example is included to demonstrate that this approach leads to dynamically valid nonlinear models.
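
    A minimal sketch of the two-stage idea described above: a greedy forward search shortlists candidate terms, then an exhaustive all-subset search is run on the shortlist. Plain least squares scored with BIC stands in for the original work's model terms and validity tests, so the function names and the synthetic data are assumptions, not the authors' procedure.

    # Stage 1: forward search through the candidate terms.
    # Stage 2: exhaustive all-subset selection on the shortlisted terms.
    import itertools
    import numpy as np

    def bic(y, X_sub):
        # Bayesian information criterion for an ordinary least-squares fit.
        n = len(y)
        beta, *_ = np.linalg.lstsq(X_sub, y, rcond=None)
        rss = np.sum((y - X_sub @ beta) ** 2)
        return n * np.log(rss / n) + X_sub.shape[1] * np.log(n)

    def forward_then_all_subsets(y, X, shortlist_size=5):
        selected, remaining = [], list(range(X.shape[1]))
        while len(selected) < shortlist_size and remaining:
            best = min(remaining, key=lambda j: bic(y, X[:, selected + [j]]))
            selected.append(best)
            remaining.remove(best)
        best_subset, best_score = None, np.inf
        for k in range(1, len(selected) + 1):
            for subset in itertools.combinations(selected, k):
                score = bic(y, X[:, list(subset)])
                if score < best_score:
                    best_subset, best_score = subset, score
        return sorted(best_subset), best_score

    # Example usage on synthetic data: 12 candidate terms, 3 of them relevant.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 12))
    y = X[:, [1, 4, 7]] @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(100)
    print(forward_then_all_subsets(y, X))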

    Least absolute deviation estimation of linear econometric models: A literature review

    Econometricians generally take for granted that the error terms in econometric models are generated by distributions having a finite variance. However, the existence of error distributions with infinite variance has been known since the time of Pareto. Work by many econometricians, namely Meyer & Glauber (1964), Fama (1965) and Mandelbrot (1967), on economic data series such as prices in financial and commodity markets confirms that infinite-variance distributions exist abundantly. The distribution of firms by size, the behaviour of speculative prices and various other recent economic phenomena display similar trends. Further, econometricians generally assume that the disturbance term, which is the influence of innumerably many factors not accounted for in the model, approaches normality according to the Central Limit Theorem. Bartels (1977), however, is of the opinion that there are limit theorems that are just as likely to be relevant when considering the sum of the many components of a regression disturbance, and these lead to non-normal stable distributions characterized by infinite variance. Thus, the possibility exists that the error term follows a non-normal distribution. The Least Squares method of estimating the parameters of linear (regression) models performs well provided that the residuals (disturbances or errors) are well behaved (preferably normally or near-normally distributed and not infested with large outliers) and follow the Gauss-Markov assumptions. However, models whose disturbances are prominently non-normally distributed and contain sizeable outliers are estimated poorly by the Least Squares method. Intensive research has established that in such cases estimation by the Least Absolute Deviation (LAD) method performs well. This paper is an attempt to survey the literature on LAD estimation of single- as well as multi-equation linear econometric models. Keywords: LAD estimator; Least Absolute Deviation estimation; Minimum Absolute Deviation; L1 estimator; econometric model; robust; outliers; review of literature.
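
    As a concrete illustration of the estimator surveyed above, the sketch below poses Least Absolute Deviation estimation as the linear program "minimize the sum of |y_i - x_i'b|" and compares it with Least Squares on data whose errors have infinite variance (Cauchy). The data-generating values are assumptions made only for the demonstration.

    # LAD as a linear program: minimize sum(u + v) subject to X b + u - v = y,
    # u, v >= 0, so that u + v equals |y - X b| at the optimum.
    import numpy as np
    from scipy.optimize import linprog

    def lad_fit(X, y):
        n, p = X.shape
        c = np.concatenate([np.zeros(p), np.ones(2 * n)])
        A_eq = np.hstack([X, np.eye(n), -np.eye(n)])
        bounds = [(None, None)] * p + [(0, None)] * (2 * n)
        res = linprog(c, A_eq=A_eq, b_eq=y, bounds=bounds, method="highs")
        return res.x[:p]

    rng = np.random.default_rng(0)
    X = np.column_stack([np.ones(100), rng.standard_normal(100)])
    # Cauchy errors: infinite variance, the setting where Least Squares breaks down.
    y = X @ np.array([1.0, 2.0]) + rng.standard_cauchy(100)

    print("LAD estimate:", lad_fit(X, y))
    print("OLS estimate:", np.linalg.lstsq(X, y, rcond=None)[0])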

    Regression on manifolds: Estimation of the exterior derivative

    Collinearity and near-collinearity of predictors cause difficulties when doing regression. In these cases, variable selection becomes untenable because of mathematical issues concerning the existence and numerical stability of the regression coefficients, and interpretation of the coefficients is ambiguous because gradients are not defined. Using a differential-geometric interpretation, in which the regression coefficients are interpreted as estimates of the exterior derivative of a function, we develop a new method to do regression in the presence of collinearities. Our regularization scheme can improve estimation error, and it can be easily modified to include lasso-type regularization. These estimators also have simple extensions to the "large p, small n" context. Comment: Published at http://dx.doi.org/10.1214/10-AOS823 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
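
    The snippet below only illustrates the collinearity problem the abstract describes, not the exterior-derivative estimator developed in the paper: with two nearly collinear predictors, ordinary least-squares coefficients split almost arbitrarily between the columns, while a lasso-type penalty returns a stable, sparse fit. All data and the penalty weight are assumed for the example.

    # Illustration of collinearity (not the paper's exterior-derivative method).
    import numpy as np
    from sklearn.linear_model import Lasso, LinearRegression

    rng = np.random.default_rng(1)
    n = 200
    z = rng.standard_normal(n)
    # Two predictors that are nearly collinear: they differ only by tiny noise.
    X = np.column_stack([z, z + 1e-3 * rng.standard_normal(n)])
    y = 3.0 * z + 0.1 * rng.standard_normal(n)

    ols = LinearRegression().fit(X, y)
    lasso = Lasso(alpha=0.1).fit(X, y)
    print("OLS coefficients (unstable split between the columns):", ols.coef_)
    print("Lasso coefficients (regularized, essentially one active term):", lasso.coef_)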

    Improved Inference for the Instrumental Variables Estimator

    It is now well known that standard asymptotic inference techniques for instrumental variable estimation perform very poorly in the presence of weak instruments. Specifically, standard asymptotic techniques give spuriously small standard errors, leading investigators to accept apparently tight confidence regions which unfortunately may be very far from the true parameter of interest. We present an improved technique for inference on structural parameters based on reduced-form estimates. The "S-statistic" produces confidence regions based on a joint test of the structural hypothesis and the identification condition. The S-statistic converges to the standard asymptotic Wald statistic as identification becomes certain, has much better size properties when the instruments are weak, and may be inverted in closed form to conveniently compute confidence regions. In addition to providing improved inference for instrumental variable estimation, the technique suggested here may be useful in other applications where weak identification is important. Keywords: instrumental variables, Wald test, weak instruments.
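
    The sketch below illustrates the weak-instrument setting and the general idea of building a confidence region by inverting a test. It uses an Anderson-Rubin-style construction rather than the paper's S-statistic, and the data-generating values, the grid and the instrument strength are assumptions made only for the demonstration.

    # Weak-instrument IV: 2SLS point estimate plus a test-inversion confidence
    # region (Anderson-Rubin-style stand-in, not the S-statistic of the paper).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n, beta_true, pi = 500, 1.0, 0.2            # small pi => weakly relevant instrument
    z = rng.standard_normal(n)
    u, v = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], size=n).T
    x = pi * z + v                               # first-stage equation
    y = beta_true * x + u                        # structural equation

    # 2SLS point estimate in the just-identified case: beta_hat = (z'y) / (z'x).
    beta_2sls = (z @ y) / (z @ x)

    # Confidence region by test inversion: accept b0 whenever the instrument z is
    # statistically uncorrelated with the implied structural error y - x*b0.
    grid = np.linspace(-4.0, 4.0, 2001)
    accepted = []
    for b0 in grid:
        r = y - x * b0
        coef = (z @ r) / (z @ z)
        resid = r - coef * z
        se = np.sqrt(resid @ resid / (n - 1)) / np.sqrt(z @ z)
        if abs(coef / se) < stats.norm.ppf(0.975):
            accepted.append(b0)

    print("2SLS estimate:", beta_2sls)
    if accepted:  # the region can be wide (or unbounded) when the instrument is weak
        print(f"95% test-inversion region (truncated to grid): "
              f"[{min(accepted):.2f}, {max(accepted):.2f}]")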
