Nonparametric estimation of varying coefficient dynamic panel models
This is the publisher's version, also available electronically from http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=2059888&fileId=S0266466608080523. We suggest using a class of semiparametric dynamic panel data models to capture individual variations in panel data. The model assumes linearity in some continuous/discrete variables that can be exogenous/endogenous and allows for nonlinearity in other weakly exogenous variables. We propose a nonparametric generalized method of moments (NPGMM) procedure to estimate the functional coefficients, and we establish the consistency and asymptotic normality of the resulting estimators. Published in Econometric Theory, 24, 2008, 1321–1342. doi:10.1017/S0266466608080523
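The functional-coefficient idea above can be sketched in a few lines: at each evaluation point of the smoothing variable, the moment conditions are kernel-weighted and solved by GMM. The following is an illustrative local-constant sketch, not the paper's exact estimator; the function name, the Gaussian kernel, and the identity GMM weighting matrix are all assumptions made for brevity:

```python
import numpy as np

def npgmm_coef(y, X, Z, W, z0, h):
    """Local-constant NPGMM estimate of the functional coefficient beta(z0).

    Illustrative model: y_i = x_i' beta(z_i) + u_i, with instruments w_i
    satisfying E[w_i u_i | z_i] = 0.  Shapes (all names are illustrative):
    y : (n,) outcomes; X : (n, p) regressors (possibly endogenous);
    Z : (n,) smoothing variable; W : (n, q) instruments, q >= p;
    z0 : evaluation point; h : bandwidth.
    """
    # Gaussian kernel weights centered at the evaluation point z0
    k = np.exp(-0.5 * ((Z - z0) / h) ** 2)
    # Kernel-weighted moment matrices: G = sum_i k_i w_i x_i', g = sum_i k_i w_i y_i
    G = (W * k[:, None]).T @ X
    g = (W * k[:, None]).T @ y
    # One-step GMM with identity weight: beta(z0) = (G'G)^{-1} G'g
    return np.linalg.solve(G.T @ G, G.T @ g)
```

In the just-identified exogenous case (W = X) this reduces to a kernel-weighted least-squares fit at each point z0.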
Wavelet methods in statistics: Some recent developments and their applications
The development of wavelet theory has in recent years spawned applications in
signal processing, in fast algorithms for integral transforms, and in image and
function representation methods. This last application has stimulated interest
in wavelet applications to statistics and to the analysis of experimental data,
with many successes in the efficient analysis, processing, and compression of
noisy signals and images. This is a selective review article that attempts to
synthesize some recent work on "nonlinear" wavelet methods in nonparametric
curve estimation and their role in a variety of applications. After a short
introduction to wavelet theory, we discuss in detail several wavelet shrinkage
and wavelet thresholding estimators, scattered in the literature and developed,
under more or less standard settings, for density estimation from i.i.d.
observations or to denoise data modeled as observations of a signal with
additive noise. Most of these methods are fitted into the general concept of
regularization with appropriately chosen penalty functions. A narrow range of
applications in major areas of statistics is also discussed, such as partially
linear regression models and functional index models. The usefulness of all
these methods is illustrated by means of simulations and practical examples.
Comment: Published at http://dx.doi.org/10.1214/07-SS014 in Statistics
Surveys (http://www.i-journals.org/ss/) by the Institute of Mathematical
Statistics (http://www.imstat.org)
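The wavelet shrinkage and thresholding estimators surveyed above can be illustrated with a minimal one-level Haar soft-thresholding denoiser. This is a sketch under simplifying assumptions (known Gaussian noise level sigma, the Donoho–Johnstone universal threshold, a single decomposition level); practical estimators use deeper decompositions and data-driven thresholds:

```python
import numpy as np

def haar_denoise(x, sigma):
    """One-level Haar wavelet soft-threshold denoiser (illustrative sketch).

    x : noisy signal of even length n, modeled as signal + N(0, sigma^2) noise.
    Uses the universal threshold sigma * sqrt(2 log n).
    """
    n = x.size
    # Orthonormal one-level Haar transform: approximation and detail coefficients
    a = (x[0::2] + x[1::2]) / np.sqrt(2)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)
    # Soft-threshold the detail coefficients only (they carry most of the noise)
    t = sigma * np.sqrt(2.0 * np.log(n))
    d = np.sign(d) * np.maximum(np.abs(d) - t, 0.0)
    # Inverse Haar transform
    out = np.empty_like(x, dtype=float)
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out
```

Soft thresholding shrinks every detail coefficient toward zero by the threshold t, which is exactly the closed-form solution of an l1-penalized least-squares problem, tying the method to the penalized-regularization view taken in the review.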
Tensor Networks for Dimensionality Reduction and Large-Scale Optimizations. Part 2 Applications and Future Perspectives
Part 2 of this monograph builds on the introduction to tensor networks and
their operations presented in Part 1. It focuses on tensor network models for
super-compressed higher-order representation of data/parameters and related
cost functions, while providing an outline of their applications in machine
learning and data analytics. A particular emphasis is on the tensor train (TT)
and Hierarchical Tucker (HT) decompositions, and their physically meaningful
interpretations which reflect the scalability of the tensor network approach.
Through a graphical approach, we also elucidate how, by virtue of the
underlying low-rank tensor approximations and sophisticated contractions of
core tensors, tensor networks have the ability to perform distributed
computations on otherwise prohibitively large volumes of data/parameters,
thereby alleviating or even eliminating the curse of dimensionality. The
usefulness of this concept is illustrated over a number of applied areas,
including generalized regression and classification (support tensor machines,
canonical correlation analysis, higher order partial least squares),
generalized eigenvalue decomposition, Riemannian optimization, and in the
optimization of deep neural networks. Part 1 and Part 2 of this work can be
used either as stand-alone separate texts, or indeed as a conjoint
comprehensive review of the exciting field of low-rank tensor networks and
tensor decompositions. Comment: 232 pages
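The tensor train (TT) format emphasized above can be computed by the standard TT-SVD sweep: reshape the tensor into a matrix, take a truncated SVD, keep the left factor as a core, and carry the remainder to the next mode. A minimal sketch follows; the absolute singular-value cutoff eps is a simplification of the usual relative-error truncation rule:

```python
import numpy as np

def tt_svd(T, eps=1e-10):
    """Tensor-train decomposition by sequential truncated SVDs (TT-SVD sketch).

    Returns a list of 3-way cores G[k] of shape (r_{k-1}, n_k, r_k),
    with boundary ranks r_0 = r_d = 1.  Singular values below eps are dropped.
    """
    dims = T.shape
    cores = []
    r = 1
    M = T.reshape(dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        rank = max(1, int(np.sum(s > eps)))          # truncated TT-rank
        cores.append(U[:, :rank].reshape(r, dims[k], rank))
        # Carry the remainder diag(s) Vt to the next unfolding
        M = (s[:rank, None] * Vt[:rank]).reshape(rank * dims[k + 1], -1)
        r = rank
    cores.append(M.reshape(r, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into the full tensor (for checking)."""
    M = cores[0]                                      # shape (1, n_0, r_1)
    for G in cores[1:]:
        M = np.tensordot(M, G, axes=([-1], [0]))
    return M.reshape([c.shape[1] for c in cores])
```

For a tensor with low TT-ranks, storage drops from prod(n_k) entries to sum(r_{k-1} n_k r_k), which is the "super-compression" the monograph refers to.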
Program Evaluation and Causal Inference with High-Dimensional Data
In this paper, we provide efficient estimators and honest confidence bands
for a variety of treatment effects including local average (LATE) and local
quantile treatment effects (LQTE) in data-rich environments. We can handle very
many control variables, endogenous receipt of treatment, heterogeneous
treatment effects, and function-valued outcomes. Our framework covers the
special case of exogenous receipt of treatment, either conditional on controls
or unconditionally as in randomized control trials. In the latter case, our
approach produces efficient estimators and honest bands for (functional)
average treatment effects (ATE) and quantile treatment effects (QTE). To make
informative inference possible, we assume that key reduced form predictive
relationships are approximately sparse. This assumption allows the use of
regularization and selection methods to estimate those relations, and we
provide methods for post-regularization and post-selection inference that are
uniformly valid (honest) across a wide range of models. We show that a key
ingredient enabling honest inference is the use of orthogonal or doubly robust
moment conditions in estimating certain reduced form functional parameters. We
illustrate the use of the proposed methods with an application to estimating
the effect of 401(k) eligibility and participation on accumulated assets. Comment: 118 pages, 3 tables, 11 figures, includes supplementary appendix.
This version corrects some typos in Example 2 of the published version.
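The orthogonal (doubly robust) moment conditions highlighted above can be illustrated in the partially linear model y = d*theta + g(x) + u: residualize both the outcome and the treatment on the controls out-of-fold, then regress residual on residual. This is a minimal sketch, not the paper's procedure; ridge regression stands in for the lasso-based nuisance fits, and all names are illustrative:

```python
import numpy as np

def double_ml_plm(y, d, X, n_folds=2, lam=1.0):
    """Cross-fit orthogonal estimate of theta in y = d*theta + g(X) + u.

    The nuisance regressions E[y|X] and E[d|X] are fit by ridge (a stand-in
    for the sparse/post-selection fits in the paper) on out-of-fold data,
    and theta solves the Neyman-orthogonal residual-on-residual moment.
    """
    n, p = X.shape
    folds = np.arange(n) % n_folds
    ry = np.empty(n)                     # outcome residuals y - E[y|X]
    rd = np.empty(n)                     # treatment residuals d - E[d|X]
    for f in range(n_folds):
        tr, te = folds != f, folds == f
        A = X[tr].T @ X[tr] + lam * np.eye(p)
        ry[te] = y[te] - X[te] @ np.linalg.solve(A, X[tr].T @ y[tr])
        rd[te] = d[te] - X[te] @ np.linalg.solve(A, X[tr].T @ d[tr])
    # Orthogonal score: E[(ry - theta * rd) * rd] = 0
    return (rd @ ry) / (rd @ rd)
```

Because the score is orthogonal to the nuisance functions, first-order errors in the ridge (or lasso) fits do not bias theta, which is the mechanism behind the honest inference described in the abstract.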