High Dimensional Semiparametric Scale-Invariant Principal Component Analysis
We propose a new high dimensional semiparametric principal component analysis
(PCA) method, named Copula Component Analysis (COCA). The semiparametric model
assumes that, after unspecified marginally monotone transformations, the
distributions are multivariate Gaussian. COCA improves upon PCA and sparse PCA
in three aspects: (i) It is robust to modeling assumptions; (ii) It is robust
to outliers and data contamination; (iii) It is scale-invariant and yields more
interpretable results. We prove that the COCA estimators obtain fast estimation
rates and are feature selection consistent when the dimension is nearly
exponentially large relative to the sample size. Careful experiments confirm
that COCA outperforms sparse PCA on both synthetic and real-world datasets.
Comment: Accepted in IEEE Transactions on Pattern Analysis and Machine
Intelligence (TPAMI).
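The latent-Gaussian-copula idea behind COCA can be sketched in a few lines: estimate the correlation matrix of the latent Gaussian from Spearman's rank correlations via the standard sine moment transform, then take leading eigenvectors. This is a minimal illustration only, not the authors' full COCA procedure (which additionally enforces sparsity); the function name and interface are hypothetical.

```python
import numpy as np
from scipy.stats import spearmanr

def copula_pca(X, k=1):
    """Rank-based PCA sketch (assumes X has >= 3 columns): estimate the
    latent Gaussian copula correlation from Spearman's rho, then
    eigendecompose. Not the paper's exact (sparse) estimator."""
    rho, _ = spearmanr(X)                  # d x d matrix of pairwise Spearman correlations
    R = 2.0 * np.sin(np.pi / 6.0 * rho)    # moment transform: Pearson r = 2 sin(pi * rho_s / 6)
    np.fill_diagonal(R, 1.0)
    vals, vecs = np.linalg.eigh(R)
    order = np.argsort(vals)[::-1]         # sort eigenvalues in decreasing order
    return vals[order][:k], vecs[:, order[:k]]
```

Because only the ranks of the data enter, the estimate is invariant to monotone marginal transformations and insensitive to moderate outliers, which is the robustness the abstract refers to.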
ECA: High Dimensional Elliptical Component Analysis in non-Gaussian Distributions
We present a robust alternative to principal component analysis (PCA) ---
called elliptical component analysis (ECA) --- for analyzing high dimensional,
elliptically distributed data. ECA estimates the eigenspace of the covariance
matrix of the elliptical data. To cope with heavy-tailed elliptical
distributions, a multivariate rank statistic is exploited. At the model-level,
we consider two settings: either that the leading eigenvectors of the
covariance matrix are non-sparse or that they are sparse. Methodologically, we
propose ECA procedures for both non-sparse and sparse settings. Theoretically,
we provide both non-asymptotic and asymptotic analyses quantifying the
theoretical performances of ECA. In the non-sparse setting, we show that ECA's
performance is highly related to the effective rank of the covariance matrix.
In the sparse setting, the results are twofold: (i) We show that the sparse ECA
estimator based on a combinatoric program attains the optimal rate of
convergence; (ii) Based on some recent developments in estimating sparse
leading eigenvectors, we show that a computationally efficient sparse ECA
estimator attains the optimal rate of convergence under a suboptimal scaling.
Comment: to appear in JASA (T&M).
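A simplified version of the rank-based step can be sketched with pairwise Kendall's tau: for elliptical distributions, sin(pi * tau / 2) consistently estimates the underlying correlation even when moments are heavy-tailed, and eigenvectors are then taken from the transformed matrix. The paper's multivariate Kendall's tau statistic and sparse procedures differ; this sketch and its function names are illustrative assumptions.

```python
import numpy as np
from scipy.stats import kendalltau

def kendall_correlation(X):
    """Pairwise Kendall's tau, mapped to the underlying correlation of an
    elliptical distribution via the sine transform sin(pi * tau / 2)."""
    d = X.shape[1]
    R = np.eye(d)
    for i in range(d):
        for j in range(i + 1, d):
            tau, _ = kendalltau(X[:, i], X[:, j])
            R[i, j] = R[j, i] = np.sin(np.pi / 2.0 * tau)
    return R

def eca_leading_eigvec(X):
    """Leading eigenvector of the rank-based correlation estimate."""
    R = kendall_correlation(X)
    vals, vecs = np.linalg.eigh(R)
    return vecs[:, np.argmax(vals)]
```

Since ranks are unaffected by the heaviness of the tails, this estimate remains stable for t-distributed or otherwise heavy-tailed elliptical data where the sample covariance breaks down.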
A Direct Estimation of High Dimensional Stationary Vector Autoregressions
The vector autoregressive (VAR) model is a powerful tool in modeling complex
time series and has been exploited in many fields. However, fitting high
dimensional VAR model poses some unique challenges: On one hand, the
dimensionality, caused by modeling a large number of time series and higher
order autoregressive processes, is usually much higher than the time series
length; On the other hand, the temporal dependence structure in the VAR model
gives rise to extra theoretical challenges. In high dimensions, one popular
approach is to assume the transition matrix is sparse and fit the VAR model
using the "least squares" method with a lasso-type penalty. In this manuscript,
we propose an alternative way of estimating the VAR model. The main idea is,
via exploiting the temporal dependence structure, to formulate the estimation
problem as a linear program. The proposed approach has an immediate advantage
over the lasso-type estimators: the estimation equation can be
decomposed into multiple sub-equations and accordingly can be efficiently
solved in a parallel fashion. In addition, our method brings new theoretical
insights into the VAR model analysis. So far the theoretical results developed
in high dimensions (e.g., Song and Bickel (2011) and Kock and Callot (2012))
mainly pose assumptions on the design matrix of the formulated regression
problems. Such conditions are indirect about the transition matrices and not
transparent. In contrast, our results show that the operator norm of the
transition matrices plays an important role in estimation accuracy. We provide
explicit rates of convergence for both estimation and prediction. In addition,
we provide thorough experiments on both synthetic and real-world equity data to
show that there are empirical advantages of our method over the lasso-type
estimators in both parameter estimation and forecasting.
Comment: 36 pages, 3 figures.
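The column-by-column decomposition described above can be sketched as a Dantzig-selector-style linear program built from the Yule-Walker moment equations (lag-1 autocovariance = lag-0 autocovariance times the transposed transition matrix). This is a schematic reading of the approach, not the paper's exact estimator; the function names and the tuning parameter `lam` are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

def dantzig_column(S0, b, lam):
    """Solve min ||beta||_1  s.t.  ||S0 @ beta - b||_inf <= lam
    as a linear program with the split beta = u - v, u, v >= 0."""
    d = S0.shape[0]
    c = np.ones(2 * d)                        # sum(u) + sum(v) equals ||beta||_1 at the optimum
    A_ub = np.block([[S0, -S0], [-S0, S0]])   # encodes the two-sided sup-norm constraint
    b_ub = np.concatenate([lam + b, lam - b])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, None), method="highs")
    u, v = res.x[:d], res.x[d:]
    return u - v

def var_transition_estimate(X, lam=0.1):
    """Estimate A in x_t = A x_{t-1} + noise. Each column problem is
    independent, so the loop below is embarrassingly parallel."""
    T, d = X.shape
    Xc = X - X.mean(axis=0)
    S0 = Xc[:-1].T @ Xc[:-1] / (T - 1)        # lag-0 sample autocovariance
    S1 = Xc[:-1].T @ Xc[1:] / (T - 1)         # lag-1 sample autocovariance, approx S0 @ A.T
    At = np.column_stack([dantzig_column(S0, S1[:, j], lam) for j in range(d)])
    return At.T
```

Each call to `dantzig_column` touches only one column of the lag-1 autocovariance, which is exactly the parallel decomposition the abstract highlights as an advantage over lasso-type fitting.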
Sparse Median Graphs Estimation in a High Dimensional Semiparametric Model
In this manuscript a unified framework for conducting inference on complex
aggregated data in high dimensional settings is proposed. The data are assumed
to be a collection of multiple non-Gaussian realizations with underlying
undirected graphical structures. Utilizing the concept of median graphs in
summarizing the commonality across these graphical structures, a novel
semiparametric approach to modeling such complex aggregated data is provided
along with robust estimation of the median graph, which is assumed to be
sparse. The estimator is proved to be consistent in graph recovery and an upper
bound on the rate of convergence is given. Experiments on both synthetic and
real datasets are conducted to illustrate the empirical usefulness of the
proposed models and methods.
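One simple reading of a median graph, used here only to illustrate the "commonality across graphical structures" idea, is the elementwise median of the individual adjacency matrices, which for 0/1 adjacencies reduces to a majority vote on each edge. The paper's semiparametric estimator is more involved; this helper is a hypothetical sketch.

```python
import numpy as np

def median_graph(adjacency_list):
    """Elementwise median across a collection of adjacency matrices;
    with 0/1 adjacencies this is a majority vote on each edge."""
    stack = np.stack(adjacency_list)              # shape: (n_graphs, d, d)
    return (np.median(stack, axis=0) >= 0.5).astype(int)
```

An edge survives into the summary graph only if it is present in at least half of the individual graphs, which is the sense in which the median graph captures shared structure while discarding realization-specific edges.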
Challenges of Big Data Analysis
Big Data bring new opportunities to modern society and challenges to data
scientists. On one hand, Big Data hold great promises for discovering subtle
population patterns and heterogeneities that are not possible with small-scale
data. On the other hand, the massive sample size and high dimensionality of Big
Data introduce unique computational and statistical challenges, including
scalability and storage bottleneck, noise accumulation, spurious correlation,
incidental endogeneity, and measurement errors. These challenges are
distinctive and require new computational and statistical paradigms. This
article gives an overview of the salient features of Big Data and how these
features impact the paradigm shift in statistical and computational methods as
well as computing architectures. We also provide various new perspectives on
Big Data analysis and computation. In particular, we emphasize the viability of
the sparsest solution in a high-confidence set and point out that the exogeneity
assumptions made in most statistical methods for Big Data cannot be validated
due to incidental endogeneity; this can lead to wrong statistical inferences
and, consequently, wrong scientific conclusions.
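The spurious-correlation phenomenon named above is easy to demonstrate: among fully independent variables, the largest sample correlation with any fixed variable grows with the dimension, so high-dimensional screening can pick up variables with no real relationship. A small illustrative check (function name and setup are hypothetical):

```python
import numpy as np

def max_spurious_correlation(n, d, seed=0):
    """Largest absolute sample correlation between the first variable and
    the other d - 1, for fully independent Gaussian data. Grows with d
    even though the population correlations are all exactly zero."""
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, d))
    R = np.corrcoef(X, rowvar=False)
    return np.max(np.abs(R[0, 1:]))
```

With n = 50 observations, scanning a thousand independent candidates typically produces a maximal sample correlation several times larger than the typical single-pair correlation, which is the noise-accumulation and spurious-correlation effect the abstract warns about.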
Distribution-Free Tests of Independence in High Dimensions
We consider testing mutual independence among all entries in a d-dimensional
random vector based on n independent observations. We study two families of
distribution-free test statistics, which include Kendall's tau and Spearman's
rho as important examples. We show that under the null hypothesis the test
statistics of these two families converge weakly to Gumbel distributions, and
we propose tests that control the type I error in the high-dimensional setting
where d can be much larger than n. We further show that the two tests are
rate-optimal in terms of power against sparse alternatives, and outperform
competitors in simulations, especially when d is large.
Comment: to appear in Biometrika.
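A max-type rank test of this flavor can be sketched as follows: standardize each pairwise Kendall's tau (under independence, sqrt(9n/4) * tau is approximately standard normal) and reject when the largest standardized value exceeds a cutoff. For simplicity the sketch uses a conservative Bonferroni cutoff over the d(d-1)/2 pairs; the paper instead calibrates the maximum with its Gumbel limit, which is sharper. Function name and interface are illustrative assumptions.

```python
import numpy as np
from scipy.stats import kendalltau, norm

def max_tau_independence_test(X, alpha=0.05):
    """Max-type test of mutual independence from pairwise Kendall's tau.
    Under H0, sqrt(9n/4) * tau_ij ~ N(0, 1) approximately; here a
    two-sided Bonferroni cutoff over all pairs is used (conservative
    relative to the paper's Gumbel calibration)."""
    n, d = X.shape
    m = d * (d - 1) // 2                      # number of pairs tested
    z_max = 0.0
    for i in range(d):
        for j in range(i + 1, d):
            tau, _ = kendalltau(X[:, i], X[:, j])
            z_max = max(z_max, np.sqrt(9.0 * n / 4.0) * abs(tau))
    cutoff = norm.ppf(1 - alpha / (2 * m))    # Bonferroni-adjusted normal quantile
    return z_max > cutoff, z_max
```

Because the statistic depends on the data only through ranks, its null distribution does not depend on the marginal distributions, which is what makes the test distribution-free.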