Foundational principles for large scale inference: Illustrations through correlation mining
When can reliable inference be drawn in the "Big Data" context? This paper
presents a framework for answering this fundamental question in the context of
correlation mining, with implications for general large scale inference. In
large scale data applications like genomics, connectomics, and eco-informatics, the dataset is often variable-rich but sample-starved: a regime where the number of acquired samples (statistical replicates) is far smaller than the number of observed variables (genes, neurons, voxels, or chemical constituents). Much recent work has focused on understanding the computational complexity of proposed methods for "Big Data." Sample complexity, however, has received comparatively little attention, especially in the setting where the sample size is fixed and the dimension grows without bound. To
address this gap, we develop a unified statistical framework that explicitly
quantifies the sample complexity of various inferential tasks. Sampling regimes
can be divided into several categories: 1) the classical asymptotic regime
where the variable dimension is fixed and the sample size goes to infinity; 2)
the mixed asymptotic regime where both variable dimension and sample size go to
infinity at comparable rates; 3) the purely high dimensional asymptotic regime
where the variable dimension goes to infinity and the sample size is fixed.
Each regime has its niche, but only the last applies to exa-scale data dimensions. We illustrate this high dimensional framework for the problem of correlation mining, where it is the matrix of pairwise and partial correlations among the variables that is of interest. We demonstrate various regimes of correlation mining based on the unifying perspective of high dimensional learning rates and sample complexity for different structured covariance models and different inference tasks.
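As an illustration of the sample-starved regime discussed in this abstract, the following minimal Python sketch screens the sample correlation matrix for large pairwise correlations when the number of variables far exceeds the number of samples. The synthetic Gaussian data and the threshold of 0.7 are assumptions for illustration only, not quantities taken from the paper.

```python
# Minimal sketch of correlation screening in the sample-starved regime
# (p variables, n << p samples); illustrative only, not the paper's framework.
import numpy as np

rng = np.random.default_rng(0)
n, p = 30, 2000                    # fixed sample size, large variable dimension
X = rng.standard_normal((n, p))    # synthetic data: rows = replicates, columns = variables

# Sample correlation matrix of the p variables (p x p).
R = np.corrcoef(X, rowvar=False)

# Screen for variable pairs whose sample correlation exceeds a threshold.
# In the n << p regime, spuriously large correlations appear even though the
# variables are independent here, which is why sample-complexity analysis matters.
rho = 0.7                          # illustrative threshold
iu = np.triu_indices(p, k=1)       # upper-triangular pairs (i < j)
hits = np.count_nonzero(np.abs(R[iu]) > rho)
print(f"pairs with |correlation| > {rho}: {hits} out of {iu[0].size}")
```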
Performance analysis and optimal selection of large mean-variance portfolios under estimation risk
We study the consistency of sample mean-variance portfolios of arbitrarily
high dimension that are based on Bayesian or shrinkage estimation of the input
parameters as well as weighted sampling. In an asymptotic setting where the
number of assets remains comparable in magnitude to the sample size, we characterize the estimation risk by providing deterministic equivalents of the portfolio's out-of-sample performance in terms of the underlying investment scenario. These estimates provide a means of quantifying the extent of risk underestimation and return overestimation of improved portfolio constructions beyond standard ones. As is well known for the latter, these deviations, if not corrected, lead to inaccurate and overly optimistic Sharpe-ratio-based investment decisions. Our results are based on recent
contributions in the field of random matrix theory. Along with the asymptotic
analysis, the analytical framework allows us to find bias corrections improving
on the achieved out-of-sample performance of typical portfolio constructions.
Numerical simulations validate our theoretical findings.
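As a rough illustration of the estimation-risk phenomenon described in this abstract, the sketch below compares the in-sample Sharpe ratio of an estimated mean-variance portfolio with its out-of-sample Sharpe ratio under the true parameters. The synthetic i.i.d. returns, the simple ridge-type shrinkage of the sample covariance, and the plug-in weighting rule are assumptions for illustration, not the paper's construction or its bias corrections.

```python
# Minimal sketch of in-sample vs. out-of-sample performance of a sample
# mean-variance portfolio; illustrative only, with synthetic i.i.d. returns
# and an assumed ridge-type shrinkage of the sample covariance.
import numpy as np

rng = np.random.default_rng(1)
n, p = 120, 80                       # sample size comparable to number of assets
mu_true = np.full(p, 0.001)          # assumed true mean returns
Sigma_true = 0.0004 * np.eye(p)      # assumed true covariance
X = rng.multivariate_normal(mu_true, Sigma_true, size=n)   # observed returns

mu_hat = X.mean(axis=0)
S = np.cov(X, rowvar=False)
S_shrunk = 0.9 * S + 0.1 * np.trace(S) / p * np.eye(p)     # shrinkage estimator

# Plug-in mean-variance weights (unconstrained, scaled to sum to one).
w = np.linalg.solve(S_shrunk, mu_hat)
w /= w.sum()

# The estimated (in-sample) Sharpe ratio is typically optimistic relative to
# the true (out-of-sample) one: the deviation the abstract's corrections target.
sharpe_in = (w @ mu_hat) / np.sqrt(w @ S_shrunk @ w)
sharpe_out = (w @ mu_true) / np.sqrt(w @ Sigma_true @ w)
print(f"in-sample Sharpe ~ {sharpe_in:.3f}, out-of-sample Sharpe ~ {sharpe_out:.3f}")
```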