Trimmed Density Ratio Estimation
Density ratio estimation is a vital tool in both the machine learning and
statistics communities. However, because the density ratio is unbounded, the
estimation procedure can be vulnerable to corrupted data points, which often
push the estimated ratio toward infinity. In this paper, we present a robust
estimator which automatically identifies and trims outliers. The proposed
estimator has a convex formulation, and the global optimum can be obtained via
subgradient descent. We analyze the parameter estimation error of this
estimator in high-dimensional settings. Experiments are conducted to verify
the effectiveness of the estimator.
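
As a concrete illustration of the approach described above, here is a minimal
sketch (not the paper's exact estimator): a log-linear ratio model
r(x) = exp(w . x) fitted by subgradient ascent on a KLIEP-style objective,
where at each step the k numerator samples with the largest current
log-ratios are trimmed. All names (trimmed_dre, k, lr) are illustrative
assumptions.

    import numpy as np

    def trimmed_dre(x_num, x_den, k, lr=0.01, n_iters=500):
        """Fit w for r(x) = exp(w @ x), trimming k suspected outliers.

        x_num: samples from the numerator density p(x), shape (n, d)
        x_den: samples from the denominator density q(x), shape (m, d)
        k:     number of numerator points trimmed at each iteration
        """
        n, d = x_num.shape
        w = np.zeros(d)
        for _ in range(n_iters):
            log_r = x_num @ w                  # current log-ratios on p-samples
            keep = np.argsort(log_r)[: n - k]  # drop the k largest ratios
            # Subgradient of a KLIEP-style objective: mean log-ratio on the
            # kept p-samples, normalized against the q-samples.
            r_den = np.exp(x_den @ w)
            grad = x_num[keep].mean(axis=0) \
                 - (r_den[:, None] * x_den).mean(axis=0) / r_den.mean()
            w += lr * grad
        return w

Trimming the points with the largest estimated ratios is the natural choice
here, since corrupted points are precisely those that push the estimated ratio
toward infinity.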
Trimming Stability Selection increases variable selection robustness
Contamination can severely distort an estimator unless the estimation
procedure is suitably robust. This is a well-known issue that has been
addressed in robust statistics; however, the relation between contamination
and distorted variable selection has rarely been considered in the literature.
Many methods for sparse model selection have been proposed, including
Stability Selection, a meta-algorithm built on top of a given variable
selection algorithm in order to immunize it against particular data
configurations. We introduce the variable selection breakdown point, which
quantifies the number of cases (or cells) that have to be contaminated in
order for no relevant variable to be detected. We show that particular outlier
configurations can completely mislead model selection and argue why even
cell-wise robust methods cannot fix this problem. We combine the variable
selection breakdown point with resampling, resulting in the Stability
Selection breakdown point, which quantifies the robustness of Stability
Selection. We propose a trimmed Stability Selection which aggregates only the
models with the lowest in-sample losses, so that, heuristically, models
computed on heavily contaminated resamples are trimmed away. An extensive
simulation study with non-robust regression and classification algorithms, as
well as with Sparse Least Trimmed Squares, reveals both the potential of our
approach to boost model selection robustness and the fragility of variable
selection with non-robust algorithms, even for an extremely small cell-wise
contamination rate.
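
To make the trimming scheme concrete, here is a minimal sketch of the
aggregation idea using the Lasso as a (non-robust) base selector; the
subsampling scheme, the trimming fraction trim_frac, and all names are
illustrative assumptions rather than the paper's exact procedure.

    import numpy as np
    from sklearn.linear_model import Lasso

    def trimmed_stability_selection(X, y, B=100, subsample=0.5,
                                    alpha=0.1, trim_frac=0.2, seed=0):
        rng = np.random.default_rng(seed)
        n, p = X.shape
        m = int(subsample * n)
        supports, losses = [], []
        for _ in range(B):
            idx = rng.choice(n, size=m, replace=False)
            model = Lasso(alpha=alpha).fit(X[idx], y[idx])
            supports.append(model.coef_ != 0)
            resid = y[idx] - model.predict(X[idx])
            losses.append(np.mean(resid ** 2))  # in-sample loss
        # Aggregate only the resamples with the lowest in-sample losses:
        # heavily contaminated resamples should produce large losses and
        # are heuristically trimmed away.
        keep = np.argsort(losses)[: int((1 - trim_frac) * B)]
        return np.mean([supports[i] for i in keep], axis=0)  # selection frequencies

As in plain Stability Selection, variables whose selection frequency exceeds a
chosen threshold (say 0.6) would then be reported.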
Unconventional machine learning of genome-wide human cancer data
Recent advances in high-throughput genomic technologies coupled with
exponential increases in computer processing and memory have allowed us to
interrogate the complex aberrant molecular underpinnings of human disease from
a genome-wide perspective. While the deluge of genomic information is expected
to increase, a bottleneck in conventional high-performance computing is rapidly
approaching. Inspired in part by recent advances in physical quantum
processors, we evaluated several unconventional machine learning (ML)
strategies on actual human tumor data. Here we show for the first time the
efficacy of multiple annealing-based ML algorithms for classification of
high-dimensional, multi-omics human cancer data from the Cancer Genome Atlas.
To assess algorithm performance, we compared these classifiers to a variety of
standard ML methods. Our results indicate the feasibility of using
annealing-based ML to provide competitive classification of human cancer types
and associated molecular subtypes, as well as superior performance with
smaller training datasets, thus providing compelling empirical evidence for
the potential future application of unconventional computing architectures in
the biomedical sciences.
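
For readers unfamiliar with annealing-based classifiers, here is a minimal
classical sketch of the general flavor: a QBoost-style ensemble in which
binary inclusion weights over weak classifiers are chosen by simulated
annealing. This is a stand-in for, not a reproduction of, the hardware
annealers and exact algorithms evaluated in the paper, and all names are
illustrative.

    import numpy as np

    def anneal_ensemble(H, y, lam=0.05, n_steps=5000, t0=1.0, seed=0):
        """H: (n, K) outputs of K weak classifiers in {-1, +1};
        y: labels in {-1, +1}. Returns inclusion weights w in {0, 1}^K."""
        rng = np.random.default_rng(seed)
        n, K = H.shape
        w = rng.integers(0, 2, size=K)

        def energy(w):
            # Regularized squared loss; quadratic (QUBO-like) in the binary w.
            return np.mean((H @ w / K - y) ** 2) + lam * w.sum()

        e = energy(w)
        for step in range(n_steps):
            t = t0 * (1 - step / n_steps) + 1e-6   # linear cooling schedule
            j = rng.integers(K)
            w[j] ^= 1                              # propose flipping one weight
            e_new = energy(w)
            if e_new < e or rng.random() < np.exp((e - e_new) / t):
                e = e_new                          # accept the move
            else:
                w[j] ^= 1                          # revert the flip
        return w

On a quantum annealer, a binary quadratic objective of this kind is typically
encoded as a QUBO and minimized in hardware instead of by the classical loop
above.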