Efficient and Robust Signal Detection Algorithms for the Communication Applications
Signal detection and estimation have been prevalent in signal processing and communications for many years. The relevant studies deal with the processing of information-bearing signals for the purpose of information extraction. Nevertheless, new robust and efficient signal detection and estimation techniques are still in demand, as more and more practical applications come to rely on them. In this dissertation work, we propose several novel signal detection schemes for wireless communications applications, including a source localization algorithm, a spectrum sensing method, and a normality test. The associated theory and practice in robustness, computational complexity, and overall system performance evaluation are also provided.
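Spectrum sensing, one of the applications named in this abstract, is classically approached with an energy detector: declare a signal present when the average power of the received samples exceeds a threshold. The sketch below is a generic textbook baseline, not one of the dissertation's proposed schemes; the function name, threshold, and test signal are illustrative choices.

```python
import math
import random

def energy_detect(samples, threshold):
    """Energy detector: signal declared present when the average power of the
    received samples exceeds `threshold` (a textbook spectrum-sensing baseline)."""
    return sum(s * s for s in samples) / len(samples) > threshold

random.seed(0)
# Noise-only observation (unit-variance Gaussian, average power near 1) versus
# a noisy sinusoid (average power near 1 + 2 = 3).
noise = [random.gauss(0, 1) for _ in range(1000)]
signal = [n + 2.0 * math.sin(0.1 * i) for i, n in enumerate(noise)]
print(energy_detect(noise, 1.5), energy_detect(signal, 1.5))  # → False True
```

The threshold would in practice be set from the noise power and a target false-alarm probability; 1.5 here simply sits between the two expected power levels.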
A taxonomy framework for unsupervised outlier detection techniques for multi-type data sets
The term "outlier" can generally be defined as an observation that is significantly different from the other values in a data set. Outliers may be instances of error or may indicate events. The task of outlier detection aims at identifying such outliers in order to improve the analysis of data and to discover interesting and useful knowledge about unusual events within numerous application domains. In this paper, we report on contemporary unsupervised outlier detection techniques for multiple types of data sets and provide a comprehensive taxonomy framework, together with two decision trees for selecting the most suitable technique based on the characteristics of the data set. Furthermore, we highlight the advantages, disadvantages, and performance issues of each class of outlier detection techniques under this taxonomy framework.
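As a concrete, deliberately simple instance of the "significantly different" definition above, a statistics-based detector flags points far from the sample mean. This is only one family within such a taxonomy, and the two-standard-deviation threshold is an arbitrary illustrative choice; the example also exposes a known weakness of the naive approach (a large outlier inflates the estimated spread), which motivates the more robust techniques such surveys classify.

```python
from statistics import mean, stdev

def zscore_outliers(values, threshold=2.0):
    """Flag points more than `threshold` sample standard deviations from the
    mean. A minimal statistics-based detector; richer families include
    distance-, density-, and model-based methods."""
    mu, sigma = mean(values), stdev(values)
    return [x for x in values if abs(x - mu) / sigma > threshold]

data = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 42.0]
print(zscore_outliers(data))  # → [42.0]
```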
Statistical Inference for Structured High-dimensional Models
High-dimensional statistical inference is a newly emerged direction of statistical science in the 21st century. Its importance is due to the increasing dimensionality and complexity of the models needed to process and understand modern real-world data. The main idea that makes meaningful inference about such models possible is to assume a suitable lower-dimensional underlying structure, or a low-dimensional approximation for which the error can be reasonably controlled. Several types of such structures have recently been introduced, including sparse high-dimensional regression, sparse and/or low-rank matrix models, matrix completion models, dictionary learning, network models (stochastic block models, mixed membership models), and more. The workshop focused on recent developments in structured sequence and regression models, matrix and tensor estimation, robustness, statistical learning in complex settings, network data, and topic models.
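Several of the sparse-structure models listed above (sparse regression, dictionary learning) share one computational primitive: the soft-thresholding operator, which shrinks coefficients toward zero and sets small ones exactly to zero, producing the sparsity the low-dimensional structure assumes. A minimal sketch, not tied to any particular method from the workshop:

```python
def soft_threshold(z, lam):
    """Soft-thresholding operator S(z, lam): the proximal map of the L1 penalty,
    the building block of lasso-type sparse estimators."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

print([soft_threshold(z, 1.0) for z in [-2.5, -0.3, 0.0, 0.7, 3.0]])
# → [-1.5, 0.0, 0.0, 0.0, 2.0]
```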
Recommended from our members
Covariate-assisted ranking and screening for large-scale two-sample inference
Two-sample multiple testing has a wide range of applications. The conventional practice first reduces the original observations to a vector of p-values and then chooses a cutoff to adjust for multiplicity. However, this data reduction step could cause significant loss of information and thus lead to suboptimal testing procedures. We introduce a new framework for two-sample multiple testing by incorporating a carefully constructed auxiliary variable in inference to improve the power. A data-driven multiple-testing procedure is developed by employing a covariate-assisted ranking and screening (CARS) approach that optimally combines the information from both the primary and the auxiliary variables. The proposed CARS procedure is shown to be asymptotically valid and optimal for false discovery rate control. The procedure is implemented in the R package CARS. Numerical results confirm the effectiveness of CARS in false discovery rate control and show that it achieves substantial power gain over existing methods. CARS is also illustrated through an application to the analysis of a satellite imaging data set for supernova detection.
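The "conventional practice" this abstract contrasts with, reducing the data to p-values and choosing a multiplicity-adjusted cutoff, is typically the Benjamini-Hochberg step-up procedure for false discovery rate control. Below is a sketch of that baseline, not of CARS itself (which, per the abstract, is available in the R package CARS):

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up procedure: reject the k hypotheses with the
    smallest p-values, where k is the largest rank r such that
    p_(r) <= alpha * r / m. Returns the indices of rejected hypotheses."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    k = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= alpha * rank / m:
            k = rank
    return sorted(order[:k])

pvals = [0.001, 0.008, 0.039, 0.041, 0.20, 0.74]
print(benjamini_hochberg(pvals, alpha=0.05))  # → [0, 1]
```

CARS's point is precisely that this p-value-only reduction discards information; an auxiliary covariate can re-rank the hypotheses and recover power.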
Approaches for Outlier Detection in Sparse High-Dimensional Regression Models
Modern regression studies often encompass a very large number of potential predictors,
possibly larger than the sample size, and sometimes growing with the sample
size itself. This increases the chances that a substantial portion of the predictors
is redundant, as well as the risk of data contamination. Tackling these problems is
of utmost importance to facilitate scientific discoveries, since model estimates are
highly sensitive both to the choice of predictors and to the presence of outliers. In
this thesis, we contribute to this area considering the problem of robust model selection
in a variety of settings, where outliers may arise both in the response and
the predictors. Our proposals simplify model interpretation, guarantee predictive
performance, and allow us to study and control the influence of outlying cases on
the fit.
First, we consider the co-occurrence of multiple mean-shift and variance-inflation
outliers in low-dimensional linear models. We rely on robust estimation techniques
to identify outliers of each type, exclude mean-shift outliers, and use restricted
maximum likelihood estimation to down-weight and accommodate variance-inflation
outliers into the model fit. Second, we extend our setting to high-dimensional linear
models. We show that mean-shift and variance-inflation outliers can be modeled as
additional fixed and random components, respectively, and evaluated independently.
Specifically, we perform feature selection and mean-shift outlier detection through
a robust class of nonconcave penalization methods, and variance-inflation outlier
detection through the penalization of the restricted posterior mode. The resulting
approach satisfies a robust oracle property for feature selection in the presence of
data contamination (which allows the number of features to increase exponentially
with the sample size) and detects truly outlying cases of each type with asymptotic
probability one. This provides an optimal trade-off between a high breakdown point
and efficiency. Third, focusing on high-dimensional linear models affected by
mean-shift outliers, we develop a general framework in which L0-constraints coupled with
mixed-integer programming techniques are used to perform simultaneous feature
selection and outlier detection with provably optimal guarantees. In particular,
we provide necessary and sufficient conditions for a robustly strong oracle property,
where again the number of features can increase exponentially with the sample size,
and prove optimality for parameter estimation and the resulting breakdown point.
Finally, we consider generalized linear models and rely on logistic slippage to perform
outlier detection and removal in binary classification. Here we use L0-constraints
and mixed-integer conic programming techniques to solve the underlying double
combinatorial problem of feature selection and outlier detection, and the framework
allows us again to pursue optimality guarantees.
For all the proposed approaches, we also provide computationally lean heuristic
algorithms, tuning procedures, and diagnostic tools which help to guide the analysis.
We consider several real-world applications, including the study of the relationships
between childhood obesity and the human microbiome, and of the main drivers of
honey bee loss. All methods developed and data used, as well as the source code to
replicate our analyses, are publicly available.
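The mean-shift outlier model recurring through this abstract writes the response as y_i = x_i^T beta + gamma_i + e_i, where gamma_i is nonzero only for outlying cases. The toy sketch below conveys that idea with a crude refit-and-threshold loop on a simple line fit; it is emphatically not the thesis's nonconcave-penalization or mixed-integer approach, and the cutoff, iteration count, and data are illustrative choices.

```python
def fit_line(xs, ys, ws):
    """Weighted least squares for y = a + b*x (weights in {0, 1} here)."""
    sw = sum(ws)
    mx = sum(w * x for w, x in zip(ws, xs)) / sw
    my = sum(w * y for w, y in zip(ws, ys)) / sw
    b = (sum(w * (x - mx) * (y - my) for w, x, y in zip(ws, xs, ys))
         / sum(w * (x - mx) ** 2 for w, x in zip(ws, xs)))
    return my - b * mx, b

def meanshift_outliers(xs, ys, cutoff=2.5, iters=5):
    """Alternate between refitting on currently 'clean' points and flagging
    points whose residual exceeds cutoff times the residual scale; flagged
    indices are the estimated mean-shift outliers."""
    ws = [1.0] * len(xs)
    for _ in range(iters):
        a, b = fit_line(xs, ys, ws)
        res = [y - (a + b * x) for x, y in zip(xs, ys)]
        scale = max((sum(w * r * r for w, r in zip(ws, res)) / sum(ws)) ** 0.5,
                    1e-8)  # floor avoids flagging float noise on exact fits
        ws = [0.0 if abs(r) > cutoff * scale else 1.0 for r in res]
    return [i for i, w in enumerate(ws) if w == 0.0]

xs = list(range(10))
ys = [2 * x + 1 for x in xs]
ys[4] += 25  # inject one mean-shift outlier
print(meanshift_outliers(xs, ys))  # → [4]
```

Equivalently, the thesis's formulation treats the gamma_i as extra (penalized or L0-constrained) parameters estimated jointly with beta, which is what admits the oracle and breakdown-point guarantees described above.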