Variable selection for the multicategory SVM via adaptive sup-norm regularization
The Support Vector Machine (SVM) is a popular classification paradigm in
machine learning and has achieved great success in real applications. However,
the standard SVM cannot select variables automatically, and therefore its
solution typically utilizes all the input variables without discrimination.
This makes it difficult to identify important predictor variables, which is
often one of the primary goals in data analysis. In this paper, we propose two
novel types of regularization in the context of the multicategory SVM (MSVM)
for simultaneous classification and variable selection. The MSVM generally
requires estimation of multiple discriminating functions and applies the argmax
rule for prediction. For each individual variable, we propose to characterize
its importance by the sup-norm of its coefficient vector associated with the
different functions, and then minimize the MSVM hinge loss function subject to
a penalty on the sum of sup-norms. To further improve the sup-norm penalty, we
propose adaptive regularization, which allows different weights to be imposed on
different variables according to their relative importance. Both types of
regularization automate variable selection in the process of building
classifiers, and lead to sparse multi-classifiers with enhanced
interpretability and improved accuracy, especially for high dimensional low
sample size data. A major advantage of the sup-norm penalty is its easy
implementation via standard linear programming. Several simulated examples and
one real gene data analysis demonstrate the outstanding performance of the
adaptive sup-norm penalty in various data settings.
Comment: Published at http://dx.doi.org/10.1214/08-EJS122 in the Electronic
Journal of Statistics (http://www.i-journals.org/ejs/) by the Institute of
Mathematical Statistics (http://www.imstat.org).
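As a rough illustration of the penalty described above (a minimal sketch; the coefficient matrix and weights are illustrative, not from the paper), the sup-norm penalty collapses each variable's coefficients across the K discriminating functions into a single number and sums these, optionally with adaptive weights. Because both the hinge loss and this penalty are piecewise linear, the penalized problem can be recast as a linear program.

import numpy as np

def supnorm_penalty(B, weights=None):
    # B: (p, K) coefficient matrix; row j holds variable j's coefficients
    # across the K discriminating functions of the MSVM.
    # weights: optional length-p adaptive weights (a larger weight shrinks
    # that variable harder), e.g. inverse sup-norms from an initial fit.
    supnorms = np.max(np.abs(B), axis=1)  # sup-norm of each row
    if weights is None:
        weights = np.ones_like(supnorms)
    return float(np.sum(np.asarray(weights) * supnorms))

# Illustrative coefficient matrix: 4 variables, 3 classes.
B = np.array([[1.2, -0.8, 0.1],
              [0.0, 0.0, 0.0],   # an irrelevant variable contributes nothing
              [0.3, -0.2, 0.5],
              [2.0, 1.1, -1.5]])
print(supnorm_penalty(B))  # 1.2 + 0.0 + 0.5 + 2.0 = 3.7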
Variable selection in quantile regression
Abstract: Since its inception in Koenker and Bassett (1978), quantile regression has grown into an important and widely used tool of applied statistics for studying the whole conditional distribution of a response variable. In this work, we focus on the variable selection aspect of penalized quantile regression. Under mild conditions, we demonstrate the oracle properties of the SCAD and adaptive-LASSO penalized quantile regressions. For the SCAD penalty, despite its good asymptotic properties, the corresponding optimization problem is non-convex and, as a result, much harder to solve. We take advantage of the decomposition of the SCAD penalty function as the difference of two convex functions and propose to solve the corresponding optimization problem using the Difference Convex Algorithm (DCA).
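The DC decomposition mentioned above can be made concrete with a short sketch (the function names are illustrative; a = 3.7 is the conventional SCAD tuning constant). The SCAD penalty is written as an L1 term minus another convex function, and each DCA iteration linearizes the subtracted part, leaving a convex, weighted-L1-type quantile regression problem.

import numpy as np

def scad(theta, lam, a=3.7):
    # SCAD penalty of Fan and Li, evaluated elementwise.
    t = np.abs(theta)
    return np.where(
        t <= lam,
        lam * t,
        np.where(t <= a * lam,
                 (2 * a * lam * t - t**2 - lam**2) / (2 * (a - 1)),
                 (a + 1) * lam**2 / 2))

def scad_dc_split(theta, lam, a=3.7):
    # Difference-of-convex split: scad(theta) = lam*|theta| - h(theta),
    # where h(theta) = lam*|theta| - scad(theta) is itself convex.
    convex_part = lam * np.abs(theta)
    h = convex_part - scad(theta, lam, a)
    return convex_part, h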
An acoustic metamaterial lens for acoustic point-to-point communication in air
Acoustic metamaterials have become a novel and effective way to control sound
waves and design acoustic devices. In this study, we design a 3D acoustic
metamaterial lens (AML) to achieve point-to-point acoustic communication in
air: any acoustic source (e.g. a speaker) in air enclosed by such an AML can
produce an acoustic image where the acoustic wave is focused (i.e. the field
intensity is at a maximum, and the listener can receive the information), while
the acoustic field at other spatial positions is low enough that listeners can
hear almost nothing. Unlike a conventional elliptical reflective mirror, the
acoustic source can be moved around inside our proposed AML. Numerical
simulations are presented to verify the performance of the proposed AML.
Learning to Auto Weight: Entirely Data-driven and Highly Efficient Weighting Framework
Example weighting is an effective approach to the training-bias problem;
however, most previous methods rely on human knowledge and require laborious
tuning of hyperparameters. In this paper, we
propose a novel example weighting framework called Learning to Auto Weight
(LAW). The proposed framework finds step-dependent weighting policies
adaptively, and can be jointly trained with target networks without any
assumptions or prior knowledge about the dataset. It consists of three key
components: Stage-based Searching Strategy (3SM) is adopted to shrink the huge
searching space in a complete training process; Duplicate Network Reward (DNR)
gives more accurate supervision by removing randomness during the searching
process; Full Data Update (FDU) further improves the updating efficiency.
Experimental results demonstrate the superiority of the weighting policies
found by LAW over the standard training pipeline. Compared with baselines, LAW
finds better weighting schedules that achieve substantially higher accuracy on
both biased CIFAR and ImageNet.
Comment: Accepted by AAAI 2020
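As a minimal sketch of where such weights enter training (toy_policy below is a hypothetical stand-in, not the policy LAW searches for), each example's loss is multiplied by a step-dependent weight before the batch loss is averaged.

import numpy as np

def weighted_batch_loss(per_example_losses, weights):
    # Combine per-example losses with per-example weights, rescaling the
    # weights to sum to the batch size so the loss keeps its usual scale.
    losses = np.asarray(per_example_losses, dtype=float)
    w = np.asarray(weights, dtype=float)
    w = w * (w.size / w.sum())
    return float(np.mean(w * losses))

def toy_policy(step, per_example_losses, total_steps=10_000):
    # Hypothetical step-dependent weighting: emphasize harder examples more
    # as training progresses (purely illustrative).
    progress = min(1.0, step / total_steps)
    scores = np.asarray(per_example_losses, dtype=float) ** progress
    return scores / scores.sum() * scores.size

losses = np.array([0.2, 1.5, 0.9, 3.0])
print(weighted_batch_loss(losses, toy_policy(step=5_000, per_example_losses=losses)))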