4,725 research outputs found
Fast learning rate of multiple kernel learning: Trade-off between sparsity and smoothness
We investigate the learning rate of multiple kernel learning (MKL) with
$\ell_1$ and elastic-net regularizations. The elastic-net regularization is a
composition of an $\ell_1$-regularizer for inducing sparsity and an
$\ell_2$-regularizer for controlling smoothness. We focus on a sparse
setting where the total number of kernels is large but the number of nonzero
components of the ground truth is relatively small, and show sharper
convergence rates than any previously shown for both $\ell_1$ and
elastic-net regularizations. Our analysis reveals some relations between the
choice of regularization function and the performance. If the ground truth is
smooth, we show a faster convergence rate for the elastic-net regularization
under fewer conditions than for $\ell_1$-regularization; otherwise, a faster
convergence rate is shown for the $\ell_1$-regularization.
Comment: Published at http://dx.doi.org/10.1214/13-AOS1095 in the Annals of
Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical
Statistics (http://www.imstat.org). arXiv admin note: text overlap with
arXiv:1103.043
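For orientation, one common way to write the elastic-net MKL estimator (the notation below is a standard formulation assumed for illustration, not quoted from the paper): given candidate kernels with RKHSs $H_1, \dots, H_M$ and data $(x_i, y_i)_{i=1}^n$,

  $\hat{f} = \arg\min_{f_m \in H_m} \; \frac{1}{n} \sum_{i=1}^{n} \Big( y_i - \sum_{m=1}^{M} f_m(x_i) \Big)^2 + \lambda_1 \sum_{m=1}^{M} \|f_m\|_{H_m} + \lambda_2 \sum_{m=1}^{M} \|f_m\|_{H_m}^2,$

where the sum of RKHS norms plays the $\ell_1$ role (switching whole kernels off, hence sparsity) and the squared-norm term plays the $\ell_2$ role (controlling the smoothness of the selected components).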
Gauge Anomaly associated with the Majorana Fermion in $8k+1$ dimensions
Using an elementary method, we show that an odd number of Majorana fermions
in $8k+1$ dimensions suffer from a gauge anomaly that is analogous to the
Witten global gauge anomaly. This anomaly cannot be removed without sacrificing
perturbative gauge invariance. Our construction of higher-dimensional
examples ($k \geq 1$) makes use of the SO(8) instanton on $S^8$.
Comment: 10 pages, uses PTPTeX.cls, the final version to appear in Prog.
Theor. Phys.
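A standard way to see why the parity of the number of fermions matters (reasoning supplied for context, not quoted from the paper): the path integral over a single Majorana fermion yields a Pfaffian, $Z[A] = \mathrm{Pf}(D[A])$, which is defined only up to a sign. If a topologically nontrivial gauge transformation $g$ flips that sign,

  $\mathrm{Pf}(D[A^g]) = -\mathrm{Pf}(D[A]),$

then $n$ Majorana fermions contribute a factor $(-1)^n$ under $g$: the sign ambiguity cancels pairwise for even $n$ and survives for odd $n$, in close analogy with Witten's original SU(2) anomaly for an odd number of Weyl doublets.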
Fast Convergence Rate of Multiple Kernel Learning with Elastic-net Regularization
We investigate the learning rate of multiple kernel learning (MKL) with
elastic-net regularization, which consists of an $\ell_1$-regularizer for
inducing sparsity and an $\ell_2$-regularizer for controlling
smoothness. We focus on a sparse setting where the total number of kernels is
large but the number of non-zero components of the ground truth is relatively
small, and prove that elastic-net MKL achieves the minimax learning rate on the
$\ell_1$-mixed-norm ball. Our bound is sharper than any convergence rate shown
previously, and has the property that the smoother the truth is, the faster the
convergence rate is.
Comment: 21 pages, 0 figures
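For reference (standard MKL notation, assumed rather than taken from the abstract above), the mixed-norm ball of radius $R$ is the class of decomposable functions whose kernel-wise RKHS norms sum to at most $R$:

  $\Big\{ f = \sum_{m=1}^{M} f_m \;:\; f_m \in H_m, \;\; \sum_{m=1}^{M} \|f_m\|_{H_m} \le R \Big\},$

so a minimax rate on this ball is the best worst-case rate over all truths in the class.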
Transfer Characteristics in Graphene Field-Effect Transistors with Co Contacts
Graphene field-effect transistors with Co contacts as source and drain
electrodes show anomalous distorted transfer characteristics. The anomaly
appears only in short-channel devices (shorter than approximately 3
micrometers) and originates from a contact-induced effect. Band alteration of a
graphene channel by the contacts is discussed as a possible mechanism for the
anomalous characteristics observed.
Comment: 10 pages, 3 figures, Appl. Phys. Lett.
Structure Learning of Partitioned Markov Networks
We learn the structure of a Markov Network between two groups of random
variables from joint observations. Since modelling and learning the full MN
structure may be hard, learning the links between two groups directly may be a
preferable option. We introduce a novel concept called the \emph{partitioned
ratio}, whose factorization is directly associated with the Markovian properties
of random variables across the two groups. A simple one-shot convex optimization
procedure is proposed for learning the \emph{sparse} factorizations of the
partitioned ratio, and it is theoretically guaranteed to recover the correct
inter-group structure under mild conditions. The performance of the proposed
method is experimentally compared with state-of-the-art MN structure
learning methods using ROC curves. Real applications to analyzing
bipartisanship in the US Congress and to pairwise DNA/time-series alignments are
also reported.
Comment: Camera-ready for ICML 2016. Fixed some minor typos
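The paper's partitioned-ratio estimator is not reproduced here. As a minimal runnable illustration of what "recovering inter-group structure" means, the sketch below uses scikit-learn's graphical lasso on Gaussian toy data, a full-structure baseline (unlike the paper's direct inter-group approach), and reads off the inter-group block of the estimated precision matrix; the toy setup and all names are assumptions for illustration.

```python
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(0)

# Ground truth: a sparse Gaussian Markov network over two groups,
# X = (x0, x1, x2) and Y = (y0, y1, y2), with a single inter-group
# edge x0 -- y0 encoded in the precision (inverse covariance) matrix.
precision = np.eye(6)
precision[0, 3] = precision[3, 0] = 0.4
covariance = np.linalg.inv(precision)
data = rng.multivariate_normal(np.zeros(6), covariance, size=2000)

# Sparse inverse-covariance estimation via the graphical lasso,
# with the regularization strength chosen by cross-validation.
model = GraphicalLassoCV().fit(data)

# The off-diagonal X-Y block of the estimated precision matrix encodes
# the inter-group links: a (near-)zero entry means the corresponding
# pair is conditionally independent given all other variables.
inter_group_block = model.precision_[:3, 3:]
print(np.round(inter_group_block, 2))
```

With enough samples, only the (x0, y0) entry of the printed block is clearly nonzero, which is the inter-group structure a partitioned method aims to recover without estimating the within-group edges.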