
    Gauge Anomaly associated with the Majorana Fermion in $8k+1$ dimensions

    Using an elementary method, we show that an odd number of Majorana fermions in $8k+1$ dimensions suffers from a gauge anomaly analogous to the Witten global gauge anomaly. This anomaly cannot be removed without sacrificing perturbative gauge invariance. Our construction of higher-dimensional examples ($k \geq 1$) makes use of the SO(8) instanton on $S^8$.
    Comment: 10 pages, uses PTPTeX.cls, the final version to appear in Prog. Theor. Phys.

    Fast learning rate of multiple kernel learning: Trade-off between sparsity and smoothness

    We investigate the learning rate of multiple kernel learning (MKL) with $\ell_1$ and elastic-net regularizations. The elastic-net regularization is a composition of an $\ell_1$-regularizer for inducing sparsity and an $\ell_2$-regularizer for controlling smoothness. We focus on a sparse setting where the total number of kernels is large but the number of nonzero components of the ground truth is relatively small, and show convergence rates sharper than any previously established for both $\ell_1$ and elastic-net regularizations. Our analysis reveals some relations between the choice of regularization function and the resulting performance. If the ground truth is smooth, we show a faster convergence rate for the elastic-net regularization, under weaker conditions, than for $\ell_1$-regularization; otherwise, a faster convergence rate for the $\ell_1$-regularization is shown.
    Comment: Published in the Annals of Statistics (http://www.imstat.org/aos/) at http://dx.doi.org/10.1214/13-AOS1095 by the Institute of Mathematical Statistics (http://www.imstat.org). arXiv admin note: text overlap with arXiv:1103.043
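    For reference (the formula below is not quoted from the abstract; it is the standard form of the elastic-net penalty in the MKL literature), for a decomposition $f = \sum_{m=1}^{M} f_m$ with each component $f_m$ in a reproducing kernel Hilbert space $H_m$, the elastic-net regularizer combines the two terms the abstract describes:
    $$\lambda_1 \sum_{m=1}^{M} \|f_m\|_{H_m} + \lambda_2 \sum_{m=1}^{M} \|f_m\|_{H_m}^2,$$
    where the first ($\ell_1$-type) sum induces sparsity across kernels and the second (squared, $\ell_2$-type) sum controls smoothness; setting $\lambda_2 = 0$ recovers plain $\ell_1$-MKL.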

    Transfer Characteristics in Graphene Field-Effect Transistors with Co Contacts

    Graphene field-effect transistors with Co contacts as source and drain electrodes show anomalous, distorted transfer characteristics. The anomaly appears only in short-channel devices (shorter than approximately 3 micrometers) and originates from a contact-induced effect. Band alteration of the graphene channel by the contacts is discussed as a possible mechanism for the observed anomalous characteristics.
    Comment: 10 pages, 3 figures, Appl. Phys. Lett.

    Fast Convergence Rate of Multiple Kernel Learning with Elastic-net Regularization

    We investigate the learning rate of multiple kernel learning (MKL) with elastic-net regularization, which consists of an $\ell_1$-regularizer for inducing sparsity and an $\ell_2$-regularizer for controlling smoothness. We focus on a sparse setting where the total number of kernels is large but the number of non-zero components of the ground truth is relatively small, and prove that elastic-net MKL achieves the minimax learning rate on the $\ell_2$-mixed-norm ball. Our bound is sharper than previously known convergence rates, and has the property that the smoother the truth is, the faster the convergence rate is.
    Comment: 21 pages, 0 figures
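    As a point of reference (the abstract does not spell this out; the definition below follows the usual MKL convention, not necessarily the paper's exact normalization), the $\ell_2$-mixed-norm ball of radius $R$ consists of functions $f = \sum_{m=1}^{M} f_m$, with each $f_m$ in a reproducing kernel Hilbert space $H_m$, satisfying
    $$\Big(\sum_{m=1}^{M} \|f_m\|_{H_m}^2\Big)^{1/2} \leq R,$$
    i.e., the componentwise RKHS norms are aggregated in an $\ell_2$ sense across kernels.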

    Structure Learning of Partitioned Markov Networks

    We learn the structure of a Markov network (MN) between two groups of random variables from joint observations. Since modelling and learning the full MN structure may be hard, learning the links between the two groups directly may be a preferable option. We introduce a novel concept called the \emph{partitioned ratio}, whose factorization is directly associated with the Markovian properties of random variables across the two groups. A simple one-shot convex optimization procedure is proposed for learning the \emph{sparse} factorizations of the partitioned ratio, and it is theoretically guaranteed to recover the correct inter-group structure under mild conditions. The performance of the proposed method is experimentally compared with state-of-the-art MN structure learning methods using ROC curves. Real applications to analyzing bipartisanship in the US Congress and to pairwise DNA/time-series alignments are also reported.
    Comment: Camera-ready for ICML 2016. Fixed some minor typos

    Super-Linear Convergence of Dual Augmented-Lagrangian Algorithm for Sparsity Regularized Estimation

    We analyze the convergence behaviour of a recently proposed algorithm for regularized estimation called Dual Augmented Lagrangian (DAL). Our analysis is based on a new interpretation of DAL as a proximal minimization algorithm. We show theoretically that, under some conditions, DAL converges super-linearly in a non-asymptotic and global sense. Due to a special modelling of sparse estimation problems in the context of machine learning, the assumptions we make are milder and more natural than those made in conventional analyses of augmented Lagrangian algorithms. In addition, the new interpretation enables us to generalize DAL to a wide variety of sparse estimation problems. We experimentally confirm our analysis on a large-scale $\ell_1$-regularized logistic regression problem and extensively compare the efficiency of the DAL algorithm to previously proposed algorithms on both synthetic and benchmark datasets.
    Comment: 51 pages, 9 figures
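    To make the proximal viewpoint concrete, here is a minimal Python sketch of the soft-thresholding proximal operator that sits at the core of $\ell_1$-regularized estimation. Note this is not the DAL algorithm from the paper: the loop below is plain ISTA (a gradient step followed by the prox), whereas DAL applies a proximal-point/augmented-Lagrangian iteration and solves each inner dual problem to high accuracy, which is what yields the super-linear rate analyzed above. The least-squares objective is chosen only for brevity.

        import numpy as np

        def soft_threshold(v, tau):
            # Proximal operator of tau * ||.||_1 (soft-thresholding).
            return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

        def ista_l1_least_squares(A, b, lam, n_iter=500):
            # Illustrative solver for min_w 0.5*||A w - b||^2 + lam*||w||_1,
            # using ISTA rather than DAL (see the note above).
            w = np.zeros(A.shape[1])
            step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz const. of the gradient
            for _ in range(n_iter):
                grad = A.T @ (A @ w - b)
                w = soft_threshold(w - step * grad, step * lam)
            return w

    Replacing the single gradient step in the loop with an accurate inner minimization of the augmented-Lagrangian subproblem is, in essence, the structural difference the paper's analysis exploits.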