
    Ground state degeneracy of non-Abelian topological phases from coupled wires

    We construct a family of two-dimensional non-Abelian topological phases from coupled wires using a non-Abelian bosonization approach. We then demonstrate how to determine the nature of the non-Abelian topological order (in particular, the anyonic excitations and the topological degeneracy on the torus) realized in the resulting gapped phases of matter. This paper focuses on the detailed case study of a coupled-wire realization of the bosonic $su(2)_2$ Moore-Read state, but the approach we outline here can be extended to general bosonic $su(2)_k$ topological phases described by non-Abelian Chern-Simons theories. We also discuss possible generalizations of this approach to the construction of three-dimensional non-Abelian topological phases.
    Comment: 33 pages, 3 figures. v3 replaces previous discussion of 3D case with an outlook. Published version
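
    For orientation, a standard consequence of the non-Abelian Chern-Simons description that the abstract invokes (this worked example is ours, not taken from the paper): the torus ground-state degeneracy counts the anyon types, and an $su(2)_k$ theory has $k+1$ of them.

```latex
% Ground-state degeneracy of su(2)_k Chern-Simons theory on the torus:
% one ground state per primary field (anyon type) j = 0, 1/2, ..., k/2.
\[
  \mathrm{GSD}_{T^2}\!\left[su(2)_k\right] = k + 1 .
\]
% For the su(2)_2 Moore-Read case this gives three ground states; the
% three anyons {1, sigma, psi} have quantum dimensions 1, sqrt(2), 1:
\[
  \mathrm{GSD}_{T^2}\!\left[su(2)_2\right] = 3 ,
  \qquad
  \mathcal{D} = \sqrt{1^2 + (\sqrt{2})^2 + 1^2} = 2 .
\]
```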

    Representing Power/Power of Representing

    The Emotional Impact of Audio-Visual Stimuli

    Induced affect is the emotional effect of an object on an individual. It can be quantified through two metrics: valence and arousal. Valence quantifies how positive or negative something is, while arousal quantifies the intensity from calm to exciting. These metrics enable researchers to study how people opine on various topics. Affective content analysis of visual media is a challenging problem due to differences in perceived reactions. Industry-standard machine learning classifiers such as Support Vector Machines can be used to help determine user affect. The best affect-annotated video datasets are often analyzed by feeding large amounts of visual and audio features through machine-learning algorithms. The goal is to maximize accuracy, with the hope that each feature will bring useful information to the table. We depart from this approach and instead quantify how different modalities, such as visual, audio, and text description information, can aid in understanding affect. To that end, we train independent models for visual, audio, and text description; each is a convolutional neural network paired with a support vector machine to classify valence and arousal. We also train various ensemble models that combine multi-modal information, in the hope that the information from the independent modalities is mutually beneficial. We find that our visual network alone achieves state-of-the-art valence classification accuracy and that our audio network, when paired with our visual network, achieves competitive results on arousal classification. Each network is much stronger on one metric than the other. This may lead to more sophisticated multimodal approaches to accurately identifying affect in video data. This work also contributes to induced emotion classification by augmenting existing sizable media datasets and providing a robust framework for classifying the same.
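
    As a minimal sketch of the pipeline the abstract describes, assuming synthetic stand-ins for the CNN features and toy labels (all names, dimensions, and data below are our illustrative assumptions, not the authors' code or dataset): independent per-modality SVMs are trained on feature vectors, then fused by averaging their class probabilities.

```python
# Hypothetical sketch: per-modality feature vectors (standing in for CNN
# activations) feed independent SVMs, and a simple late-fusion ensemble
# averages their class probabilities. Everything here is illustrative.
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_clips = 400

# Synthetic stand-ins for CNN features of each modality.
X_visual = rng.normal(size=(n_clips, 512))
X_audio = rng.normal(size=(n_clips, 128))
# Toy binary valence labels that depend on both modalities.
y_valence = (X_visual[:, 0] + 0.5 * X_audio[:, 0] > 0).astype(int)

idx_train, idx_test = train_test_split(
    np.arange(n_clips), test_size=0.25, random_state=0
)

def fit_svm(X, y, train_idx):
    """Fit one per-modality SVM; probability outputs enable late fusion."""
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
    clf.fit(X[train_idx], y[train_idx])
    return clf

vis_clf = fit_svm(X_visual, y_valence, idx_train)
aud_clf = fit_svm(X_audio, y_valence, idx_train)

# Late fusion: average the class probabilities of the independent models.
p_fused = 0.5 * (vis_clf.predict_proba(X_visual[idx_test])
                 + aud_clf.predict_proba(X_audio[idx_test]))
y_fused = p_fused.argmax(axis=1)

print("visual:", accuracy_score(y_valence[idx_test], vis_clf.predict(X_visual[idx_test])))
print("audio :", accuracy_score(y_valence[idx_test], aud_clf.predict(X_audio[idx_test])))
print("fused :", accuracy_score(y_valence[idx_test], y_fused))
```

    Averaging probabilities is only one way to realize the "ensemble" the abstract mentions; stacking a meta-classifier on the per-modality scores is a common alternative.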

    Comparing 2HDM + Scalar and Pseudoscalar Simplified Models at LHC

    In this work we compare the current experimental LHC limits on the 2HDM + scalar and 2HDM + pseudoscalar models for the $t\bar{t}$, mono-$Z$, and mono-$h$ signatures, and forecast the reach of future LHC upgrades for the mono-$Z$ channel. Furthermore, we comment on the possibility, in case of a signal detection, of discriminating between the two models. The 2HDM+S and 2HDM+PS are two notable examples of the so-called next generation of Dark Matter Simplified Models. They allow for a renormalizable coupling of fermionic, Standard Model singlet, Dark Matter to a two-Higgs-doublet sector through the mixing of the latter with a scalar or pseudoscalar singlet.
    Comment: 26 pages, 14 figures
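
    For concreteness, the kind of renormalizable Dark Matter coupling the abstract refers to can be sketched schematically as follows (our illustration, with a generic coupling $y_\chi$ and mixing angle $\theta$; not parameter choices or conventions taken from the paper):

```latex
% Schematic interaction terms for the two simplified models: a CP-even
% singlet s (2HDM+S) or CP-odd singlet a (2HDM+PS) couples to the
% fermionic, SM-singlet Dark Matter chi at tree level.
\[
  \mathcal{L}_{\mathrm{2HDM+S}} \supset -\, y_\chi\, s\, \bar{\chi}\chi ,
  \qquad
  \mathcal{L}_{\mathrm{2HDM+PS}} \supset -\, i\, y_\chi\, a\, \bar{\chi}\gamma^5 \chi .
\]
% The singlet inherits its couplings to Standard Model fermions by
% mixing with the two-Higgs-doublet sector; e.g. in the pseudoscalar
% case, with P the gauge-eigenstate singlet and A the 2HDM pseudoscalar
% (signs depend on convention),
\[
  a_1 = \cos\theta\, P - \sin\theta\, A ,
  \qquad
  a_2 = \sin\theta\, P + \cos\theta\, A .
\]
% Because all couplings are dimension four, the construction stays
% renormalizable, in contrast to effective mediator-fermion couplings.
```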