
    Seesaw mirroring between light and heavy Majorana neutrinos with the help of the $S_3$ reflection symmetry

    In the canonical seesaw mechanism we require the relevant neutrino mass terms to be invariant under the $S_3$ charge-conjugation transformations of the left- and right-handed neutrino fields. Then both the Dirac mass matrix $M_{\rm D}$ and the right-handed neutrino mass matrix $M_{\rm R}$ are well constrained, and so is the effective light Majorana neutrino mass matrix $M_\nu$ via the seesaw formula. We find that these mass matrices can be classified into 22 categories, among which some textures respect the well-known $\mu$-$\tau$ permutation or reflection symmetry and flavor democracy. It is also found that there exist remarkable structural equalities or similarities between $M_\nu$ and $M_{\rm R}$, reflecting a seesaw mirroring relationship between the light and heavy Majorana neutrinos. We calculate the corresponding light neutrino masses and flavor mixing parameters, as well as the CP-violating asymmetries in the decays of the lightest heavy Majorana neutrino, and show that only the flavored leptogenesis mechanism can work for three categories of $M_{\rm D}$ and $M_{\rm R}$ in the $S_3$ reflection symmetry limit.
    Comment: 33 pages, 1 table. v2: matches the version accepted for publication in JHEP.
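    For reference, the seesaw formula invoked in this abstract has the standard form (notation assumed to follow the abstract's $M_{\rm D}$, $M_{\rm R}$ and $M_\nu$):

        \[ M_\nu \simeq - M_{\rm D} M_{\rm R}^{-1} M_{\rm D}^{T} , \]

    so a heavy $M_{\rm R}$ naturally suppresses the light Majorana neutrino masses, and any symmetry constraint imposed on $M_{\rm D}$ and $M_{\rm R}$ propagates to $M_\nu$.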

    On the Unitarity Triangles of the CKM Matrix

    The unitarity triangles of the $3\times 3$ Cabibbo-Kobayashi-Maskawa (CKM) matrix are studied in a systematic way. We show that the phases of the nine CKM rephasing invariants are indeed the outer angles of the six unitarity triangles and are measurable in the $CP$-violating decay modes of $B_d$ and $B_s$ mesons. An economical notation system is introduced for describing the properties of the unitarity triangles. To test the unitarity of the CKM matrix, we present some approximate but useful relations among the sides and angles of the unitarity triangles, which can be confronted with accessible experiments on quark mixing and $CP$ violation.
    Comment: 9 LaTeX pages; LMU-07/94 and PVAMU-HEP-94-5 (a few minor changes are made; accepted for publication in Phys. Lett. B).
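    As a reminder of where the six triangles come from (a standard fact, not this paper's particular notation): unitarity of the CKM matrix $V$ makes each pair of distinct columns (or rows) orthogonal, e.g.

        \[ V_{ud} V_{ub}^{*} + V_{cd} V_{cb}^{*} + V_{td} V_{tb}^{*} = 0 , \]

    and each such relation defines a triangle in the complex plane whose angles are rephasing-invariant and hence, in principle, measurable.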

    An Affect-Rich Neural Conversational Model with Biased Attention and Weighted Cross-Entropy Loss

    Affect conveys important implicit information in human communication. Having the capability to correctly express affect during human-machine conversations is one of the major milestones in artificial intelligence. In recent years, extensive research on open-domain neural conversational models has been conducted. However, embedding affect into such models remains underexplored. In this paper, we propose an end-to-end affect-rich open-domain neural conversational model that produces responses which are not only appropriate in syntax and semantics but also rich in affect. Our model extends the Seq2Seq model and adopts the VAD (Valence, Arousal and Dominance) affective notation to embed each word with affect. In addition, our model considers the effect of negators and intensifiers via a novel affective attention mechanism, which biases attention towards affect-rich words in the input sentences. Lastly, we train our model with an affect-incorporated objective function to encourage the generation of affect-rich words in the output responses. Evaluations based on both perplexity and human evaluations show that our model outperforms the state-of-the-art baseline model of comparable size in producing natural and affect-rich responses.
    Comment: AAAI-19.
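    A minimal sketch of the biased-attention idea in Python, assuming affect richness is scored as the distance of a word's VAD vector from a neutral point; the neutral point, the bias weight lam and all names here are illustrative assumptions, not the paper's exact formulation:

        import numpy as np

        def softmax(x):
            e = np.exp(x - x.max())
            return e / e.sum()

        def affect_biased_attention(query, keys, vad, neutral, lam=0.1):
            # Standard dot-product attention scores over the T source words.
            scores = keys @ query                            # (T,)
            # Affect strength: how far each word's VAD vector is from neutral.
            affect = np.linalg.norm(vad - neutral, axis=1)   # (T,)
            # Bias attention towards affect-rich words before normalizing.
            return softmax(scores + lam * affect)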

    EEG-Based Emotion Recognition Using Regularized Graph Neural Networks

    Electroencephalography (EEG) measures neuronal activity in different brain regions via electrodes. Many existing studies on EEG-based emotion recognition do not fully exploit the topology of the EEG channels. In this paper, we propose a regularized graph neural network (RGNN) for EEG-based emotion recognition. RGNN considers the biological topology among different brain regions to capture both local and global relations among different EEG channels. Specifically, we model the inter-channel relations in EEG signals via an adjacency matrix in a graph neural network, where the connection and sparseness of the adjacency matrix are inspired by neuroscience theories of human brain organization. In addition, we propose two regularizers, namely node-wise domain adversarial training (NodeDAT) and emotion-aware distribution learning (EmotionDL), to better handle cross-subject EEG variations and noisy labels, respectively. Extensive experiments on two public datasets, SEED and SEED-IV, demonstrate the superior performance of our model over state-of-the-art models in most experimental settings. Moreover, ablation studies show that the proposed adjacency matrix and the two regularizers contribute consistent and significant gains to the performance of our RGNN model. Finally, investigations of the neuronal activities reveal important brain regions and inter-channel relations for EEG-based emotion recognition.
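    A minimal sketch, in Python with PyTorch, of modeling inter-channel relations through a learnable adjacency matrix; the initialization, normalization and layer details are illustrative assumptions rather than the RGNN architecture itself:

        import torch
        import torch.nn as nn

        class SimpleGraphConv(nn.Module):
            # One graph-convolution layer over EEG channels: node features are
            # mixed through a learnable adjacency matrix, then projected.
            def __init__(self, in_dim, out_dim, adj_init):
                super().__init__()
                self.adj = nn.Parameter(adj_init.clone())  # inter-channel relations
                self.proj = nn.Linear(in_dim, out_dim)

            def forward(self, x):                      # x: (batch, channels, in_dim)
                a = torch.softmax(self.adj, dim=-1)    # row-normalize the adjacency
                return torch.relu(self.proj(a @ x))    # propagate, then project

        # e.g. 62 EEG channels with 5-band features, projected to 32 dims:
        layer = SimpleGraphConv(5, 32, torch.eye(62))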

    The breaking of flavor democracy in the quark sector

    The democracy of quark flavors is a well-motivated flavor symmetry, but it must be properly broken in order to explain the observed quark mass spectrum and flavor mixing pattern. We reconstruct the texture of flavor democracy breaking and evaluate its strength in a novel way, by assuming a parallelism between the $Q = +2/3$ and $Q = -1/3$ quark sectors and using a nontrivial parametrization of the flavor mixing matrix. Some phenomenological implications of such democratic quark mass matrices, including their variations in the hierarchy basis and their evolution from the electroweak scale to a superhigh-energy scale, are also discussed.
    Comment: 14 pages. References added. Accepted for publication in Chinese Phys.
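    For context (the textbook democratic texture, not necessarily this paper's notation), exact flavor democracy corresponds to quark mass matrices of the form

        \[ M_q = \frac{c_q}{3} \begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix} , \qquad q = {\rm u}, {\rm d} , \]

    whose eigenvalues are $(0, 0, c_q)$: only the third family is massive in this limit, so the breaking terms must generate the first- and second-family masses together with the observed mixing pattern.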

    Collaborative Spatio-temporal Feature Learning for Video Action Recognition

    Spatio-temporal feature learning is of central importance for action recognition in videos. Existing deep neural network models either learn spatial and temporal features independently (C2D) or jointly with unconstrained parameters (C3D). In this paper, we propose a novel neural operation which encodes spatio-temporal features collaboratively by imposing a weight-sharing constraint on the learnable parameters. In particular, we perform 2D convolution along three orthogonal views of the volumetric video data, which learn spatial appearance and temporal motion cues respectively. By sharing the convolution kernels across different views, spatial and temporal features are collaboratively learned and thus benefit from each other. The complementary features are subsequently fused by a weighted summation whose coefficients are learned end-to-end. Our approach achieves state-of-the-art performance on large-scale benchmarks and won first place in the Moments in Time Challenge 2018. Moreover, based on the learned coefficients of the different views, we are able to quantify the contributions of spatial and temporal features. This analysis sheds light on the interpretability of the model and may also guide the future design of algorithms for video recognition.
    Comment: CVPR 2019.
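    A minimal sketch, in Python with PyTorch, of the shared-kernel, three-view operation described above; padding, initialization and softmax-normalized fusion are illustrative assumptions:

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class ThreeViewConv(nn.Module):
            # Apply ONE shared 2D kernel to the H-W (spatial), T-W and T-H
            # (temporal) views of a video volume, then fuse the three
            # responses with coefficients learned end-to-end.
            def __init__(self, channels, k=3):
                super().__init__()
                self.pad = k // 2
                self.weight = nn.Parameter(torch.randn(channels, channels, k, k) * 0.01)
                self.coef = nn.Parameter(torch.zeros(3))

            def forward(self, x):                      # x: (B, C, T, H, W)
                b, c, t, h, w = x.shape
                conv = lambda v: F.conv2d(v, self.weight, padding=self.pad)
                # Spatial view: convolve over (H, W) for every frame.
                hw = conv(x.permute(0, 2, 1, 3, 4).reshape(b * t, c, h, w))
                hw = hw.reshape(b, t, c, h, w).permute(0, 2, 1, 3, 4)
                # Temporal views: the SAME kernel over (T, W) and (T, H) slices.
                tw = conv(x.permute(0, 3, 1, 2, 4).reshape(b * h, c, t, w))
                tw = tw.reshape(b, h, c, t, w).permute(0, 2, 3, 1, 4)
                th = conv(x.permute(0, 4, 1, 2, 3).reshape(b * w, c, t, h))
                th = th.reshape(b, w, c, t, h).permute(0, 2, 3, 4, 1)
                a = torch.softmax(self.coef, dim=0)    # per-view contributions
                return a[0] * hw + a[1] * tw + a[2] * th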