
    A Compromise between Neutrino Masses and Collider Signatures in the Type-II Seesaw Model

    A natural extension of the standard $SU(2)_{\rm L} \times U(1)_{\rm Y}$ gauge model to accommodate massive neutrinos is to introduce one Higgs triplet and three right-handed Majorana neutrinos, leading to a $6\times 6$ neutrino mass matrix which contains three $3\times 3$ sub-matrices $M_{\rm L}$, $M_{\rm D}$ and $M_{\rm R}$. We show that three light Majorana neutrinos (i.e., the mass eigenstates of $\nu_e$, $\nu_\mu$ and $\nu_\tau$) are exactly massless in this model, if and only if $M_{\rm L} = M_{\rm D} M_{\rm R}^{-1} M_{\rm D}^T$ exactly holds. This no-go theorem implies that small but non-vanishing neutrino masses may result from a significant but incomplete cancellation between $M_{\rm L}$ and $M_{\rm D} M_{\rm R}^{-1} M_{\rm D}^T$ terms in the Type-II seesaw formula, provided three right-handed Majorana neutrinos are of ${\cal O}(1)$ TeV and experimentally detectable at the LHC. We propose three simple Type-II seesaw scenarios with the $A_4 \times U(1)_{\rm X}$ flavor symmetry to interpret the observed neutrino mass spectrum and neutrino mixing pattern. Such a TeV-scale neutrino model can be tested in two complementary ways: (1) searching for possible collider signatures of lepton number violation induced by the right-handed Majorana neutrinos and doubly-charged Higgs particles; and (2) searching for possible consequences of unitarity violation of the $3\times 3$ neutrino mixing matrix in the future long-baseline neutrino oscillation experiments.
    Comment: RevTeX, 19 pages, no figure
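
    A numerical toy model makes the cancellation mechanism concrete. The sketch below (Python with numpy; the matrix values and the mismatch parameter epsilon are invented for illustration and are not the paper's scenarios) shows how an almost exact cancellation between $M_{\rm L}$ and $M_{\rm D} M_{\rm R}^{-1} M_{\rm D}^T$ in the Type-II seesaw formula $M_\nu = M_{\rm L} - M_{\rm D} M_{\rm R}^{-1} M_{\rm D}^T$ can yield sub-eV light-neutrino masses even with $M_{\rm R}$ at the TeV scale.

```python
import numpy as np

# Toy illustration of the incomplete-cancellation mechanism described above.
# All mass values are invented for illustration only.

# Dirac mass matrix M_D and right-handed Majorana mass matrix M_R (GeV),
# with M_R at the O(1) TeV scale so the heavy states could be LHC-accessible.
M_D = np.array([[1.0, 0.2, 0.1],
                [0.2, 1.5, 0.3],
                [0.1, 0.3, 2.0]])       # GeV
M_R = 1000.0 * np.eye(3)                # GeV, i.e. O(1) TeV

# Type-I contribution that would appear alone in the canonical seesaw:
type1 = M_D @ np.linalg.inv(M_R) @ M_D.T

# Choose the triplet-induced term M_L to cancel the Type-I piece almost,
# but not exactly: the "significant but incomplete cancellation".
epsilon = 1e-7                          # assumed degree of non-cancellation
M_L = (1.0 - epsilon) * type1

# Type-II seesaw formula: M_nu = M_L - M_D M_R^{-1} M_D^T
M_nu = M_L - type1

masses_eV = np.abs(np.linalg.eigvalsh(M_nu)) * 1e9   # GeV -> eV
print(masses_eV)  # sub-eV masses from the residual mismatch, not 1/M_R alone
```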

    Dynamic Supervised Learning: Some Basic Issues and Application Aspects


    Incremental Learning


    ForestNet – Automatic Design of Sparse Multilayer Perceptron Network Architectures Using Ensembles of Randomized Trees

    In this paper, we introduce a mechanism for designing the architecture of a Sparse Multi-Layer Perceptron network for classification, called ForestNet. Networks built using our approach are capable of handling high-dimensional data and learning representations of both visual and non-visual data. The proposed approach first builds an ensemble of randomized trees in order to gather information on the hierarchy of features and their separability among the classes. Subsequently, this information is used to design the architecture of a sparse network for a specific data set and application. The number of neurons is automatically adapted to the dataset. The proposed approach was evaluated using two non-visual and two visual datasets. For each dataset, four ensembles of randomized trees of different sizes were built. Then, for each ensemble, a sparse network architecture was designed using our approach, and a fully connected network with the same architecture was also constructed. The sparse networks defined using our approach consistently outperformed their respective tree ensembles, achieving statistically significant improvements in classification accuracy. While we do not beat state-of-the-art results, given our network size and the absence of data augmentation techniques, our method exhibits very promising results: the sparse networks performed similarly to their fully connected counterparts with a reduction of more than 98% of the connections in the visual tasks.
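
    The tree-to-network mapping can be sketched concretely. The following is a minimal, hedged illustration, not the paper's exact construction: it uses scikit-learn's ExtraTreesClassifier and the simplifying assumption of one hidden unit per tree, connected only to the features that tree actually splits on, to derive a sparse input-to-hidden connectivity mask.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.ensemble import ExtraTreesClassifier

# Sketch of the ForestNet idea: derive a sparse input-to-hidden connectivity
# mask from an ensemble of randomized trees. Simplifying assumption (not the
# paper's mapping): one hidden unit per tree, connected only to the features
# on which that tree splits.
X, y = load_digits(return_X_y=True)
forest = ExtraTreesClassifier(n_estimators=32, random_state=0).fit(X, y)

n_features = X.shape[1]
n_hidden = len(forest.estimators_)
mask = np.zeros((n_features, n_hidden), dtype=bool)

for j, tree in enumerate(forest.estimators_):
    used = tree.tree_.feature              # split feature per node; <0 = leaf
    mask[np.unique(used[used >= 0]), j] = True

print(f"hidden units: {n_hidden}, connections kept: {mask.sum()}, "
      f"sparsity: {1.0 - mask.mean():.1%}")
# 'mask' would then be applied to the first weight matrix of an MLP so that
# masked-out weights stay zero during training.
```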

    Learning Sparse Features with an Auto-Associator

    A major issue in statistical machine learning is the design of a representation, or feature space, facilitating the resolution of the learning task at hand. Sparse representations in particular facilitate discriminant learning: On the one hand, they are robust to noise. On the other hand, they disentangle the factors of variation mixed up in dense representations, favoring the separability and interpretation of data. This chapter focuses on auto-associators (AAs), i.e. multi-layer neural networks trained to encode/decode the data and thus de facto defining a feature space. AAs, first investigated in the 1980s, were recently reconsidered as building blocks for deep neural networks. This chapter surveys related work about building sparse representations, and presents a new non-linear explicit sparse representation method referred to as the Sparse Auto-Associator (SAA), integrating a sparsity objective within the standard auto-associator learning criterion. The comparative empirical validation of SAAs on state-of-the-art handwritten digit recognition benchmarks shows that SAAs outperform standard auto-associators in terms of classification performance and yield results similar to those of denoising auto-associators. Furthermore, SAAs make it possible to control the representation size to some extent, through a conservative pruning of the feature space.
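
    To make the sparsity objective concrete, here is a minimal numpy sketch of a sparse auto-associator: a one-hidden-layer encoder/decoder trained to reconstruct its input, with an L1 penalty on the hidden activations added to the reconstruction criterion. The paper's exact SAA objective is not reproduced here; the L1 term, the ReLU units, and all hyperparameters are illustrative assumptions.

```python
import numpy as np

# Sparse auto-associator sketch: minimize mean squared reconstruction error
# plus an L1 sparsity penalty on the hidden activations (assumed criterion).
rng = np.random.default_rng(0)
X = rng.random((256, 64))            # stand-in data: 256 samples, 64 features

n_in, n_hid = X.shape[1], 32
W1 = rng.normal(0, 0.1, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.1, (n_hid, n_in)); b2 = np.zeros(n_in)
lr, lam = 0.05, 1e-3                 # learning rate, sparsity weight

for epoch in range(200):
    H = np.maximum(X @ W1 + b1, 0.0)          # ReLU encoder
    R = H @ W2 + b2                           # linear decoder
    err = R - X                               # reconstruction error
    # Objective: mean ||R - X||^2 + lam * mean |H|
    dR = 2 * err / X.shape[0]
    dH = dR @ W2.T + lam * np.sign(H) / X.shape[0]
    dH[H <= 0] = 0.0                          # ReLU gradient
    W2 -= lr * (H.T @ dR); b2 -= lr * dR.sum(0)
    W1 -= lr * (X.T @ dH); b1 -= lr * dH.sum(0)

H = np.maximum(X @ W1 + b1, 0.0)
print("fraction of inactive hidden activations:", (H < 1e-6).mean())
```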

    Machine Learning for digital document processing: from layout analysis to metadata extraction

    In recent years, the spread of computers and the Internet has made a significant number of documents available in digital format. Collecting them in digital repositories raised problems that go beyond simple acquisition issues and created the need to organize and classify them in order to improve the effectiveness and efficiency of retrieval. The success of such a process is tightly related to the ability to understand the semantics of the document components and content. Since the obvious solution of manually creating and maintaining an updated index is clearly infeasible, due to the huge amount of data under consideration, there is strong interest in methods that can automatically acquire such knowledge. This work presents a framework that intensively exploits intelligent techniques to support different tasks of automatic document processing, from acquisition to indexing and from categorization to storage and retrieval. A prototypical version of the system, DOMINUS, is presented, whose main characteristic is the use of a Machine Learning Server, a suite of different inductive learning methods and systems, among which the most suitable for each specific document processing phase is chosen and applied. The core system is the incremental first-order logic learner INTHELEX. Thanks to incrementality, it can continuously update and refine the learned theories, dynamically extending its knowledge to handle even completely new classes of documents. Since DOMINUS is general and flexible, it can be embedded as a document management engine into many different Digital Library systems. Experiments in a real-world scenario, scientific conference management, confirmed the good performance of the proposed prototype.
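
    The incremental refinement attributed to INTHELEX above can be illustrated with a deliberately toy sketch. This is not INTHELEX, which learns first-order logic theories; it is a keyword-rule caricature with invented names, showing only the loop structure: the theory is revised example by example, and only when the current theory fails on an example.

```python
from dataclasses import dataclass, field

# Toy incremental classifier: rules map a class label to a set of required
# keywords. Refinement here only generalizes on misclassified positives; a
# real incremental learner would also specialize on negative examples.

@dataclass
class IncrementalClassifier:
    rules: dict = field(default_factory=dict)  # label -> required keywords

    def predict(self, doc_words: set):
        for label, keywords in self.rules.items():
            if keywords <= doc_words:          # all required keywords present
                return label
        return None

    def refine(self, doc_words: set, label: str) -> None:
        """Revise the theory only when it fails on the new example."""
        if self.predict(doc_words) == label:
            return                             # theory already covers it
        if label in self.rules:
            # Generalize: keep only the keywords shared with the new example.
            self.rules[label] &= doc_words
        else:
            # No rule yet: start from the example itself (most specific rule).
            self.rules[label] = set(doc_words)

clf = IncrementalClassifier()
clf.refine({"seesaw", "neutrino", "mass"}, "physics")
clf.refine({"neutrino", "mixing", "mass"}, "physics")  # generalizes the rule
print(clf.rules)                                # {'physics': {'neutrino', 'mass'}}
print(clf.predict({"neutrino", "mass", "lhc"}))  # 'physics'
```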