
    Generalized Spinfoams

    We reconsider the recently introduced spinfoam dynamics in the generalized Kaminski-Kisielowski-Lewandowski (KKL) version, where the foam is not dual to a triangulation. We study the Euclidean as well as the Lorentzian case. We show that this theory can still be obtained as a constrained BF theory satisfying the simplicity constraint, now discretized on a general oriented 2-cell complex. This constraint implies that boundary states admit a (quantum) geometrical interpretation in terms of polyhedra, generalizing the tetrahedral geometry of the simplicial case. We also point out that the general solution to this constraint (imposed weakly) depends on a quantum number r_f in addition to those of loop quantum gravity. We compute the vertex amplitude and recover the KKL amplitude in the Euclidean theory when r_f=0. We comment on the possible physical relevance of r_f and on a formal way to eliminate it. Comment: 16 pages, 3 figures

    Approximated Oracle Filter Pruning for Destructive CNN Width Optimization

    It is not easy to design and run Convolutional Neural Networks (CNNs) because: 1) finding the optimal number of filters (i.e., the width) at each layer of a given architecture is tricky; and 2) the computational intensity of CNNs impedes deployment on computationally limited devices. Oracle Pruning is designed to remove the unimportant filters from a well-trained CNN; it estimates the filters' importance by ablating them in turn and evaluating the model, and thus delivers high accuracy but suffers from intolerable time complexity, and it requires a given resulting width rather than finding one automatically. To address these problems, we propose Approximated Oracle Filter Pruning (AOFP), which searches for the least important filters in a binary search manner, makes pruning attempts by masking out filters randomly, accumulates the resulting errors, and finetunes the model via a multi-path framework. As AOFP enables simultaneous pruning of multiple layers, we can prune an existing very deep CNN with acceptable time cost, negligible accuracy drop, and no heuristic knowledge, or re-design a model that achieves higher accuracy and faster inference.
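    The ablation-style importance scoring that Oracle Pruning (and, approximately, AOFP) builds on can be illustrated with a toy sketch. This is not the authors' implementation: `least_important_half` and the toy loss are hypothetical names, the "layer" is a plain NumPy array, and each filter is masked in turn rather than randomly as in the paper, simply to show how masking-induced error identifies prunable filters.

    ```python
    import numpy as np

    def least_important_half(layer_output, loss_fn):
        """One ablation-style scoring step (sketch): mask each filter in turn,
        record how much the loss grows, and return the half of the filters
        whose removal hurts the loss the least (candidates for pruning)."""
        n_filters = layer_output.shape[0]
        base = loss_fn(layer_output)
        damage = np.empty(n_filters)
        for f in range(n_filters):
            masked = layer_output.copy()
            masked[f] = 0.0          # ablate (mask out) filter f
            damage[f] = loss_fn(masked) - base
        order = np.argsort(damage)   # least damaging filters first
        return order[: n_filters // 2]

    # toy usage: 8 "filters", loss = distance of the summed response to a target
    rng = np.random.default_rng(0)
    out = rng.normal(size=(8, 4))
    target = out.sum(axis=0)
    loss = lambda o: float(np.linalg.norm(o.sum(axis=0) - target))
    prune_candidates = least_important_half(out, loss)
    ```

    In the real method this evaluation is done on training batches and interleaved with multi-path finetuning; the binary-search aspect comes from repeatedly halving the candidate set instead of fixing a target width in advance.
    
    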

    LHC Phenomenology of Type II Seesaw: Nondegenerate Case

    In this paper, we thoroughly investigate the LHC phenomenology of the type II seesaw mechanism for neutrino masses in the nondegenerate case, where the triplet scalars of various charges (H^{\pm\pm}, H^\pm, H^0, A^0) have different masses. Compared with the degenerate case, the cascade decays of the scalars lead to many new, interesting signal channels. In the positive scenario, where M_{H^{\pm\pm}}<M_{H^\pm}<M_{H^0/A^0}, the four-lepton signal is still the most promising discovery channel for the doubly charged scalars H^{\pm\pm}. The five-lepton signal is crucial for probing the mass spectrum of the scalars; for example, a 5\sigma reach at the 14 TeV LHC for M_{H^\pm}=430 GeV with M_{H^{\pm\pm}}=400 GeV requires an integrated luminosity of 76/fb. The six-lepton signal can be used to probe the neutral scalars H^0/A^0, which are usually hard to detect in the degenerate case. In the negative scenario, where M_{H^{\pm\pm}}>M_{H^\pm}>M_{H^0/A^0}, the detection of H^{\pm\pm} is more challenging, as the cascade decay H^{\pm\pm}\to H^\pm W^{\pm*} is dominant. The most important channel is then the associated H^\pm H^0/A^0 production in the final state \ell^\pm\cancel{E}_T b\bar{b}b\bar{b}, which requires a luminosity of 109/fb for a 5\sigma discovery, while the final state \ell^\pm\cancel{E}_T b\bar{b}\tau^+\tau^- is less promising. Moreover, the associated H^0A^0 production gives the same signals as standard model Higgs pair production; with a much larger cross section, the H^0A^0 production in the final state b\bar{b}\tau^+\tau^- could reach 3\sigma significance at the 14 TeV LHC with a luminosity of 300/fb. In summary, with an integrated luminosity of order 500/fb, the triplet scalars can be fully reconstructed at the 14 TeV LHC in the negative scenario. Comment: Version 2 accepted by PRD: 41 pages, 18 figures, 7 tables (v1: 20 figures). Main changes: (1) rewording in Secs. III and IV, removing 2 figures and quoting Ref. [34]; (2) a paragraph added before Eq. (10) to clarify constraints from electroweak precision data; (3) a paper added to Ref. [11]. No changes in results.

    LHC Phenomenology of the Type II Seesaw Mechanism: Observability of Neutral Scalars in the Nondegenerate Case

    This is a sequel to our previous work on the LHC phenomenology of the type II seesaw model in the nondegenerate case. In this work, we further study the pair and associated production of the neutral scalars H^0/A^0. We restrict ourselves to the so-called negative scenario, characterized by the mass order M_{H^{\pm\pm}}>M_{H^\pm}>M_{H^0/A^0}, in which the H^0/A^0 production receives significant enhancement from cascade decays of the charged scalars H^{\pm\pm}, H^\pm. We consider three important signal channels---b\bar{b}\gamma\gamma, b\bar{b}\tau^+\tau^-, and b\bar{b}\ell^+\ell^-\cancel{E}_T---and perform detailed simulations. We find that at the 14 TeV LHC with an integrated luminosity of 3000/fb, a 5\sigma mass reach of 151, 150, and 180 GeV, respectively, is possible in the three channels from the pure Drell-Yan H^0A^0 production, while the cascade-decay-enhanced H^0/A^0 production can push the mass limit further to 164, 177, and 200 GeV. The neutral scalars in the negative scenario are thus accessible at LHC run II. Comment: v1: 32 pages, 17 figures, 3 tables. v2: added 2 refs (2nd in [61] and [66]), revised Acknowledgments, and corrected grammatical errors according to proofs; no other change

    WordSup: Exploiting Word Annotations for Character based Text Detection

    Texts in images are usually organized as a hierarchy of several visual elements, i.e., characters, words, text lines, and text blocks. Among these elements, the character is the most basic one across scripts such as Western languages, Chinese, Japanese, and mathematical expressions. It is therefore natural and convenient to build a common text detection engine on character detectors. However, training character detectors requires a vast number of location-annotated characters, which are expensive to obtain; in practice, existing real text datasets are mostly annotated at the word or line level. To remedy this dilemma, we propose a weakly supervised framework that can utilize word annotations, either tight quadrangles or looser bounding boxes, for character detector training. When applied to scene text detection, we are thus able to train a robust character detector by exploiting word annotations in rich, large-scale real scene text datasets, e.g., ICDAR15 and COCO-Text. The character detector plays a key role in the pipeline of our text detection engine and achieves state-of-the-art performance on several challenging scene text detection benchmarks. We also demonstrate the flexibility of our pipeline in various scenarios, including deformed text detection and mathematical expression recognition. Comment: 2017 International Conference on Computer Vision
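    A common way to bootstrap character-level targets from word-level annotations is to split each word box into per-character cells before the weak supervision refines them. The sketch below is a hypothetical illustration of that seeding step only (the function name and equal-width split are assumptions; WordSup's actual supervision iteratively re-estimates character locations from detector responses):

    ```python
    def word_box_to_char_boxes(word_box, n_chars):
        """Split a word's axis-aligned bounding box (x0, y0, x1, y1) into
        n_chars equal-width character boxes, a crude initial character-level
        label derived from a word-level annotation."""
        x0, y0, x1, y1 = word_box
        w = (x1 - x0) / n_chars
        return [(x0 + i * w, y0, x0 + (i + 1) * w, y1) for i in range(n_chars)]

    # toy usage: a 4-character word annotated only by its word box
    boxes = word_box_to_char_boxes((10.0, 5.0, 50.0, 20.0), 4)
    ```

    Such coarse boxes are only a starting point; the weakly supervised loop then alternates between training the character detector and re-selecting the character locations that best explain each word annotation.
    
    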