
    Higgs-gauge unification without tadpoles

    In orbifold gauge theories, localized tadpoles can be radiatively generated at the fixed points where U(1) subgroups are conserved. If the Standard Model Higgs fields are identified with internal components of the bulk gauge fields (Higgs-gauge unification), in the presence of these tadpoles the Higgs mass becomes sensitive to the UV cutoff and electroweak symmetry breaking is spoiled. We find the general conditions, based on symmetry arguments, for the absence/presence of localized tadpoles in models with an arbitrary number of dimensions D. We show that in the class of orbifold compactifications based on T^{D-4}/Z_N (D even, N>2) tadpoles are always allowed, while on T^{D-4}/Z_2 (arbitrary D) with fermions in arbitrary representations of the bulk gauge group tadpoles can only appear in D=6 dimensions. We explicitly check this with one- and two-loop calculations. Comment: 19 pages, 3 figures, axodraw.sty. v2: version to appear in Nucl. Phys.

    Tadpoles and Symmetries in Higgs-Gauge Unification Theories

    In theories with extra dimensions the Standard Model Higgs fields can be identified with internal components of bulk gauge fields (Higgs-gauge unification). The bulk gauge symmetry protects the Higgs mass from quadratic divergences, but at the fixed points localized tadpoles can be radiatively generated if U(1) subgroups are conserved, making the Higgs mass UV sensitive. We show that a global symmetry, a remnant of the internal rotation group after the orbifold projection, can prevent the generation of such tadpoles. In particular we consider the classes of orbifold compactifications T^d/Z_N (d even, N>2) and T^d/Z_2 (arbitrary d) and show that in the first case tadpoles are always allowed, while in the second they can appear only for d=2 (six dimensions). Comment: 10 pages, based on talks given by M.Q. at String Phenomenology 2004, University of Michigan, Ann Arbor, August 1-6, 2004, and the 10th International Symposium on Particles, Strings and Cosmology (PASCOS'04 and Nath Fest), Northeastern University, Boston, August 16-22, 2004

    Unitarity of the Leptonic Mixing Matrix

    We determine the elements of the leptonic mixing matrix, without assuming unitarity, by combining data from neutrino oscillation experiments and weak decays. To that end, we first develop a formalism for studying neutrino oscillations in vacuum and matter when the leptonic mixing matrix is not unitary. To be conservative, only three light neutrino species are considered, whose propagation is generically affected by non-unitary effects. Precision improvements within future facilities are discussed as well. Comment: Standard Model radiative corrections to the invisible Z width included. Some numerical results modified at the percent level. Updated with the latest bounds on the rare tau decay. Physical conclusions unchanged.
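    One consequence of a non-unitary leptonic mixing matrix is a non-vanishing flavor-conversion probability already at zero baseline. The toy two-flavor sketch below (not the paper's full three-flavor analysis; all numbers and the form of N are illustrative assumptions) computes vacuum appearance probabilities with a general mixing matrix N, normalized by (NN†)_αα(NN†)_ββ as in the non-unitary formalism:

```python
import numpy as np

def osc_prob(N, dm2, L_over_E, alpha, beta):
    """Vacuum appearance probability P(alpha -> beta) for a possibly
    non-unitary mixing matrix N, with the non-unitary normalization
    P = |sum_i N*_{ai} N_{bi} exp(-i dm2_i L/2E)|^2 / ((NN+)_aa (NN+)_bb).
    Units of dm2 and L/E are arbitrary in this toy sketch."""
    phases = np.exp(-1j * np.asarray(dm2) * L_over_E / 2.0)
    amp = np.sum(np.conj(N[alpha]) * N[beta] * phases)
    NN = N @ N.conj().T
    return abs(amp) ** 2 / (NN[alpha, alpha].real * NN[beta, beta].real)

theta = 0.6                     # illustrative mixing angle
U = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])   # unitary reference
eps = 0.05                      # illustrative non-unitarity parameter
N = (np.eye(2) - np.array([[0.0, eps], [0.0, 0.0]])) @ U  # non-unitary

print(osc_prob(U, [0.0, 1.0], 0.0, 0, 1))       # unitary: vanishes at L = 0
print(osc_prob(N, [0.0, 1.0], 0.0, 0, 1) > 0)   # zero-distance effect
```

    For the unitary U the amplitude at L = 0 is the inner product of two orthogonal rows and vanishes; for N it is of order eps, giving a "zero-distance" conversion probability of order eps².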

    Security Evaluation of Support Vector Machines in Adversarial Environments

    Support Vector Machines (SVMs) are among the most popular classification techniques adopted in security applications like malware detection, intrusion detection, and spam filtering. However, if SVMs are to be incorporated into real-world security systems, they must be able to cope with attack patterns that can either mislead the learning algorithm (poisoning), evade detection (evasion), or gain information about their internal parameters (privacy breaches). The main contributions of this chapter are twofold. First, we introduce a formal general framework for the empirical evaluation of the security of machine-learning systems. Second, according to our framework, we demonstrate the feasibility of evasion, poisoning and privacy attacks against SVMs in real-world security problems. For each attack technique, we evaluate its impact and discuss whether (and how) it can be countered through an adversary-aware design of SVMs. Our experiments are easily reproducible thanks to open-source code that we have made available, together with all the employed datasets, in a public repository. Comment: 47 pages, 9 figures; chapter accepted into the book 'Support Vector Machine Applications'
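    As a small illustration of the evasion attacks discussed above (a sketch, not the chapter's method), the snippet below computes the minimal-L2 perturbation that moves a detected sample just across the boundary of a fixed linear classifier; for a linear SVM the optimal evasion direction is along -w. The detector weights are made-up toy values:

```python
import numpy as np

def evade_linear(x, w, b, eps=1e-6):
    """Minimal-L2 evasion against a linear decision f(x) = w.x + b.

    Moves a positively classified sample x just past the boundary
    along -w, the optimal direction for a linear classifier."""
    margin = np.dot(w, x) + b
    if margin <= 0:
        return x.copy()                     # already classified benign
    step = (margin / np.dot(w, w) + eps) * w
    return x - step

# hypothetical linear "malware" detector: f(x) = w.x + b
w = np.array([1.0, 2.0])
b = -1.0
x = np.array([2.0, 1.0])                    # f(x) = 3 > 0 -> detected
x_adv = evade_linear(x, w, b)
print(np.dot(w, x) + b > 0, np.dot(w, x_adv) + b <= 0)
```

    The required perturbation norm is |f(x)| / ||w||, which is why an adversary-aware design tries to keep the margin of legitimate decisions large relative to feasible input changes.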

    Keyed Non-Parametric Hypothesis Tests

    The recent popularity of machine learning calls for a deeper understanding of AI security. Amongst the numerous AI threats published so far, poisoning attacks currently attract considerable attention. In a poisoning attack the opponent partially tampers with the dataset used for learning in order to mislead the classifier during the testing phase. This paper proposes a new protection strategy against poisoning attacks. The technique relies on a new primitive called keyed non-parametric hypothesis tests, which allow evaluating, under adversarial conditions, the training input's conformance with a previously learned distribution D. To do so we use a secret key κ unknown to the opponent. Keyed non-parametric hypothesis tests differ from classical tests in that the secrecy of κ prevents the opponent from misleading the keyed test into concluding that a (significantly) tampered dataset belongs to D. Comment: Paper published in NSS 201
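    A minimal sketch of the keyed-test idea, assuming one concrete instantiation the abstract does not prescribe: multivariate samples are projected onto a secret random direction derived from the key κ (here a PRNG seed), and the projections are compared with a reference sample via the two-sample Kolmogorov-Smirnov statistic. The datasets and the poisoning (a variance inflation) are synthetic:

```python
import numpy as np

def ks_statistic(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: max gap between ECDFs."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return np.abs(cdf_a - cdf_b).max()

def keyed_ks(train, reference, key):
    """Keyed non-parametric test: project samples onto a secret random
    direction derived from `key` (acting as kappa), then compare ECDFs.
    Without the key, the opponent cannot tell which direction is tested."""
    rng = np.random.default_rng(key)
    u = rng.normal(size=train.shape[1])
    u /= np.linalg.norm(u)
    return ks_statistic(train @ u, reference @ u)

rng = np.random.default_rng(0)
ref = rng.normal(size=(500, 5))        # previously learned distribution D
clean = rng.normal(size=(500, 5))      # untampered training batch
poisoned = clean * 3.0                 # tampered batch (inflated variance)
print(keyed_ks(clean, ref, key=42) < keyed_ks(poisoned, ref, key=42))
```

    The clean batch yields a small statistic while the tampered one yields a large one; the secrecy of the key is what prevents an adversary from crafting a poisoned set that passes the specific projection being tested.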

    Neutrino masses from higher than d=5 effective operators

    We discuss the generation of small neutrino masses from effective operators higher than dimension five, which open new possibilities for low-scale see-saw mechanisms. In order to forbid the radiative generation of neutrino mass by lower-dimensional operators, extra fields are required, which are charged under a new symmetry. We discuss this mechanism in the framework of a two Higgs doublet model. We demonstrate that the tree-level generation of neutrino mass from higher-dimensional operators often leads to inverse see-saw scenarios in which small lepton number violating terms are naturally suppressed by the new physics scale. Furthermore, we systematically discuss tree-level generalizations of the standard see-saw scenarios from higher-dimensional operators. Finally, we point out that higher-dimensional operators can also be generated at the loop level. In this case, we obtain the TeV scale as the new physics scale even with order-one couplings. Comment: 22 pages, 3 figures, 2 tables. Some references added
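    The lowering of the new-physics scale can be checked with naive dimensional analysis: a dimension-d lepton-number-violating operator gives m_ν ~ v^{d-3}/Λ^{d-4}, which reproduces the standard see-saw v²/Λ for d = 5. The helper below (an illustrative estimate, not the paper's detailed models) solves this relation for Λ:

```python
def seesaw_scale(m_nu_eV, d, v_GeV=174.0):
    """Naive new-physics scale Lambda (in GeV) from the dimensional
    estimate m_nu ~ v**(d-3) / Lambda**(d-4) for a dimension-d
    lepton-number-violating operator (d = 5: standard see-saw)."""
    v_eV = v_GeV * 1e9
    lam_eV = (v_eV ** (d - 3) / m_nu_eV) ** (1.0 / (d - 4))
    return lam_eV / 1e9

# scale needed for m_nu ~ 0.1 eV at tree level, per operator dimension
for d in (5, 7, 9):
    print(d, f"{seesaw_scale(0.1, d):.2e} GeV")
```

    The estimate drops from ~10^14 GeV at d = 5 to roughly 10^3 TeV at d = 7 and tens of TeV at d = 9, and additional loop suppression pushes it further down toward the TeV scale, as the abstract states.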

    Defensive peripersonal space is modified by a learnt protective posture

    The Hand Blink Reflex (HBR) is a subcortical defensive response elicited by electrical stimulation of the median nerve. The HBR increases when the stimulated hand is inside the defensive peripersonal space (DPPS) of the face. However, the presence of a screen protecting the face can reduce the amplitude of this response. This work aimed to investigate whether learning a posture intended to protect the head could modulate HBR responses. Boxing athletes learn a defensive posture that consists of blocking the opponent's blows toward the face with the arms. Two groups were recruited: 13 boxers and 13 people naïve to boxing. The HBR was recorded and elicited in three hand positions at different distances from the face. A suppression of HBR enhancement in the static position close to the face was observed in the boxer group but not in the control group. Moreover, the more years of boxing practice, the greater the suppression. However, this suppression was not observed when boxers were asked to move the hand up to, or down from, the face. These findings suggest that the sensorimotor experience related to a previously learnt protective posture can modify the HBR and thus shape the dimension of the DPPS.

    Fault Sneaking Attack: a Stealthy Framework for Misleading Deep Neural Networks

    Despite the great achievements of deep neural networks (DNNs), the vulnerability of state-of-the-art DNNs raises security concerns in many application domains requiring high reliability. We propose the fault sneaking attack on DNNs, where the adversary aims to misclassify certain input images into any target labels by modifying the DNN parameters. We apply ADMM (alternating direction method of multipliers) to solve the optimization problem of the fault sneaking attack with two constraints: 1) the classification of the other images should be unchanged and 2) the parameter modifications should be minimized. Specifically, the first constraint requires us not only to inject the designated faults (misclassifications), but also to hide the faults for stealthy or sneaking considerations by maintaining model accuracy. The second constraint requires us to minimize the parameter modifications (using the L0 norm to measure the number of modifications and the L2 norm to measure their magnitude). Comprehensive experimental evaluation demonstrates that the proposed framework can inject multiple sneaking faults without losing overall test accuracy. Comment: Accepted by the 56th Design Automation Conference (DAC 2019)
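    The interplay of the two constraints can be shown in closed form for a toy linear model (the paper itself uses ADMM on real DNNs; this is only an illustrative stand-in): the minimal-L2 weight change that forces a target score on one input while leaving another input's score exactly unchanged lies in the span of the two inputs, and its coefficients solve a 2x2 Gram system:

```python
import numpy as np

def sneaky_update(w, x_t, x_o, target_score):
    """Minimal-L2 modification of w such that w'.x_t = target_score
    (inject the fault) while w'.x_o stays exactly unchanged (hide it).

    The optimal delta lies in span{x_t, x_o}; writing
    delta = a*x_t + b*x_o, the two constraints give a 2x2 Gram system."""
    G = np.array([[x_t @ x_t, x_t @ x_o],
                  [x_o @ x_t, x_o @ x_o]])
    rhs = np.array([target_score - w @ x_t, 0.0])
    a, b = np.linalg.solve(G, rhs)
    return w + a * x_t + b * x_o

w   = np.array([1.0, 0.0, 0.0])     # toy model weights
x_t = np.array([0.0, 1.0, 0.0])     # fault target: force its score to -1
x_o = np.array([1.0, 1.0, 0.0])     # "other image": score must not move
w2  = sneaky_update(w, x_t, x_o, target_score=-1.0)
print(round(w2 @ x_t, 6), round(w2 @ x_o, 6))
```

    With many preserved inputs and non-linear networks the problem no longer has a closed form, which is where the ADMM formulation with L0/L2 penalties comes in.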