
    CP-Violating Phases in the MSSM

    We combine experimental bounds on the electric dipole moments of the neutron and electron with cosmological limits on the relic density of a gaugino-type LSP neutralino to constrain certain CP-violating phases appearing in the MSSM. We find that in the Constrained MSSM the phase |\theta_\mu| < \pi/10, while the phase \theta_A remains essentially unconstrained.
    Comment: Summary of a talk presented at SUSY-96, College Park, Maryland, USA (May 1996), 4 pages in LaTeX including 4 embedded postscript figures, uses epsf.sty and espcrc2.sty

    On the zero of the fermion zero mode

    We argue that the fermionic zero mode in non-trivial gauge field backgrounds must have a zero. We demonstrate this explicitly for calorons, where its location is related to a constituent monopole. Furthermore, a topological argument for the existence of the zero is given, so that it will be present for any non-trivial configuration. We propose the use of this property, in particular in lattice simulations, to uncover the topological content of a configuration.
    Comment: 6 pages, 3 figures in 5 parts

    Anomalies in the effective theory of heavy quarks

    The question of anomalies in the effective theory of heavy quarks is investigated at two different levels. Firstly, it is shown that none of the symmetries of this effective theory contains an anomaly. The existence of a new `\gamma_5'-symmetry is pointed out, and it is shown to be anomaly free as well. Secondly, it is shown that the chiral anomaly of QCD is not reproduced in the effective Lagrangian for the heavy quarks, thus contradicting 't Hooft's anomaly matching condition. Finally, the effective theory of heavy quarks is derived from the QCD Lagrangian in such a way that the terms leading to the anomaly are included. For this derivation the generating functional method is used.
    Comment: 16 pages, UB-ECM-PF-92/1

    Exploration of the MSSM with Non-Universal Higgs Masses

    We explore the parameter space of the minimal supersymmetric extension of the Standard Model (MSSM), allowing the soft supersymmetry-breaking masses of the Higgs multiplets, m_{1,2}, to be non-universal (NUHM). Compared with the constrained MSSM (CMSSM), in which m_{1,2} are required to equal the common soft supersymmetry-breaking mass m_0 of the squarks and sleptons, the Higgs mixing parameter mu and the pseudoscalar Higgs mass m_A, which are calculated quantities in the CMSSM, become free parameters in the NUHM model. We incorporate accelerator and dark matter constraints in determining allowed regions of the (mu, m_A), (mu, M_2) and (m_{1/2}, m_0) planes for selected choices of the other NUHM parameters. In the examples studied, we find that the LSP mass cannot be reduced far below its limit in the CMSSM, whereas m_A may be as small as allowed by LEP for large tan \beta. We present in Appendices details of the calculations of neutralino-slepton, chargino-slepton and neutralino-sneutrino coannihilation needed in our exploration of the NUHM.
    Comment: 92 pages LaTeX, 32 eps figures, final version, some changes to figures pertaining to the b to s gamma constraint
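
    The allowed regions mentioned above amount, in essence, to sweeping a plane of NUHM parameters and keeping only the points that survive the accelerator and dark-matter constraints. The sketch below is a minimal illustration of such a scan; the constraint functions and numerical cuts are hypothetical placeholders standing in for the actual spectrum and relic-density calculations, not the computation performed in the paper.

```python
import numpy as np

# Placeholder "physics" functions: in a real scan these would be replaced
# by a spectrum calculator and a relic-density solver (hypothetical toys here).
def relic_density(mu, m_A):
    return 0.1 * (1.0 + 0.001 * abs(mu) / m_A)   # toy Omega h^2

def lightest_chargino_mass(mu, m_A):
    return min(abs(mu), 150.0)                    # toy chargino mass in GeV

OMEGA_MAX = 0.3         # illustrative cosmological upper bound on Omega h^2
M_CHARGINO_MIN = 103.5  # illustrative LEP chargino-mass limit in GeV

mu_vals  = np.linspace(-1000.0, 1000.0, 81)  # GeV
m_A_vals = np.linspace(100.0, 1000.0, 46)    # GeV

allowed = []
for mu in mu_vals:
    for m_A in m_A_vals:
        if relic_density(mu, m_A) > OMEGA_MAX:
            continue    # relic density too large: excluded cosmologically
        if lightest_chargino_mass(mu, m_A) < M_CHARGINO_MIN:
            continue    # excluded by the accelerator bound
        allowed.append((mu, m_A))

print(f"{len(allowed)} allowed points in the (mu, m_A) plane")
```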

    Accelerator Constraints on Neutralino Dark Matter

    The constraints on neutralino dark matter \chi obtained from accelerator searches at LEP, the Fermilab Tevatron and elsewhere are reviewed, with particular emphasis on results from LEP 1.5. Within the context of the minimal supersymmetric extension of the Standard Model, these imply that m_\chi \ge 21.4 GeV if universality is assumed, and for large tan\beta they yield a significantly stronger bound than is obtained indirectly from Tevatron limits on the gluino mass. We update this analysis with preliminary results from the first LEP 2W run, and also preview the prospects for future sparticle searches at the LHC.
    Comment: Presented by J. Ellis at the Workshop on the Identification of Dark Matter, Sheffield, September 1996. 14 pages, LaTeX, 12 figures

    Diagnosis of alcoholism based on neural network analysis of phenotypic risk factors

    BACKGROUND: Alcoholism is a serious public health problem with both genetic and environmental causes. In an effort to understand the underlying genetic susceptibility to alcoholism, a long-term study has been undertaken. The Collaborative Study on the Genetics of Alcoholism (COGA) provides a rich source of genetic and phenotypic data. One ongoing problem is the difficulty of reliably diagnosing alcoholism, despite many known risk factors and measurements. We have applied a well-known pattern-matching method, neural network analysis, to the phenotypic data provided to participants in Genetic Analysis Workshop 14 by COGA. The aim is to train the network to recognize complex phenotypic patterns that are characteristic of those with alcoholism as well as of those who are free of symptoms. Our results indicate that this approach may be helpful in the diagnosis of alcoholism.
    RESULTS: Training and testing of input/output pairs of risk factors by means of a "feed-forward back-propagation" neural network resulted in a reliability of about 94% in predicting the presence or absence of alcoholism from 36 input phenotypic risk factors. Pruning the neural network to remove relatively uninformative factors resulted in a reduced network of 14 input factors that was still 95% reliable. Some of the factors selected by the pruning steps have been identified as traits that show either linkage or association to potential candidate regions.
    CONCLUSION: The complex, multivariate picture formed by known risk factors for alcoholism can be incorporated into a neural network analysis that reliably predicts the presence or absence of alcoholism about 94–95% of the time. Several characteristics identified by a pruned neural network have previously been shown to be important in this disease on the basis of more traditional linkage and association studies. Neural networks therefore provide a less traditional approach both to identifying alcoholic individuals and to determining the most informative risk factors.
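
    As a hedged illustration of the kind of analysis described above, the sketch below trains a small feed-forward network on synthetic, binary-labelled risk-factor data and then prunes the inputs whose first-layer weights carry the least magnitude before retraining. The data, the network size and the pruning rule are assumptions chosen for illustration; this is not the authors' pipeline and does not use the COGA data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for 36 phenotypic risk factors (the real study used COGA data).
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 36))
y = (X[:, :5].sum(axis=1) + 0.5 * rng.normal(size=600) > 0).astype(int)  # toy label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Feed-forward network trained by back-propagation (one hidden layer).
net = MLPClassifier(hidden_layer_sizes=(12,), max_iter=2000, random_state=0)
net.fit(X_tr, y_tr)
print("full-input accuracy:", net.score(X_te, y_te))

# Crude pruning rule: keep the 14 inputs whose first-layer weights have the
# largest total magnitude, then retrain on that reduced factor set.
importance = np.abs(net.coefs_[0]).sum(axis=1)        # one value per input factor
keep = np.argsort(importance)[-14:]                   # reduced set of 14 inputs
net_small = MLPClassifier(hidden_layer_sizes=(12,), max_iter=2000, random_state=0)
net_small.fit(X_tr[:, keep], y_tr)
print("pruned-input accuracy:", net_small.score(X_te[:, keep], y_te))
```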

    Risk factors for coronary artery disease and the use of neural networks to predict the presence or absence of high blood pressure

    BACKGROUND: The Framingham Heart Study was initiated in 1948 as a long-term longitudinal study to identify risk factors associated with cardiovascular disease (CVD). Over the years the scope of the study has expanded to include offspring and other family members of the original cohort, marker data useful for gene mapping, and information on other diseases. As a result, it is a rich resource for many areas of research going beyond the original goals. As part of Genetic Analysis Workshop 13, we used data from the study to evaluate the ability of neural networks, trained on CVD risk factors, to predict normal versus high blood pressure.
    RESULTS: Applying two different strategies to the coding of CVD risk data as risk factors (one longitudinal and one independent of time), we found that neural networks could not be trained to clearly separate individuals into normal and high blood pressure groups. When training was successful, validation was not, suggesting over-fitting of the model. When the number of parameters was reduced, training itself deteriorated. An analysis of the input data showed that the neural networks were in fact finding consistent patterns, but that these patterns were not correlated with the presence or absence of high blood pressure.
    CONCLUSION: Neural network analysis, applied to risk factors for CVD in the Framingham data, did not lead to a clear classification of individuals into groups with normal and high blood pressure. Thus, although high blood pressure may itself be a risk factor for CVD, it does not appear to be clearly predictable from a set of other CVD risk factors.
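
    The over-fitting pattern described above (training succeeds while validation does not) can be seen directly by comparing training and validation accuracy. The sketch below is a generic illustration on synthetic data whose labels are, by construction, independent of the inputs, so any apparent fit is memorisation; it is not the authors' analysis of the Framingham data.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Synthetic risk factors with labels independent of the inputs: a network can
# only "learn" these by memorising the training set.
rng = np.random.default_rng(1)
X = rng.normal(size=(400, 20))
y = rng.integers(0, 2, size=400)           # normal vs. high blood pressure, random

X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3, random_state=1)

net = MLPClassifier(hidden_layer_sizes=(50,), max_iter=5000, random_state=1)
net.fit(X_tr, y_tr)

train_acc = net.score(X_tr, y_tr)
val_acc = net.score(X_val, y_val)
print(f"train accuracy {train_acc:.2f} vs. validation accuracy {val_acc:.2f}")
# A markedly higher training accuracy than validation accuracy (close to chance)
# is the over-fitting signature discussed in the abstract.
```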

    Chromosome mapping: radiation hybrid data and stochastic spin models

    This work approaches human chromosome mapping by developing algorithms for ordering markers associated with radiation hybrid data. Motivated by recent work of Boehnke et al. [1], we formulate the ordering problem in terms of stochastic spin models used to search for minimum-break marker configurations. As a particular application, the methods developed are applied to 14 human chromosome-21 markers tested by Cox et al. [2]. The methods generate configurations consistent with the best found by others. Additionally, we find that the set of low-lying configurations is described by a Markov-like ordering probability distribution. The distribution displays cluster correlations reflecting closely linked loci.
    Comment: 26 pages, uuencoded LaTeX, submitted to Phys. Rev. E
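
    At its core, the search described above is a stochastic minimisation, over marker orderings, of the number of obligate breaks implied by the radiation hybrid retention patterns. The sketch below is a generic simulated-annealing version of that idea on synthetic retention data; it is not the authors' spin-model formulation, and the move set, cooling schedule and data are assumptions for illustration.

```python
import math
import random

random.seed(0)

# Synthetic retention data: rows are hybrids, columns are markers (1 = retained).
# The real analysis used the Cox et al. chromosome-21 radiation hybrid panel.
n_hybrids, n_markers = 100, 14
data = [[random.randint(0, 1) for _ in range(n_markers)] for _ in range(n_hybrids)]

def obligate_breaks(order):
    """Count retention changes between adjacent markers, summed over all hybrids."""
    return sum(
        sum(row[a] != row[b] for row in data)
        for a, b in zip(order, order[1:])
    )

# Simulated annealing over marker orders: propose a swap of two markers,
# always accept downhill moves, accept uphill moves with a Boltzmann probability.
order = list(range(n_markers))
cost = obligate_breaks(order)
T = 50.0
for step in range(5000):
    i, j = random.sample(range(n_markers), 2)
    order[i], order[j] = order[j], order[i]
    new_cost = obligate_breaks(order)
    if new_cost <= cost or random.random() < math.exp((cost - new_cost) / T):
        cost = new_cost
    else:
        order[i], order[j] = order[j], order[i]   # reject: undo the swap
    T *= 0.999                                    # slow geometric cooling

print("best order found:", order, "with", cost, "obligate breaks")
```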