
    Latest Results from the Heidelberg-Moscow Double Beta Decay Experiment

    New results for the double beta decay of 76Ge are presented. They are extracted from data obtained with the HEIDELBERG-MOSCOW experiment, which operates five enriched 76Ge detectors in an extremely low-background environment in the Gran Sasso underground laboratory. The two-neutrino-accompanied double beta decay is evaluated for the first time for all five detectors, with a statistical significance of 47.7 kg yr, resulting in a half-life of T_(1/2)^(2nu) = [1.55 +- 0.01 (stat) +0.19/-0.15 (syst)] x 10^21 yr. The lower limit on the half-life of the 0nu beta-beta decay obtained with pulse shape analysis is T_(1/2)^(0nu) > 1.9 x 10^25 yr (3.1 x 10^25 yr) at 90% C.L. (68% C.L.), with 35.5 kg yr of exposure. This corresponds to an upper limit on the effective Majorana neutrino mass of 0.35 eV (0.27 eV). No evidence for a Majoron-emitting decay mode or for the neutrinoless mode is observed. Comment: 14 pages, RevTeX, 6 figures. Talk presented at the Third International Conference 'Dark Matter in Astro and Particle Physics' (DARK2000), to be published in Proc. of DARK2000, Springer (2000). See also the HEIDELBERG Non-Accelerator Particle Physics group home page: http://www.mpi-hd.mpg.de/non_acc
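    The quoted mass limits follow from the half-life via the standard 0νββ rate formula, stated here from general knowledge for the reader's convenience (it is not reproduced from the talk itself):

    ```latex
    \left(T_{1/2}^{0\nu}\right)^{-1}
      = G^{0\nu}(Q,Z)\,\bigl|M^{0\nu}\bigr|^{2}
        \left(\frac{\langle m_{\beta\beta}\rangle}{m_e}\right)^{2}
    ```

    Here G^{0ν}(Q,Z) is the phase-space factor, M^{0ν} the nuclear matrix element, and m_e the electron mass; the spread among calculated values of M^{0ν} is the dominant uncertainty when converting a half-life limit into a limit on the effective mass.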

    On the Quantitative Impact of the Schechter-Valle Theorem

    We evaluate the Schechter-Valle (Black Box) theorem quantitatively by considering the most general Lorentz-invariant Lagrangian consisting of point-like operators for neutrinoless double beta decay. It is well known that the Black Box operators induce Majorana neutrino masses at the four-loop level. This warrants the statement that an observation of neutrinoless double beta decay guarantees the Majorana nature of neutrinos. We calculate these radiatively generated masses and find that they are many orders of magnitude smaller than the observed neutrino masses and splittings. Thus, some lepton-number-violating New Physics (which may not be related to neutrino masses at tree level) may induce Black Box operators which can explain an observed rate of neutrinoless double beta decay. Although these operators guarantee finite Majorana neutrino masses, the smallness of the Black Box contributions implies that other neutrino mass terms (Dirac or Majorana) must exist. If neutrino masses have a significant Majorana contribution, then this will become the dominant part of the Black Box operator. However, neutrinos might also be predominantly Dirac particles, while other lepton-number-violating New Physics dominates neutrinoless double beta decay. Translating an observed rate of neutrinoless double beta decay into neutrino masses would then be completely misleading. Although the principal statement of the Schechter-Valle theorem remains valid, we conclude that the Black Box diagram itself radiatively generates only mass terms which are many orders of magnitude too small to explain neutrino masses. Therefore, other operators must give the leading contributions to neutrino masses, which could be of Dirac or Majorana nature. Comment: 18 pages, 4 figures; v2: minor corrections, reference added, matches journal version; v3: typo corrected, physics result and conclusions unchanged

    A Large Scale Double Beta and Dark Matter Experiment: GENIUS

    The recent results from the HEIDELBERG-MOSCOW experiment have demonstrated the large potential of double beta decay in the search for new physics beyond the Standard Model. To increase the present sensitivity for double beta decay and dark matter searches by a major step, much larger source strengths and much lower backgrounds are needed than in experiments currently in operation or under construction. We present a study of a recently proposed project which would operate one ton of 'naked' enriched GErmanium detectors in liquid NItrogen as shielding in an Underground Setup (GENIUS). It improves the sensitivity to neutrino masses to 0.01 eV. A ten-ton version would probe neutrino masses even down to 10^-3 eV. The first version would allow a test of the atmospheric neutrino problem, the second of at least part of the solar neutrino problem. Both versions would in addition allow significant contributions to testing several classes of GUT models, in particular tests of R-parity-breaking supersymmetry models, leptoquark masses and mechanisms, and right-handed W-boson masses, with a reach comparable to the LHC. The second goal of the experiment is the search for dark matter in the universe. The entire MSSM parameter space predicting neutralinos as dark matter particles could be covered already in a first step of the full experiment - with the same purity requirements but using only 100 kg of 76Ge, or even of natural Ge - making the experiment competitive with the LHC in the search for supersymmetry. The layout of the proposed experiment is discussed, and the shielding and purity requirements are studied using GEANT Monte Carlo simulations. As a demonstration of the feasibility of the experiment, first results of operating a 'naked' Ge detector in liquid nitrogen are presented. Comment: 22 pages, 12 figures, see also http://pluto.mpi-hd.mpg.de/~betalit/genius.htm

    Radiative contribution to neutrino masses and mixing in the μνSSM

    In an extension of the minimal supersymmetric standard model (popularly known as the μνSSM), three right-handed neutrino superfields are introduced to solve the μ-problem and to accommodate the non-vanishing neutrino masses and mixing. Neutrino masses at the tree level are generated through R-parity violation and a seesaw mechanism. We have analyzed the full effect of one-loop contributions to the neutrino mass matrix. We show that the current three-flavour global neutrino data can be accommodated in the μνSSM, in both the tree-level and one-loop-corrected analyses. We find that it is relatively easier to accommodate the normal hierarchical mass pattern compared to the inverted hierarchical or quasi-degenerate case when one-loop corrections are included. Comment: 51 pages, 14 figures (58 .eps files), expanded introduction, other minor changes, references added
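    The tree-level seesaw structure invoked above follows the generic type-I pattern, shown here schematically from general knowledge (the μνSSM-specific mass matrix additionally involves mixing with the neutralino sector):

    ```latex
    m_{\nu} \;\simeq\; -\, m_{D}\, M_{R}^{-1}\, m_{D}^{T}
    ```

    with m_D the Dirac mass matrix and M_R the Majorana mass matrix of the right-handed neutrinos; the one-loop corrections analyzed in the paper then perturb the eigenvalues and mixing angles of this tree-level matrix.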

    Constraining New Physics with a Positive or Negative Signal of Neutrino-less Double Beta Decay

    We investigate numerically how accurately one could constrain the strengths of different short-range contributions to neutrinoless double beta decay in effective field theory. Depending on the outcome of near-future experiments yielding information on the neutrino masses, the corresponding bounds or estimates can be stronger or weaker. A particularly interesting case, resulting in strong bounds, would be a positive signal of neutrinoless double beta decay that is consistent with complementary information from neutrino oscillation experiments, kinematical determinations of the neutrino mass, and measurements of the sum of light neutrino masses from cosmological observations. The keys to more robust bounds are improved knowledge of the nuclear physics involved and better experimental accuracy. Comment: 23 pages, 3 figures. Minor changes. Matches version published in JHEP

    Scientific and technical guidance for the preparation and presentation of a dossier for evaluation of an infant and/or follow-on formula manufactured from protein hydrolysates (Revision 1)

    Following a request from the European Commission, EFSA was asked to provide scientific and technical guidance for the preparation and presentation of a dossier for evaluation of an infant and/or follow-on formula manufactured from protein hydrolysates. This guidance document addresses the information and data to be submitted to EFSA on infant and follow-on formulae manufactured from protein hydrolysates with respect to the nutritional safety and suitability of the specific formula and/or the formula's efficacy in reducing the risk of developing allergy to milk proteins. The guidance will be further reviewed and updated with the experience gained from the evaluation of specific dossiers, and in the light of applicable Union guidelines and legislation. The guidance was adopted by the Panel on Dietetic Products, Nutrition and Allergies on 5 April 2017. Upon request from the European Commission in 2020, it has been revised to inform food business operators of the new provisions in the pre-submission phase and in the procedure set out in the General Food Law, as amended by the Transparency Regulation. This revised guidance applies to all dossiers submitted as of 27 March 2021 and shall be consulted for the preparation of dossiers intended to be submitted from that date onwards. For dossiers submitted prior to 27 March 2021, the previous guidance, published in May 2017, remains applicable.

    Non-Linear Population Firing Rates and Voltage Sensitive Dye Signals in Visual Areas 17 and 18 to Short Duration Stimuli

    Visual stimuli of short duration seem to persist longer after stimulus offset than stimuli of longer duration. This visual persistence must have a physiological explanation. In ferrets exposed to stimuli of different durations, we measured the relative changes in membrane potentials with a voltage-sensitive dye, together with the action potentials of populations of neurons in the upper layers of areas 17 and 18. For durations shorter than 100 ms, the timing and amplitude of the firing and membrane potentials showed several non-linear effects: the ON response became truncated, the OFF response was progressively reduced, and the timing of the OFF response was progressively delayed as the stimulus duration shortened. The offset of the stimulus elicited a sudden and strong negativity in the time derivative of the dye signal. All these non-linearities could be explained by the stimulus offset inducing a sudden inhibition in layers II–III, as indicated by the strongly negative time derivative of the dye signal. Despite the non-linear behavior of the layer II–III neurons, the sum of the action potentials, integrated from the peak of the ON response to the peak of the OFF response, was almost linearly related to the stimulus duration.

    Office and 24-hour heart rate and target organ damage in hypertensive patients

    <p>Abstract</p> <p>Background</p> <p>We investigated the association between heart rate and its variability with the parameters that assess vascular, renal and cardiac target organ damage.</p> <p>Methods</p> <p>A cross-sectional study was performed including a consecutive sample of 360 hypertensive patients without heart rate lowering drugs (aged 56 ± 11 years, 64.2% male). Heart rate (HR) and its standard deviation (HRV) in clinical and 24-hour ambulatory monitoring were evaluated. Renal damage was assessed by glomerular filtration rate and albumin/creatinine ratio; vascular damage by carotid intima-media thickness and ankle/brachial index; and cardiac damage by the Cornell voltage-duration product and left ventricular mass index.</p> <p>Results</p> <p>There was a positive correlation between ambulatory, but not clinical, heart rate and its standard deviation with glomerular filtration rate, and a negative correlation with carotid intima-media thickness, and night/day ratio of systolic and diastolic blood pressure. There was no correlation with albumin/creatinine ratio, ankle/brachial index, Cornell voltage-duration product or left ventricular mass index. In the multiple linear regression analysis, after adjusting for age, the association of glomerular filtration rate and intima-media thickness with ambulatory heart rate and its standard deviation was lost. According to the logistic regression analysis, the predictors of any target organ damage were age (OR = 1.034 and 1.033) and night/day systolic blood pressure ratio (OR = 1.425 and 1.512). 
Neither 24-hour HR nor 24-hour HRV reached statistical significance.</p> <p>Conclusions</p> <p>High ambulatory heart rate and its variability, but not clinical HR, are associated with decreased carotid intima-media thickness and a higher glomerular filtration rate, although this association is lost after adjusting for age.</p> <p>Trial Registration</p> <p>ClinicalTrials.gov: <a href="http://www.clinicaltrials.gov/ct2/show/NCT01325064">NCT01325064</a></p>

    Matrix models and sensitivity analysis of populations classified by age and stage : a vec-permutation matrix approach

    © The Author(s), 2011. This article is distributed under the terms of the Creative Commons Attribution License. The definitive version was published in Theoretical Ecology 5 (2012): 403-417, doi:10.1007/s12080-011-0132-2. Matrix population models in which individuals are classified by both age and stage can be constructed using the vec-permutation matrix. The resulting age-stage models can be used to derive the age-specific consequences of a stage-specific life history or to describe populations in which the vital rates respond to both age and stage. I derive a general formula for the sensitivity of any output (scalar, vector, or matrix-valued) of the model, to any vector of parameters, using matrix calculus. The matrices describing age-stage dynamics are almost always reducible; I present results giving conditions under which population growth is ergodic from any initial condition. As an example, I analyze a published stage-specific model of Scotch broom (Cytisus scoparius), an invasive perennial shrub. Sensitivity analysis of the population growth rate finds that the selection gradients on adult survival do not always decrease with age but may increase over a range of ages. This may have implications for the evolution of senescence in stage-classified populations. I also derive and analyze the joint distribution of age and stage at death and present a sensitivity analysis of this distribution and of the marginal distribution of age at death. This research was supported by National Science Foundation Grant DEB-0816514 and by a Research Award from the Alexander von Humboldt Foundation.
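    The vec-permutation construction described above can be sketched in a few lines of NumPy. This is a minimal illustration of the mechanism only; the matrix sizes and the random toy rates are assumptions made for the example, not values from the article:

    ```python
    import numpy as np

    def vec(A):
        # Column-stacking vectorization vec(A).
        return A.flatten(order="F")

    def vec_perm(m, n):
        # Vec-permutation (commutation) matrix K_{m,n}, defined by
        # K @ vec(A) == vec(A.T) for any m-by-n matrix A.
        K = np.zeros((m * n, m * n))
        for i in range(m):
            for j in range(n):
                # A[i, j] sits at index i + j*m in vec(A)
                # and at index j + i*n in vec(A.T).
                K[j + i * n, i + j * m] = 1.0
        return K

    # Toy age-stage model: s stages, w age classes (illustrative sizes).
    s, w = 2, 3
    rng = np.random.default_rng(0)

    # Stage-transition matrix for each age class (block diagonal over ages).
    B_blk = np.zeros((s * w, s * w))
    for a in range(w):
        B_blk[a * s:(a + 1) * s, a * s:(a + 1) * s] = 0.5 * rng.random((s, s))

    # Age advancement within each stage: subdiagonal shift matrix.
    U = np.eye(w, k=-1)

    K = vec_perm(s, w)

    # Population vector stored stages-within-ages; one projection step:
    # apply stage dynamics per age class, permute to ages-within-stages,
    # advance ages per stage, then permute back.
    A_tilde = K.T @ np.kron(np.eye(s), U) @ K @ B_blk
    ```

    The sensitivity results in the article then follow by differentiating outputs of this projection matrix with matrix calculus; the block structure above is what makes those derivatives tractable.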