rp-Process weak-interaction mediated rates of waiting-point nuclei
Electron capture and positron decay rates are calculated for
neutron-deficient Kr and Sr waiting-point nuclei in stellar matter. The
calculation is performed within the framework of the pn-QRPA model for
rp-process conditions. Fine-tuning of the particle-particle and particle-hole
interaction parameters and a proper choice of the deformation parameter resulted in an
accurate reproduction of the measured half-lives. The same model parameters
were used to calculate stellar rates. Inclusion of measured Gamow-Teller
strength distributions finally led to a reliable calculation of weak rates that
reproduced the measured half-lives well under limiting conditions. Under
rp-process conditions, the electron capture and positron decay rates on certain
of the Kr and Sr waiting-point nuclei are of comparable magnitude, whereas for
others the electron capture rates are 1--2 orders of magnitude larger than the
corresponding positron decay rates. The pn-QRPA calculated electron capture
rates on Kr are larger than those calculated previously. The present
calculation strongly suggests that, under rp-process conditions, electron
capture rates form an integral part of weak-interaction mediated rates and
should not be neglected in nuclear reaction network calculations, as has been done previously.
Comment: 13 pages, 4 figures, 4 tables; Astrophysics and Space Science (2012)
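As a rough sketch of how such stellar weak rates are typically assembled (generic pn-QRPA-style notation, not necessarily the exact conventions of this paper), the electron capture rate in hot, dense matter sums over thermally populated parent states i and daughter states j,

\[
\lambda^{ec}(\rho, T) = \sum_i P_i(T) \sum_j \frac{\ln 2}{(ft)_{ij}}\, f_{ij}(\rho, T, E_F),
\qquad
\frac{1}{(ft)_{ij}} \propto B_{ij}(\mathrm{F}) + \left(\frac{g_A}{g_V}\right)^{2} B_{ij}(\mathrm{GT}),
\]

where P_i(T) is the occupation probability of parent state i, B_{ij}(F) and B_{ij}(GT) are the Fermi and Gamow-Teller strengths, and f_{ij} is the phase-space integral over the stellar electron distribution with Fermi energy E_F; positron decay rates take the same form with the corresponding lepton phase space.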
Ground and excited states Gamow-Teller strength distributions of iron isotopes and associated capture rates for core-collapse simulations
This paper reports on the microscopic calculation of ground- and excited-state
Gamow-Teller (GT) strength distributions, in both the electron capture and
electron decay directions, for isotopes of Fe. The associated electron and
positron capture rates for these isotopes of iron are also calculated in
stellar matter. These calculations were recently introduced and this paper is a
follow-up which discusses in detail the GT strength distributions and stellar
capture rates of key iron isotopes. The calculations are performed within the
framework of the proton-neutron quasiparticle random phase approximation
(pn-QRPA) theory. The pn-QRPA theory allows a microscopic
\textit{state-by-state} calculation of GT strength functions and stellar
capture rates which greatly increases the reliability of the results. For the
first time, experimentally determined nuclear deformations are taken into account. In the
cores of massive stars, isotopes of iron (Fe) are considered to be
key players in decreasing the electron-to-baryon ratio (Y_e), mainly via
electron capture on these nuclides. The structure of the presupernova star is
altered both by the changes in Y_e and by the entropy of the core material.
Results are encouraging and are compared against measurements (where possible)
and other calculations. The calculated electron capture rates are in overall
good agreement with the shell model results. During the presupernova evolution
of massive stars, from the oxygen shell burning stages until around the end of
convective core silicon burning, the calculated electron capture rates on
Fe are around three times larger than the corresponding shell model
rates. The calculated positron capture rates, however, are suppressed by two to
five orders of magnitude.
Comment: 18 pages, 12 figures, 10 tables
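Schematically, and in standard notation rather than anything quoted from this paper, the role of these rates in the evolution of the electron-to-baryon ratio is

\[
\dot{Y}_e = \sum_k \frac{X_k}{A_k}\left(\lambda_k^{\beta^-} + \lambda_k^{pc} - \lambda_k^{ec} - \lambda_k^{\beta^+}\right),
\]

where X_k and A_k are the mass fraction and mass number of nuclide k: electron capture and beta-plus decay lower Y_e, while beta-minus decay and positron capture raise it, which is why the relative size of capture and decay rates on the abundant iron isotopes matters for the presupernova structure.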
Pion and Sigma Polarizabilities and Radiative Transitions
Fermilab E781 plans measurements of gamma-Sigma and gamma-pion
interactions using a 600 GeV beam of Sigmas and pions, and a virtual photon
target. Pion polarizabilities and radiative transitions will be measured in
this experiment. The former can test a precise prediction of chiral symmetry;
the latter, for a_1(1260) -> pi + gamma, is important for understanding the
polarizability. The experiment also measures polarizabilities and radiative
transitions for Sigma hyperons. The polarizabilities can test predictions of
baryon chiral perturbation theory. The radiative transitions to the
Sigma*(1385) provide a measure of the magnetic moment of the s-quark. Previous
experimental and theoretical results for gamma-pi and gamma-Sigma interactions
are given. The E781 experiment is described.
Comment: 13 pages text (TeX), Tel Aviv U. Preprint TAUP 2204-94, uses the
Springer-Verlag TeX macro package lecproc.cmm (appended at the end of the tex file,
following \byebye), which requires extracting lecproc.cmm and putting this
file in your directory in addition to the tex file (mmcd.tex) before TeX
processing. lecproc.cmm should be used following the instructions and guidelines
available from Springer-Verlag. Submitted to the Proceedings of the Workshop on
Chiral Dynamics, Massachusetts Institute of Technology, July 1994, Eds. A.
Bernstein, B. Holstein. Replaced Oct. 4 to add TAUP preprint number. Replaced
Oct. 12 to correct Pb target thickness from 1.3% interaction to 0.3%.
Hadronic contributions to (g-2) of the leptons and to the effective fine structure constant
The hadronic contributions to the anomalous magnetic moments of the leptons
and to the effective fine structure constant at the Z-mass are reevaluated
using all presently available data.
Comment: 36 pages, 11 Postscript figures, available at
ftp://129.129.40.58/pub/preprints/vapogm2.ps.g
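For orientation (these are the standard dispersion integrals underlying such reevaluations, not expressions quoted from the paper), both quantities are obtained from the measured ratio R(s) = sigma(e+e- -> hadrons)/sigma(e+e- -> mu+mu-):

\[
a_\mu^{\mathrm{had,LO}} = \frac{1}{3}\left(\frac{\alpha}{\pi}\right)^{2}\int_{4m_\pi^2}^{\infty}\frac{ds}{s}\,K(s)\,R(s),
\qquad
\Delta\alpha_{\mathrm{had}}(M_Z^2) = -\frac{\alpha M_Z^2}{3\pi}\,\mathrm{Re}\int_{4m_\pi^2}^{\infty}ds\,\frac{R(s)}{s\,(s - M_Z^2 - i\varepsilon)},
\]

where K(s) is a known QED kernel; the experimental uncertainties of R(s) propagate directly into both integrals, which is what makes a reevaluation with all available data worthwhile.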
Averaging in the presence of sliding errors
In many cases the precision with which an experiment can measure a physical quantity depends on the value of that quantity. Not having access to the true value, experimental groups are forced to assign their errors based on their own measured value. Procedures which attempt to derive an improved estimate of the true value by a suitable average of such measurements usually weight each experiment's measurement according to the reported variance. However, one is in a position to derive improved error estimates for each experiment from the average itself, provided an approximate idea of the functional dependence of the error on the central value is known. Failing to do so can lead to substantial biases. Techniques which avoid these biases without loss of precision are proposed and their performance is analyzed with examples. These techniques are quite general and can bring about an improvement even when the behavior of the errors is not well understood. Perhaps the most important application of the technique is in fitting curves to histograms.
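As an illustration of the bias and of one iterative remedy (a toy Python sketch, not the specific procedures proposed in the paper), suppose every experiment reports a purely relative error evaluated at its own central value; weighting by the self-reported variances then systematically favors downward fluctuations, while re-evaluating each error at the current average and iterating removes the effect:

    import numpy as np

    def naive_weighted_average(x, sigma):
        """Inverse-variance average using each experiment's own reported error."""
        w = 1.0 / sigma**2
        return np.sum(w * x) / np.sum(w)

    def sliding_error_average(x, rel_err, n_iter=20):
        """Re-evaluate each error at the current average and iterate.

        Assumes, purely for illustration, that experiment i's error scales with
        the true value as rel_err[i] * value, so the reported error
        rel_err[i] * x[i] is itself distorted by the measurement fluctuation.
        """
        mean = naive_weighted_average(x, rel_err * x)  # biased starting point
        for _ in range(n_iter):
            sigma = rel_err * mean                     # errors evaluated at the average
            w = 1.0 / sigma**2
            mean = np.sum(w * x) / np.sum(w)
        return mean

    # Toy study: five experiments, each with a 10% relative error.
    rng = np.random.default_rng(0)
    true_value, rel = 100.0, 0.10
    x = rng.normal(true_value, rel * true_value, size=(20000, 5))
    r = np.full(5, rel)
    naive = np.array([naive_weighted_average(xi, r * xi) for xi in x])
    fixed = np.array([sliding_error_average(xi, r) for xi in x])
    print(naive.mean())  # noticeably below 100: self-reported weights bias the average low
    print(fixed.mean())  # close to 100: iterated weights remove the bias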
Lectures on Probability and Statistics
These notes are based on a set of statistics lectures delivered at Imperial College to the first-year postgraduate students in High Energy Physics. They are designed for the professional experimental scientist. They begin with the fundamentals of probability theory, in which one makes statements about the set of possible outcomes of an experiment, based upon a complete a priori understanding of the experiment. For example, in a roll of a set of (fair) dice, one understands a priori that any given side of each die is equally likely to turn up. From that, we can calculate the probability of any specified outcome. They finish with the inverse problem, statistics. Here, one begins with a set of actual data (e.g., the outcomes of a number of rolls of the dice), and attempts to make inferences about the state of nature which gave those data (e.g., the likelihood of seeing any given side of any given die turn up). This is a much more difficult problem, of course, and one's solutions often turn out to be unsatisfactory in one respect or another. Hopefully, the reader will come away from these notes with a feel for some of the problems and uncertainties involved. Although there are standard approaches, most of the time there is no cut-and-dried "best" solution, one that is "best" according to every criterion.
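A minimal worked example of the two directions described above (illustrative Python only, not material drawn from the lectures): the forward probability calculation for fair dice, and the inverse statistical inference of face probabilities from observed rolls of a possibly loaded die.

    import numpy as np
    from itertools import product

    # Forward problem (probability): with two fair dice all 36 ordered outcomes
    # are equally likely, so the probability of an event is the fraction of
    # outcomes satisfying it.
    outcomes = list(product(range(1, 7), repeat=2))
    p_sum_is_7 = sum(1 for a, b in outcomes if a + b == 7) / len(outcomes)
    print(p_sum_is_7)  # 6/36, about 0.167

    # Inverse problem (statistics): infer the face probabilities of a possibly
    # loaded die from data.  For a multinomial model the maximum-likelihood
    # estimate is simply the observed frequency of each face.
    rng = np.random.default_rng(1)
    true_p = np.array([0.10, 0.15, 0.15, 0.15, 0.15, 0.30])  # hypothetical loading
    rolls = rng.choice(6, size=1000, p=true_p)
    counts = np.bincount(rolls, minlength=6)
    print(counts / counts.sum())  # scatters around true_p; the estimate carries uncertainty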
Off-line analysis software for the Texas Test Rig
Data analysis for the TTR requires integrating a large number of muon chamber technologies, each with different requirements, into a single analysis chain. Many of these technologies come with their own software with differing conventions; these packages are grafted onto the chain. Data are stored on a tape robot, with essential information kept in a database where it may be queried. Operation is carried out from special-purpose X windows designed to facilitate data selection and its subsequent analysis. Program development was done using the Hewlett-Packard Softbench product.