557 research outputs found
Risk Factors for In-hospital Nonhemorrhagic Stroke in Patients With Acute Myocardial Infarction Treated With Thrombolysis: Results from GUSTO-I
BACKGROUND: Nonhemorrhagic stroke occurs in 0.1% to 1.3% of patients with
acute myocardial infarction who are treated with thrombolysis, with
substantial associated mortality and morbidity. Little is known about the
risk factors for its occurrence. METHODS AND RESULTS: We studied the 247
patients who suffered in-hospital nonhemorrhagic stroke among patients randomly
assigned to one of four thrombolytic regimens within 6 hours of symptom onset
in the GUSTO-I trial. We assessed the univariable and multivariable baseline risk factors
for nonhemorrhagic stroke and created a scoring nomogram from the baseline
multivariable modeling. We used time-dependent Cox modeling to determine
multivariable in-hospital predictors of nonhemorrhagic stroke. Baseline
and in-hospital predictors were then combined to determine the overall
predictors of nonhemorrhagic stroke. Of the 247 patients, 42 (17%) died
and another 98 (40%) were disabled by 30-day follow-up. Older age was the
most important baseline clinical predictor of nonhemorrhagic stroke,
followed by higher heart rate, history of stroke or transient ischemic
attack, diabetes, previous angina, and history of hypertension. These
factors remained statistically significant predictors in the combined
model, along with worse Killip class, coronary angiography, bypass
surgery, and atrial fibrillation/flutter. CONCLUSIONS: Nonhemorrhagic
stroke is a serious event in patients with acute myocardial infarction who
are treated with thrombolytic, antithrombin, and antiplatelet therapy. We
developed a simple nomogram that can predict the risk of nonhemorrhagic
stroke on the basis of baseline clinical characteristics. Prophylactic
anticoagulation may be an important treatment strategy for patients at high
risk of nonhemorrhagic stroke, but further study is needed.
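The nomogram described above combines the baseline predictors into a single additive score. As a purely illustrative sketch of how such a score works, the toy function below assigns hypothetical point weights to the predictors named in the abstract; the actual GUSTO-I weights appear only in the paper.

```python
# Toy additive risk score in the spirit of the nomogram described above.
# All weights are HYPOTHETICAL placeholders, not the GUSTO-I coefficients.
def stroke_risk_points(age, heart_rate, prior_stroke_tia, diabetes,
                       prior_angina, hypertension):
    """Sum hypothetical points over the baseline predictors named above."""
    points = 0.0
    points += max(0, age - 50) * 1.0         # age: strongest baseline predictor
    points += max(0, heart_rate - 60) * 0.2  # higher heart rate
    points += 10.0 if prior_stroke_tia else 0.0
    points += 5.0 if diabetes else 0.0
    points += 4.0 if prior_angina else 0.0
    points += 3.0 if hypertension else 0.0
    return points
```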
Can forest management based on natural disturbances maintain ecological resilience?
Given the increasingly global stresses on forests, many ecologists argue that managers must maintain ecological resilience: the capacity of ecosystems to absorb disturbances without undergoing fundamental change. In this review we ask: Can the emerging paradigm of natural-disturbance-based management (NDBM) maintain ecological resilience in managed forests? Applying resilience theory requires careful articulation of the ecosystem state under consideration, the disturbances and stresses that affect the persistence of possible alternative states, and the spatial and temporal scales of management relevance. Implementing NDBM while maintaining resilience means recognizing that (i) biodiversity is important for long-term ecosystem persistence, (ii) natural disturbances play a critical role as a generator of structural and compositional heterogeneity at multiple scales, and (iii) traditional management tends to produce forests more homogeneous than those disturbed naturally and increases the likelihood of unexpected catastrophic change by constraining variation of key environmental processes. NDBM may maintain resilience if silvicultural strategies retain the structures and processes that perpetuate desired states while reducing those that enhance resilience of undesirable states. Such strategies require an understanding of harvesting impacts on slow ecosystem processes, such as seed-bank or nutrient dynamics, which in the long term can lead to ecological surprises by altering the forest's capacity to reorganize after disturbance.
Results of the Search for Strange Quark Matter and Q-balls with the SLIM Experiment
The SLIM experiment at the Chacaltaya high altitude laboratory was sensitive
to nuclearites and Q-balls, which could be present in the cosmic radiation as
possible Dark Matter components. It was sensitive also to strangelets, i.e.
small lumps of Strange Quark Matter predicted at such altitudes by various
phenomenological models. The analysis of 427 m^2 of Nuclear Track Detectors
exposed for 4.22 years showed no candidate event. New upper limits on the flux
of downgoing nuclearites and Q-balls at the 90% C.L. were established. The null
result also restricts models for strangelet propagation through the Earth's
atmosphere.
Comment: 14 pages, 11 EPS figures
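With no candidate events, the quoted 90% C.L. flux limits follow from simple Poisson counting: zero observed events exclude any expectation above N90 = -ln(0.10) ≈ 2.3. A minimal sketch using the exposure from the abstract; the 2π sr acceptance for downgoing particles is an assumption here, not the SLIM collaboration's detailed, direction-dependent acceptance.

```python
import math

# Sketch of a Poisson 90% C.L. flux upper limit for zero observed candidates.
# Area and exposure time are from the abstract; the 2*pi sr downgoing
# acceptance is an assumed simplification.
AREA_M2 = 427.0               # exposed detector area
YEARS = 4.22                  # exposure time
SOLID_ANGLE_SR = 2 * math.pi  # assumed downgoing acceptance

n90 = -math.log(0.10)         # events excluded at 90% C.L. for zero observed
seconds = YEARS * 365.25 * 24 * 3600
flux_limit = n90 / (AREA_M2 * SOLID_ANGLE_SR * seconds)  # m^-2 sr^-1 s^-1
```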
The Science of Sungrazers, Sunskirters, and Other Near-Sun Comets
This review addresses our current understanding of comets that venture close to the Sun, and are hence exposed to much more extreme conditions than comets that are typically studied from Earth. The extreme solar heating and plasma environments that these objects encounter change many aspects of their behaviour, thus yielding valuable information both on the comets themselves, complementing other data we have on primitive solar system bodies, and on the near-solar environment which they traverse. We propose clear definitions for these comets: we use the term near-Sun comets to encompass all objects that pass sunward of the perihelion distance of planet Mercury (0.307 AU); sunskirters are defined as objects that pass within 33 solar radii of the Sun’s centre, equal to half of Mercury’s perihelion distance; and the commonly used term sungrazers denotes objects that reach perihelion within 3.45 solar radii, i.e. the fluid Roche limit. Finally, comets with orbits that intersect the solar photosphere are termed sundivers. We summarize past studies of these objects, as well as the instruments and facilities used to study them, including space-based platforms that have led to a recent revolution in the quantity and quality of relevant observations. Relevant comet populations are described, including the Kreutz, Marsden, Kracht, and Meyer groups and near-Sun asteroids, together with a brief discussion of their origins. The importance of light curves and the clues they provide on cometary composition are emphasized, together with what information has been gleaned about nucleus parameters, including the sizes and masses of objects and their families, and their tensile strengths. The physical processes occurring at these objects are considered in some detail, including the disruption of nuclei, sublimation, and ionisation, and we consider the mass, momentum, and energy loss of comets in the corona and of those that venture to lower altitudes.
The different components of comae and tails are described, including dust, neutral and ionised gases, their chemical reactions, and their contributions to the near-Sun environment. Comet-solar wind interactions are discussed, including the use of comets as probes of solar wind and coronal conditions in their vicinities. We address the relevance of work on comets near the Sun to similar objects orbiting other stars, and conclude with a discussion of future directions for the field and the planned ground- and space-based facilities that will allow us to address those science topics.
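The proposed taxonomy reduces to a cascade of perihelion-distance thresholds. A minimal sketch: the thresholds are those defined in this review, while the 1 R_sun photosphere boundary and the AU-to-solar-radii conversion are standard values assumed here.

```python
AU_IN_RSUN = 215.03  # approximate number of solar radii per astronomical unit

def classify(q_rsun: float) -> str:
    """Classify an object by perihelion distance q, given in solar radii."""
    if q_rsun < 1.0:
        return "sundiver"        # orbit intersects the solar photosphere
    if q_rsun < 3.45:
        return "sungrazer"       # inside the fluid Roche limit
    if q_rsun < 33.0:
        return "sunskirter"      # within half of Mercury's perihelion distance
    if q_rsun < 0.307 * AU_IN_RSUN:
        return "near-Sun comet"  # sunward of Mercury's perihelion
    return "not a near-Sun comet"
```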
Spallation reactions. A successful interplay between modeling and applications
Spallation reactions are a type of nuclear reaction that occurs in space
through the interaction of cosmic rays with interstellar bodies. The first
spallation reactions induced with an accelerator took place in 1947 at the
Berkeley cyclotron (University of California) with 200 MeV deuteron and 400
MeV alpha beams. They highlighted the multiple emission of neutrons and charged
particles and the production of a large number of residual nuclei far different
from the target nuclei. The same year, R. Serber described the reaction in two
steps: a first, fast one with high-energy particle emission leading to an
excited remnant nucleus, and a second, much slower one, the de-excitation of
the remnant. In 2010 the IAEA organized a workshop to present the results of
the most widely used spallation codes within a benchmark of spallation models.
While one of the goals was to identify the deficiencies, if any, in each code,
one remarkable outcome was the overall high quality of some models and thus
the great improvement achieved since Serber. Particle transport codes can
then rely on such spallation models to treat the reactions between a light
particle and an atomic nucleus at energies spanning from a few tens of MeV up
to some GeV. An overview of spallation reaction modeling is presented in
order to point out the incomparable contribution of models based on basic
physics to the numerous applications where such reactions occur. Validations
and benchmarks, which are necessary steps in the improvement process, are also
addressed, as well as potential future domains of development. Spallation
reaction modeling is a representative case of continuous studies that aim at
understanding a reaction mechanism and end up as a powerful tool.
Comment: 59 pages, 54 figures, Review
A Measurement of Psi(2S) Resonance Parameters
Cross sections for e+e- to hadrons, pi+pi- J/Psi, and mu+mu- have been
measured in the vicinity of the Psi(2S) resonance using the BESII detector
operated at the BEPC. The Psi(2S) total width; the partial widths to hadrons,
pi+pi- J/Psi, and muons; and the corresponding branching fractions have been
determined to be Gamma(total)= (264+-27) keV; Gamma(hadron)= (258+-26) keV,
Gamma(mu)= (2.44+-0.21) keV, and Gamma(pi+pi- J/Psi)= (85+-8.7) keV; and
Br(hadron)= (97.79+-0.15)%, Br(pi+pi- J/Psi)= (32+-1.4)%, Br(mu)=
(0.93+-0.08)%, respectively.
Comment: 8 pages, 6 figures
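The quoted partial widths and branching fractions are related by Br = Gamma(partial) / Gamma(total), which gives a quick consistency check on the numbers (central values only; the quoted fractions come from the fit, so small differences are expected).

```python
# Consistency check: Br = Gamma(partial) / Gamma(total), using the central
# values in keV quoted in the abstract.
gamma_total = 264.0
partial_widths = {"hadrons": 258.0, "mu+mu-": 2.44, "pi+pi- J/Psi": 85.0}

branching = {mode: width / gamma_total for mode, width in partial_widths.items()}
# 258/264 ~ 97.7%, 2.44/264 ~ 0.92%, 85/264 ~ 32.2%, close to the quoted values
```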
An improved method for measuring muon energy using the truncated mean of dE/dx
The measurement of muon energy is critical for many analyses in large
Cherenkov detectors, particularly those that involve separating
extraterrestrial neutrinos from the atmospheric neutrino background. Muon
energy has traditionally been determined by measuring the specific energy loss
(dE/dx) along the muon's path and relating the dE/dx to the muon energy.
Because high-energy muons (E_mu > 1 TeV) lose energy randomly, the spread in
dE/dx values is quite large, leading to a typical energy resolution of 0.29 in
log10(E_mu) for a muon observed over a 1 km path length in the IceCube
detector. In this paper, we present an improved method that uses a truncated
mean and other techniques to determine the muon energy. The muon track is
divided into separate segments with individual dE/dx values. The elimination of
segments with the highest dE/dx results in an overall dE/dx that is more
closely correlated to the muon energy. This method results in an energy
resolution of 0.22 in log10(E_mu), which gives a 26% improvement. This
technique is applicable to any large water or ice detector and potentially to
large scintillator or liquid argon detectors.
Comment: 12 pages, 16 figures
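The segment-based procedure above can be sketched in a few lines. This is an illustrative implementation, not the IceCube code: the 40% truncation fraction is an assumed value, and the real analysis applies further corrections before mapping the truncated mean to energy.

```python
def truncated_mean_dedx(segment_dedx, truncate_fraction=0.4):
    """Mean dE/dx after discarding the highest-loss fraction of track segments.

    Dropping the largest per-segment losses suppresses the stochastic
    fluctuations (bremsstrahlung, pair production) that broaden plain dE/dx.
    The 0.4 truncation fraction is an assumed illustrative choice.
    """
    ordered = sorted(segment_dedx)
    keep = max(1, int(len(ordered) * (1.0 - truncate_fraction)))
    return sum(ordered[:keep]) / keep
```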
All-particle cosmic ray energy spectrum measured with 26 IceTop stations
We report on a measurement of the cosmic ray energy spectrum with the IceTop
air shower array, the surface component of the IceCube Neutrino Observatory at
the South Pole. The data used in this analysis were taken between June and
October, 2007, with 26 surface stations operational at that time, corresponding
to about one third of the final array. The fiducial area used in this analysis
was 0.122 km^2. The analysis investigated the energy spectrum from 1 to 100 PeV
measured for three different zenith angle ranges between 0° and 46°.
Because of the isotropy of cosmic rays in this energy range the spectra from
all zenith angle intervals have to agree. The cosmic-ray energy spectrum was
determined under different assumptions on the primary mass composition. Good
agreement of spectra in the three zenith angle ranges was found for the
assumption of pure proton and a simple two-component model. For zenith angles
θ < 30°, where the mass dependence is smallest, the knee in the
cosmic ray energy spectrum was observed between 3.5 and 4.32 PeV, depending on
composition assumption. Spectral indices above the knee range from -3.08 to
-3.11, depending on the primary mass composition assumption. Moreover, an
indication of a flattening of the spectrum above 22 PeV was observed.
Comment: 38 pages, 17 figures
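Spectra of this kind are commonly described by a broken power law with a break at the knee. A hedged sketch: the knee position and the index above it are taken from this abstract, while the index below the knee (-2.7) is a typical literature value assumed here for illustration.

```python
def flux(energy_ev, knee_ev=4.0e15, gamma_below=-2.7, gamma_above=-3.1, norm=1.0):
    """Broken power-law flux, normalized to `norm` at the knee.

    knee_ev and gamma_above follow the abstract; gamma_below is an assumed
    typical value, not a result of this analysis.
    """
    gamma = gamma_below if energy_ev < knee_ev else gamma_above
    return norm * (energy_ev / knee_ev) ** gamma
```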
Measurement of the B0-anti-B0-Oscillation Frequency with Inclusive Dilepton Events
The B0-anti-B0 oscillation frequency has been measured with a sample of
23 million B anti-B pairs collected with the BABAR detector at the PEP-II
asymmetric B Factory at SLAC. In this sample, we select events in which both B
mesons decay semileptonically and use the charge of the leptons to identify the
flavor of each B meson. A simultaneous fit to the decay time difference
distributions for opposite- and same-sign dilepton events yields the
oscillation frequency Delta(m_d) in ps^-1.
Comment: 7 pages, 1 figure, submitted to Physical Review Letters
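In the ideal limit the dilepton method reduces to a cosine: with perfect flavor tagging and decay-time resolution, the asymmetry between opposite- and same-sign events is A(Δt) = cos(Δm_d Δt). A minimal sketch with an illustrative Δm_d, since the measured value is elided from this abstract.

```python
import math

def dilepton_asymmetry(dt_ps: float, dmd_per_ps: float = 0.5) -> float:
    """Ideal mixing asymmetry (N_opp - N_same)/(N_opp + N_same) = cos(dm_d * dt).

    dmd_per_ps = 0.5 is an illustrative placeholder, not the BABAR result;
    mistagging and decay-time resolution dilute this in real data.
    """
    return math.cos(dmd_per_ps * dt_ps)
```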