Growth, electronic and electrical characterization of Ge-Rich Ge-Sb-Te alloy
In this study, we deposit a Ge-rich Ge-Sb-Te alloy in the amorphous phase on silicon substrates by physical vapor deposition (PVD). We study the electronic properties in situ by X-ray and ultraviolet photoemission spectroscopies (XPS and UPS) and carefully ascertain the alloy composition to be GST 29 20 28. Raman spectroscopy is then employed to corroborate the results of the photoemission study. X-ray diffraction upon annealing is used to study the crystallization of the alloy and to identify the effects of phase separation and the segregation of crystalline Ge, with the formation of grains along the [111] direction, as expected for such Ge-rich Ge-Sb-Te alloys. In addition, we report on the electrical characterization of single memory cells containing the Ge-rich Ge-Sb-Te alloy, including I-V characteristic curves, programming curves, and SET and RESET operation performance, as well as their dependence on annealing temperature. The electrical parameters align fairly well with the current state of the art of conventional (GeTe)n-(Sb2Te3)m alloys deposited by PVD, but with enhanced thermal stability, which allows for data retention up to 230 °C.
Adaptation to flood risk - results of international paired flood event studies
As flood impacts are increasing in large parts of the world, understanding the primary drivers of changes in risk is essential for effective adaptation. To gain more knowledge on the basis of empirical case studies, we analyze eight paired floods, that is, consecutive flood events that occurred in the same region, with the second flood causing significantly lower damage. These success stories of risk reduction were selected across different socioeconomic and hydro-climatic contexts. The potential of societies to adapt is uncovered by describing the societal changes they triggered, as well as the formal measures and spontaneous processes that reduced flood risk. This novel approach has the potential to build the basis for an international data collection and analysis effort to better understand and attribute changes in risk due to hydrological extremes in the framework of the IAHS Panta Rhei initiative. Across all case studies, we find that the lower damage caused by the second event was mainly due to significant reductions in vulnerability, for example, via raised risk awareness, preparedness, and improvements in organizational emergency management. Thus, vulnerability reduction plays an essential role in successful adaptation. Our work shows that there is a high potential to adapt, but the challenge remains to stimulate measures that reduce vulnerability and risk in periods in which extreme events do not occur.
Dynamic configuration of the CMS Data Acquisition cluster
The CMS Data Acquisition cluster, which runs about 10,000 applications, is configured dynamically at run time. XML configuration documents determine which applications are executed on each node and over which networks these applications communicate. Through this mechanism the DAQ system may be adapted to the required performance, partitioned in order to perform (test) runs in parallel, or restructured in case of hardware faults. This paper presents the CMS DAQ Configurator tool, which is used to generate comprehensive configurations of the CMS DAQ system from a high-level description given by the user. Using a database of configuration templates and a database containing a detailed model of the hardware modules, data and control links, nodes, and network topology, the tool automatically determines which applications are needed, on which nodes they should run, and over which networks the event traffic will flow. The tool computes the application parameters and generates the XML configuration documents as well as the configuration of the run-control system. The performance of the tool and operational experience during CMS commissioning and the first LHC runs are discussed.
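As an illustration of the kind of document such a tool emits, a mapping of applications to nodes and networks can be serialized as XML along the following lines. This is a minimal sketch only: the element and attribute names, host names, and ports are hypothetical, not the actual CMS DAQ schema.

```python
# Hypothetical sketch: serialize a node -> applications mapping into an
# XML configuration document. All names here are illustrative only, not
# the real CMS DAQ schema.
import xml.etree.ElementTree as ET

# Which application runs on which node, and over which network/port it talks.
layout = {
    "ru-builder-01": [("ReadoutUnit", "event-data", 9001)],
    "bu-01": [("BuilderUnit", "event-data", 9002)],
}

config = ET.Element("configuration")
for hostname, apps in layout.items():
    node = ET.SubElement(config, "node", hostname=hostname)
    for app_class, network, port in apps:
        # "class" is a Python keyword, so pass it via the attrib dict.
        app = ET.SubElement(node, "application", {"class": app_class})
        ET.SubElement(app, "endpoint", network=network, port=str(port))

print(ET.tostring(config, encoding="unicode"))
```

A real generator would, as the abstract describes, derive the layout from template and hardware-model databases rather than a hard-coded dictionary.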
Performance of the CMS Cathode Strip Chambers with Cosmic Rays
The Cathode Strip Chambers (CSCs) constitute the primary muon tracking device
in the CMS endcaps. Their performance has been evaluated using data taken
during a cosmic ray run in fall 2008. Measured noise levels are low, with the
number of noisy channels well below 1%. The coordinate resolution was measured for
all chamber types and falls in the range of 47 microns to 243 microns. The
efficiencies for local charged-track triggers, hit reconstruction, and segment
reconstruction were measured, and all are above 99%. The timing resolution per
layer is approximately 5 ns.
An explanation of the Z-track sources
We present an explanation of the Z-track phenomenon based on spectral fitting
of RXTE observations of GX340+0 using the emission model previously shown to
describe dipping LMXBs. In our Z-track model, the soft apex is a quiescent
state of the source with lowest luminosity. Moving away from this point by
ascending the normal branch the strongly increasing luminosity of the Accretion
Disc Corona (ADC) Comptonized emission L_ADC provides substantial evidence for
a large increase of mass accretion rate Mdot. There are major changes in the
neutron star blackbody emission, kT increasing to high values, the blackbody
radius R_BB decreasing, these changes continuing monotonically on both normal
and horizontal branches. The blackbody flux increases by a factor of ten to
three times the Eddington flux so that the physics of the horizontal branch is
dominated by the high radiation pressure of the neutron star, which we propose
disrupts the inner disc, and an increase of column density is detected. We
further propose that the very strong radiation pressure is responsible for the
launching of the jets detected in radio on the horizontal branch. On the
flaring branch, we find that L_ADC is constant, suggesting no change in Mdot so
that flaring must consist of unstable nuclear burning. At the soft apex, the
mass accretion rate per unit area on the neutron star m_dot is minimum for the
horizontal and normal branches and about equal to the theoretical upper limit
for unstable burning. Thus it is possible that unstable burning begins as soon
as the source arrives at this position, the onset being consistent with theory.
The large increase in R_BB in flaring is reminiscent of radius expansion in
X-ray bursts. Finally, in our model, Mdot does not increase monotonically along
the Z-track as often previously thought.
Comment: 14 pages, 8 figures, accepted for publication in Astronomy and Astrophysics.
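For a sense of the scale implied by "three times the Eddington flux", the Eddington luminosity of a neutron star can be evaluated from the standard formula L_Edd = 4*pi*G*M*m_p*c/sigma_T. This is a back-of-the-envelope sketch with textbook constants, not a calculation from the paper; the 1.4 solar-mass value is an assumed typical neutron star mass.

```python
# Back-of-the-envelope Eddington luminosity for an assumed 1.4 M_sun
# neutron star, L_Edd = 4*pi*G*M*m_p*c / sigma_T, in CGS units.
import math

G = 6.674e-8          # gravitational constant, cm^3 g^-1 s^-2
c = 2.998e10          # speed of light, cm/s
m_p = 1.673e-24       # proton mass, g
sigma_T = 6.652e-25   # Thomson cross-section, cm^2
M = 1.4 * 1.989e33    # assumed neutron star mass, g

L_edd = 4 * math.pi * G * M * m_p * c / sigma_T
print(f"L_Edd ~ {L_edd:.2e} erg/s")  # ~1.8e38 erg/s
```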
An analysis of the control hierarchy modeling of the CMS detector control system
The supervisory level of the Detector Control System (DCS) of the CMS experiment is implemented using Finite State Machines (FSMs), which model the behaviour and control the operations of all the sub-detectors and support services. The FSM tree of the whole CMS experiment consists of more than 30,000 nodes. An analysis of a system of this size is a complex task, but it is a crucial step towards improving the overall performance of the FSM system. This paper presents the analysis of the CMS FSM system using the micro Common Representation Language 2 (mCRL2) methodology. Individual mCRL2 models are obtained for the FSM systems of the CMS sub-detectors using the ASF+SDF automated translation tool. Different mCRL2 operations are applied to the mCRL2 models. An mCRL2 simulation tool is used to examine the system more closely, and visualization of the system, based on exploration of its state space, is enabled with an mCRL2 tool. Requirements such as command and state propagation are expressed in modal mu-calculus and checked using a model-checking algorithm. For checking local requirements such as freedom from endless loops, the Bounded Model Checking technique is applied. This paper discusses these analysis techniques and presents the results of their application to the CMS FSM system.
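The command and state propagation requirements mentioned above can be illustrated with a minimal sketch, in Python rather than mCRL2, of a hierarchical FSM tree: a command issued on a node reaches every child, and a parent's reported state summarizes its children's states. The node names and the three-value state set are hypothetical, not the CMS DCS schema.

```python
# Minimal sketch of command/state propagation in a control-hierarchy tree.
# Node names and the state set {ON, OFF, MIXED} are illustrative only.

class FsmNode:
    def __init__(self, name, children=None):
        self.name = name
        self.children = children or []
        self.state = "OFF"

    def command(self, cmd):
        # Command propagation: a command on a node reaches every child.
        for child in self.children:
            child.command(cmd)
        if not self.children:  # a leaf acts on the command itself
            self.state = "ON" if cmd == "SWITCH_ON" else "OFF"
        self.refresh()

    def refresh(self):
        # State propagation: a parent summarizes its children's states.
        if self.children:
            states = {c.state for c in self.children}
            self.state = states.pop() if len(states) == 1 else "MIXED"

# A tiny tree: one supervisor node over two sub-detector leaves.
root = FsmNode("CMS", [FsmNode("Tracker"), FsmNode("ECAL")])
root.command("SWITCH_ON")
print(root.state)  # ON: the command reached all leaves, states summarized up

root.children[0].state = "OFF"  # simulate a fault in one sub-detector
root.refresh()
print(root.state)  # MIXED: the parent state reflects inconsistent children
```

Properties like "after SWITCH_ON, every leaf is ON" are exactly the kind of requirement the paper expresses in modal mu-calculus and verifies by model checking over the full 30,000-node tree.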
Performance and Operation of the CMS Electromagnetic Calorimeter
The operation and general performance of the CMS electromagnetic calorimeter
using cosmic-ray muons are described. These muons were recorded after the
closure of the CMS detector in late 2008. The calorimeter is made of lead
tungstate crystals and the overall status of the 75848 channels corresponding
to the barrel and endcap detectors is reported. The stability of crucial
operational parameters, such as high voltage, temperature and electronic noise,
is summarised, and the performance of the light monitoring system is presented.
Next Generation Molecular Diagnosis of Hereditary Spastic Paraplegias: An Italian Cross-Sectional Study
Hereditary spastic paraplegia (HSP) refers to a group of genetically heterogeneous neurodegenerative motor neuron disorders characterized by progressive, age-dependent loss of corticospinal motor tract function, lower limb spasticity, and weakness. Recent clinical use of next generation sequencing (NGS) methodologies suggests that they facilitate the diagnostic approach to HSP, but the power of NGS as a first-tier diagnostic procedure is unclear. The larger-than-expected genetic heterogeneity (there are over 80 potential disease-associated genes) and the frequent overlap with other clinical conditions affecting the motor system make a molecular diagnosis in HSP cumbersome and time consuming. In a single-center, cross-sectional study spanning 4 years, 239 subjects with a clinical diagnosis of HSP underwent molecular screening of a large set of genes, using two different customized NGS panels. The latest version of our targeted sequencing panel (SpastiSure3.0) comprises 118 genes known to be associated with HSP. Using an in-house validated bioinformatics pipeline and several in silico tools to predict mutation pathogenicity, we obtained a positive diagnostic yield of 29% (70/239), whereas variants of unknown significance (VUS) were found in 86 patients (36%), and 83 cases remained unsolved. This study is among the largest screenings of consecutive HSP index cases enrolled in real-life clinical-diagnostic settings. Its results corroborate NGS as a modern first-step procedure for the molecular diagnosis of HSP. It also disclosed a significant number of new mutations in ultra-rare genes, expanding the clinical spectrum and genetic landscape of HSP, at least in Italy.
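The cohort breakdown reported in this abstract is internally consistent, as a quick arithmetic check shows; the numbers below are taken directly from the text.

```python
# Cohort arithmetic from the abstract: 239 screened HSP index cases.
total = 239
diagnosed = 70   # positive molecular diagnosis
vus = 86         # carriers of variants of unknown significance
unsolved = 83    # cases remaining unsolved

# The three outcome groups partition the cohort exactly.
assert diagnosed + vus + unsolved == total

print(f"diagnostic yield: {diagnosed / total:.0%}")  # 29%
print(f"VUS rate:         {vus / total:.0%}")        # 36%
```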