Thermal neutron background at Laboratorio Subterráneo de Canfranc (LSC)
The thermal neutron background at the Laboratorio Subterráneo de Canfranc (LSC) has been determined using several ³He proportional counter detectors. Bare and Cd-shielded counters were used in a series of long measurements. Pulse shape discrimination techniques were applied to separate neutron signals from gamma signals and other intrinsic contributions. Monte Carlo simulations allowed us to estimate the sensitivity of the detectors and to calculate the background flux of thermal neutrons inside Hall A of the LSC. The value obtained is (3.5 ± 0.8) × 10⁻⁶ n/(cm² s), within an order of magnitude of the values reported for similar facilities.

This work was supported partially by the Spanish Ministerio de Ciencia e Innovación and its Plan Nacional de I+D+i de Física de Partículas projects FPA2016-76765-P and FPA2018-096717-B-C21. The authors want to acknowledge the help provided by the staff at LSC in the preparation and support of this work.
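The bare/Cd-shielded pairing implements the classic cadmium-difference principle: cadmium absorbs thermal neutrons, so subtracting the shielded rate from the bare rate isolates the thermal component, which the Monte Carlo sensitivity then converts into a flux. A minimal Python sketch of that arithmetic follows; all numbers are hypothetical stand-ins, not the measured LSC rates or the real detector response.

```python
# Cadmium-difference flux estimate (illustrative numbers only).
bare_rate = 1.2e-4      # counts/s in a bare 3He counter (assumed)
cd_rate = 0.4e-4        # counts/s with the Cd shield: epithermal/fast only (assumed)
sensitivity_cm2 = 25.0  # counts/s per unit thermal flux, i.e. an effective
                        # area in cm^2 from Monte Carlo (assumed)

# Cd absorbs thermals, so bare-minus-shielded isolates the thermal rate.
thermal_rate = bare_rate - cd_rate
flux = thermal_rate / sensitivity_cm2  # thermal flux in n/(cm^2 s)
print(f"thermal neutron flux ~ {flux:.2e} n/(cm^2 s)")
```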
CLYC as a neutron detector in low background conditions
We report on thermal neutron flux measurements carried out at the Laboratorio Subterráneo de Canfranc (LSC) with two commercial 2″ × 2″ CLYC detectors. The measurements were performed as part of an experimental campaign at LSC with ³He detectors, aimed at establishing the sensitivity limits and the use of CLYCs in low background conditions. A careful characterization of the intrinsic α and γ-ray background in the detectors was required and was carried out with dedicated measurements. It was found that the α activities in the two CLYC crystals differ by a factor of three, and the use of Monte Carlo simulations and a Bayesian unfolding method allowed us to determine the specific α activities from the U and Th decay chains. The simulations and unfolding also revealed that the γ-ray background registered in the detectors is dominated by the intrinsic activity of the components of the detector, such as the aluminum housing and the photomultiplier, and that the activity within the crystal is low in comparison. The data from the neutron flux measurements with the two detectors were analyzed with two methodologies: one based on an innovative α/neutron pulse shape discrimination method and one based on the subtraction of the intrinsic α background that masks the neutron signals in the region of interest. The neutron sensitivity of the CLYCs was calculated by Monte Carlo simulations with MCNP6 and GEANT4. The resulting thermal neutron fluxes are in good agreement with the complementary flux measurements performed with ³He detectors, but close to the detection limit imposed by the intrinsic α activity.
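The paper's α/neutron discrimination method is its own innovation; as generic background, the most common pulse shape discrimination scheme in scintillators is charge comparison, where the ratio of the tail integral to the total integral of each waveform separates particle types. A hedged sketch, with the gate positions and threshold entirely assumed:

```python
import numpy as np

def psd_ratio(pulse: np.ndarray, tail_start: int) -> float:
    """Charge-comparison PSD parameter: tail integral over total integral."""
    total = pulse.sum()
    return pulse[tail_start:].sum() / total if total > 0 else 0.0

# Classify baseline-subtracted waveforms against a threshold calibrated on
# known neutron/alpha/gamma populations. Gate and threshold are assumed;
# the direction of the cut depends on the scintillator's decay components.
def tag_neutron_candidates(pulses, tail_start=40, threshold=0.25):
    return [psd_ratio(p, tail_start) > threshold for p in pulses]
```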
Pushing the high count rate limits of scintillation detectors for challenging neutron-capture experiments
One of the critical aspects for the accurate determination of neutron capture cross sections when combining time-of-flight and total energy detector techniques is the characterization and control of the systematic uncertainties associated with the measuring devices. In this work we explore the most conspicuous effects associated with harsh count rate conditions: dead-time and pile-up effects. Both effects, when not properly treated, can lead to large systematic uncertainties and bias in the determination of neutron cross sections. In the majority of neutron capture measurements carried out at the CERN n_TOF facility, the detectors of choice are the C₆D₆ liquid-based detectors, either in the form of large-volume cells or the recently commissioned sTED detector array, consisting of much smaller-volume modules. To account for the aforementioned effects, we introduce a Monte Carlo model of these detectors mimicking harsh count rate conditions similar to those occurring at the CERN n_TOF 20 m flight path vertical measuring station. The model parameters are extracted by comparison with experimental data taken at the same facility during the 2022 experimental campaign. We propose a novel methodology to consider both dead-time and pile-up effects simultaneously for these fast detectors, and we check its applicability on experimental data from ¹⁹⁷Au(n,γ), including the saturated 4.9 eV resonance, which is an important component of the normalization for neutron cross section measurements.
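The paper's Monte Carlo model is detector-specific; the textbook baseline it generalizes is easy to reproduce. The sketch below simulates Poisson arrivals through a non-paralyzable dead-time window and compares the surviving rate with the analytic expectation m = n/(1 + nτ). The rate and dead time are assumed round numbers, not the sTED or C₆D₆ values.

```python
import numpy as np

rng = np.random.default_rng(7)
rate_in = 2.0e6   # true interaction rate in 1/s (assumed harsh regime)
tau = 25e-9       # dead time per accepted event in s (assumed)
t_total = 1e-2    # simulated measuring time in s

# Homogeneous Poisson process: exponential inter-arrival times.
n_draw = int(rate_in * t_total * 1.2)
arrivals = np.cumsum(rng.exponential(1.0 / rate_in, n_draw))
arrivals = arrivals[arrivals < t_total]

accepted, dead_until = 0, -1.0
for t in arrivals:
    if t >= dead_until:   # non-paralyzable: events in the dead window are lost
        accepted += 1
        dead_until = t + tau

measured = accepted / t_total
expected = rate_in / (1.0 + rate_in * tau)  # analytic non-paralyzable result
print(f"measured {measured:.3e} /s  vs  analytic {expected:.3e} /s")
```

Pile-up, which the paper treats simultaneously with dead time, would additionally merge events closer together than the pulse resolving time into single distorted signals; that coupling is what requires the full detector-response model.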
Innovation through Artificial Intelligence in Triage Systems for Resource Optimization in Future Pandemics
Artificial intelligence (AI) systems are already being used in various healthcare areas, and they can likewise offer many advantages in hospital emergency services. The objective of this work is to demonstrate that, through the novel use of AI, a trained system can be developed to detect patients at potential risk of infection in a new pandemic more quickly than standardized triage systems. This identification would occur in the emergency department, thus allowing for the early implementation of organizational preventive measures to block the chain of transmission. Materials and Methods: In this study, we propose the use of a machine learning system in emergency department triage during pandemics to detect the patients at highest risk of death and infection, using the COVID-19 era as an example, when rapid decision making and comprehensive support became increasingly crucial. All patients who consecutively presented to the emergency department were included, and more than 89 variables were automatically analyzed using the extreme gradient boosting (XGB) algorithm. Results: The XGB system achieved the highest balanced accuracy, 91.61%, and it obtained results more quickly than traditional triage systems. The variables that most influenced mortality prediction were procalcitonin level, age, and oxygen saturation, followed by lactate dehydrogenase (LDH) level, C-reactive protein, the presence of interstitial infiltrates on chest X-ray, and D-dimer. Our system also identified the importance of oxygen therapy in these patients. Conclusions: These results highlight that XGB is a useful and novel tool for triage systems to guide the care pathway in future pandemics, following the example set by the COVID-19 pandemic.
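The pipeline described, gradient-boosted trees over tabular clinical variables, scored by balanced accuracy and interpreted via feature importances, maps directly onto standard tooling. A minimal sketch follows, using synthetic stand-in data rather than the study's cohort; the column names, model hyperparameters, and label construction are all assumptions for illustration.

```python
import numpy as np
from sklearn.metrics import balanced_accuracy_score
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic stand-in data: the study analyzed >89 routinely collected ED
# variables; here, 7 made-up columns loosely named after the top predictors
# (procalcitonin, age, SpO2, LDH, CRP, infiltrates on X-ray, D-dimer).
rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 7))
y = ((X[:, 0] + 0.5 * X[:, 1] - 0.7 * X[:, 2]
      + rng.normal(scale=0.5, size=n)) > 1.0).astype(int)  # toy mortality label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
model = XGBClassifier(n_estimators=300, max_depth=4,
                      learning_rate=0.05, eval_metric="logloss")
model.fit(X_tr, y_tr)

# Balanced accuracy handles the class imbalance typical of mortality outcomes;
# feature importances indicate which variables drive the prediction.
print("balanced accuracy:", balanced_accuracy_score(y_te, model.predict(X_te)))
print("feature importances:", model.feature_importances_)
```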
Status report of the n_TOF facility after the 2nd CERN long shutdown period
During the second long shutdown period of the CERN accelerator complex (LS2, 2019-2021), several upgrade activities took place at the n_TOF facility. The most important was the replacement of the spallation target with a next-generation nitrogen-cooled lead target. Additionally, a new experimental area at a very short distance from the target assembly (the NEAR Station) was established. In this paper, the core commissioning actions of the new installations are described. The improvement of the n_TOF infrastructure was accompanied by several detector development projects. All these upgrade actions are discussed, focusing mostly on the future perspectives of the n_TOF facility. Furthermore, some indicative current and future measurements are briefly reported.
The n_TOF facility at CERN
The neutron time-of-flight facility (n_TOF) is an innovative facility with three experimental areas, in operation at CERN since 2001. In this paper the n_TOF facility is described, together with its upgrade during the CERN Long Shutdown 2. The main features of the detectors used for capture and fission cross section measurements are presented, along with perspectives for future measurements.