Come back Marshall, all is forgiven? : Complexity, evolution, mathematics and Marshallian exceptionalism
Marshall was the great synthesiser of neoclassical economics. Yet with his qualified assumption of self-interest, his emphasis on variation in economic evolution and his cautious attitude to the use of mathematics, Marshall differs fundamentally from other leading neoclassical contemporaries. Metaphors inspire more specific analogies and ontological assumptions, and Marshall used the guiding metaphor of Spencerian evolution. But unfortunately, the further development of a Marshallian evolutionary approach was undermined in part by theoretical problems within Spencer's theory. Yet some things can be salvaged from the Marshallian evolutionary vision. They may even be placed in a more viable Darwinian framework.
Search for supersymmetry with a dominant R-parity violating LQDbar coupling in e+e- collisions at centre-of-mass energies of 130 GeV to 172 GeV
A search for pair-production of supersymmetric particles under the assumption
that R-parity is violated via a dominant LQDbar coupling has been performed
using the data collected by ALEPH at centre-of-mass energies of 130-172 GeV.
The observed candidate events in the data are in agreement with the Standard
Model expectation. This result is translated into lower limits on the masses of
charginos, neutralinos, sleptons, sneutrinos and squarks. For instance, for
m_0=500 GeV/c^2 and tan(beta)=sqrt(2) charginos with masses smaller than 81
GeV/c^2 and neutralinos with masses smaller than 29 GeV/c^2 are excluded at the
95% confidence level for any generation structure of the LQDbar coupling.
Developing a predictive modelling capacity for a climate change-vulnerable blanket bog habitat: Assessing 1961-1990 baseline relationships
Aim: Understanding the spatial distribution of high priority habitats and
developing predictive models using climate and environmental variables to
replicate these distributions are desirable conservation goals. The aim of this
study was to model and elucidate the contributions of climate and topography to
the distribution of a priority blanket bog habitat in Ireland, and to examine how
this might inform the development of a climate change predictive capacity for
peat-lands in Ireland.
Methods: Ten climatic and two topographic variables were recorded for grid
cells with a spatial resolution of 10 × 10 km, covering 87% of the mainland
land surface of Ireland. Presence-absence data were matched to these variables
and generalised linear models (GLMs) fitted to identify the main climatic and
terrain predictor variables for occurrence of the habitat. Candidate predictor
variables were screened for collinearity, and the accuracy of the final fitted GLM
was evaluated using fourfold cross-validation based on the area under the curve
(AUC) derived from a receiver operating characteristic (ROC) plot. The GLM
predicted habitat occurrence probability maps were mapped against the actual
distributions using GIS techniques.
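The modelling workflow described above can be sketched as follows: a logistic GLM (binomial family) for presence/absence, preceded by a collinearity screen of the candidate predictors and evaluated with fourfold cross-validated AUC. This is a minimal illustrative sketch with synthetic data; the variable names (temperature, precipitation, elevation range) follow the study's predictors, but the values and coefficients are invented for demonstration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 400

# Illustrative predictors per grid cell (synthetic, not the study's data)
temp = rng.normal(9.0, 1.5, n)         # mean annual temperature, deg C
precip = rng.normal(1400, 300, n)      # total annual precipitation, mm
elev_range = rng.normal(250, 100, n)   # elevation range, m

X = np.column_stack([temp, precip, elev_range])

# Synthetic presence/absence: habitat favoured by cool, wet, high-relief cells
logit = -0.8 * (temp - 9.0) + 0.004 * (precip - 1400) + 0.01 * (elev_range - 250)
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logit))).astype(int)

# Screen candidate predictors for pairwise collinearity before fitting
corr = np.corrcoef(X, rowvar=False)
max_r = np.abs(corr[np.triu_indices(3, k=1)]).max()
print("max |pairwise r|:", round(float(max_r), 3))

# Fourfold cross-validated AUC (area under the ROC curve) of the fitted GLM
model = LogisticRegression()
auc = cross_val_score(model, X, y, cv=4, scoring="roc_auc")
print("mean cross-validated AUC:", round(float(auc.mean()), 3))
```

In practice the collinearity screen would compare every pair of the ten climatic variables (e.g. dropping one of each highly correlated temperature/precipitation pair), which is the step the Results paragraph says was needed.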
Results: Despite the apparent parsimony of the initial GLM using only climatic
variables, further testing indicated collinearity among temperature and precipitation
variables for example. Subsequent elimination of the collinear variables and
inclusion of elevation data produced an excellent performance based on the AUC
scores of the final GLM. Mean annual temperature and total mean annual
precipitation in combination with elevation range were the most powerful
explanatory variable group among those explored for the presence of blanket
bog habitat.
Main conclusions: The results confirm that this habitat distribution in general
can be modelled well using the non-collinear climatic and terrain variables tested
at the grid resolution used. Mapping the GLM-predicted distribution to the
observed distribution produced useful results in replicating the projected
occurrence of the habitat distribution over an extensive area. The methods
developed will usefully inform future climate change predictive modelling for
Ireland.
Sensitivity of a tonne-scale NEXT detector for neutrinoless double-beta decay searches
The Neutrino Experiment with a Xenon TPC (NEXT) searches for the neutrinoless double-beta (0νββ) decay of 136Xe using high-pressure xenon gas TPCs with electroluminescent amplification. A scaled-up version of this technology with about 1 tonne of enriched xenon could reach, in less than 5 years of operation, a sensitivity to the half-life of 0νββ decay better than 10^27 years, improving the current limits by at least one order of magnitude. This prediction is based on a well-understood background model dominated by radiogenic sources. The detector concept presented here represents a first step on a compelling path towards sensitivity to the parameter space defined by the inverted ordering of neutrino masses, and beyond.
Radio frequency and DC high voltage breakdown of high pressure helium, argon, and xenon
Motivated by the possibility of guiding daughter ions from double beta decay events to single-ion sensors for barium tagging, the NEXT collaboration is developing a program of R&D to test radio frequency (RF) carpets for ion transport in high pressure xenon gas. This would require carpet functionality in regimes at higher pressures than have been previously reported, implying correspondingly larger electrode voltages than in existing systems. This mode of operation appears plausible for contemporary RF-carpet geometries due to the higher predicted breakdown strength of high pressure xenon relative to low pressure helium, the working medium in most existing RF carpet devices. In this paper we present the first measurements of the high voltage dielectric strength of xenon gas at high pressure and at the relevant RF frequencies for ion transport (in the 10 MHz range), as well as new DC and RF measurements of the dielectric strengths of high pressure argon and helium gases at small gap sizes. We find breakdown voltages that are compatible with stable RF carpet operation given the gas, pressure, voltage, materials and geometry of interest.
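DC breakdown measurements like those described above are conventionally compared against the classical Paschen-law estimate, which gives breakdown voltage as a function of the pressure-gap product. The sketch below is illustrative only: the abstract does not state a Paschen fit, and the constants A, B (Townsend-coefficient fit parameters) and the secondary-emission coefficient gamma are approximate textbook values for argon, not the paper's measured parameters.

```python
import math

def paschen_breakdown_v(p_torr, d_cm, A, B, gamma):
    """Classical Paschen curve: V_b = B*p*d / (ln(A*p*d) - ln(ln(1 + 1/gamma)))."""
    pd = p_torr * d_cm
    denom = math.log(A * pd) - math.log(math.log(1.0 + 1.0 / gamma))
    if denom <= 0:
        return float("inf")  # left of the Paschen minimum: no breakdown at this pd
    return B * pd / denom

# Approximate textbook constants for argon: A in 1/(cm*Torr), B in V/(cm*Torr);
# gamma is an assumed illustrative secondary-emission coefficient.
A_ar, B_ar, gamma = 13.6, 235.0, 0.01

# Small 1 mm gap, at roughly 1 atm and roughly 10 bar
for p_torr in (760, 7600):
    v = paschen_breakdown_v(p_torr, 0.1, A_ar, B_ar, gamma)
    print(p_torr, "Torr ->", round(v), "V")
```

The qualitative point this captures is the one the abstract relies on: at fixed gap, the predicted breakdown voltage rises with pressure, which is why operation in high pressure xenon can tolerate the larger RF-carpet electrode voltages.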
Boosting background suppression in the NEXT experiment through Richardson-Lucy deconvolution
Next-generation neutrinoless double beta decay experiments aim for half-life sensitivities of ~10^27 yr, requiring backgrounds to be suppressed to < 1 count/tonne/yr. For this, any extra background rejection handle, beyond excellent energy resolution and the use of extremely radiopure materials, is of utmost importance. The NEXT experiment exploits differences in the spatial ionization patterns of double beta decay and single-electron events to discriminate signal from background. While the former display two dense ionization regions (Bragg peaks) at the opposite ends of the track, the latter typically have only one such feature. Thus, comparing the energies at the track extremes provides an additional rejection tool. The unique combination of the topology-based background discrimination and excellent energy resolution (1% FWHM at the Q-value of the decay) is the distinguishing feature of NEXT. Previous studies demonstrated a topological background rejection factor of ~5 when reconstructing electron-positron pairs in the 208Tl 1.6 MeV double escape peak (with Compton events as background), recorded in the NEXT-White demonstrator at the Laboratorio Subterráneo de Canfranc, with 72% signal efficiency. This was recently improved through the use of a deep convolutional neural network to yield a background rejection factor of ~10 with 65% signal efficiency. Here, we present a new reconstruction method, based on the Richardson-Lucy deconvolution algorithm, which allows reversing the blurring induced by electron diffusion and electroluminescence light production in the NEXT TPC. The new method yields highly refined 3D images of reconstructed events, and, as a result, significantly improves the topological background discrimination. When applied to real-data 1.6 MeV e-e+ pairs, it leads to a background rejection factor of 27 at 57% signal efficiency.
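The Richardson-Lucy algorithm named above is a standard iterative deconvolution: given an observed signal and a known point-spread function (PSF), it alternately forward-blurs the current estimate, compares it with the data, and applies a multiplicative correction. The 1-D sketch below illustrates the idea on a toy track with two sharp "Bragg peak" deposits blurred by a Gaussian; the PSF width and iteration count are illustrative, not NEXT's actual detector response (which is 3-D and measured).

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=50):
    """Iteratively estimate the un-blurred signal given a known PSF.
    Update rule: u <- u * ( (d / (u conv P)) conv P_mirrored )."""
    psf_mirror = psf[::-1]
    estimate = np.full_like(observed, observed.mean())  # flat positive start
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)   # avoid divide-by-zero
        estimate *= np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Toy truth: two sharp energy deposits at the track ends
truth = np.zeros(200)
truth[60] = 1.0
truth[140] = 1.0

# Gaussian PSF standing in for diffusion + EL light spread
psf = np.exp(-0.5 * (np.arange(-25, 26) / 8.0) ** 2)
psf /= psf.sum()
observed = np.convolve(truth, psf, mode="same")

restored = richardson_lucy(observed, psf)
# Deconvolution re-sharpens the two peaks near their true positions
print(int(np.argmax(restored[:100])), int(np.argmax(restored[100:])) + 100)
```

This re-sharpening of the track extremes is exactly what makes the blob-energy comparison at the two ends of the track a stronger discriminant after deconvolution.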
A Compact Dication Source for Ba Tagging and Heavy Metal Ion Sensor Development
We present a tunable metal ion beam that delivers controllable ion currents
in the picoamp range for testing of dry-phase ion sensors. Ion beams are formed
by sequential atomic evaporation and single or multiple electron impact
ionization, followed by acceleration into a sensing region. Controllability of
the ionic charge state is achieved through tuning of electrode potentials that
influence the retention time in the ionization region. Barium, lead, and cobalt
samples have been used to test the system, with ion currents identified and
quantified using a quadrupole mass analyzer. Realization of a clean
ion beam within a bench-top system represents an important
technical advance toward the development and characterization of barium tagging
systems for neutrinoless double beta decay searches in xenon gas. This system
also provides a testbed for investigation of novel ion sensing methodologies
for environmental assay applications, with dication beams of Pb and
Cd also demonstrated for this purpose.
Search for supersymmetry in the photon(s) plus missing energy channels at √s = 161 GeV and 172 GeV
Searches for supersymmetric particles in channels with one or more photons and missing energy have been performed with data collected by the ALEPH detector at LEP. The data consist of 11.1 pb^-1 at √s = 161 GeV, 1.1 pb^-1 at 170 GeV and 9.5 pb^-1 at 172 GeV. The e+e- → νν̄γ(γ) cross section is measured. The data are in good agreement with predictions based on the Standard Model, and are used to set upper limits on the cross sections for anomalous photon production. These limits are compared to two different SUSY models and used to set limits on the neutralino mass. A limit of 71 GeV/c^2 at 95% C.L. is set on the mass of the lightest neutralino (for lifetimes up to 3 ns) for the gauge-mediated supersymmetry breaking and LNZ models.
Chemicals regulation and precaution: does REACH really incorporate the precautionary principle?
Predictors of restenosis after percutaneous coronary intervention using bare-metal stents: a comparison between patients with and without dysglycemia
The objective of this study was to identify intravascular ultrasound (IVUS), angiographic and metabolic parameters related to restenosis in patients with dysglycemia. Seventy consecutive patients (77 lesions) selected according to inclusion and exclusion criteria were evaluated by the oral glucose tolerance test and the determination of insulinemia after a successful percutaneous coronary intervention (PCI) with a bare-metal stent. The degree of insulin resistance was calculated by the homeostasis model assessment of insulin resistance (HOMA-IR). Six-month IVUS and angiogram follow-up were performed. Thirty-nine patients (55.7%) had dysglycemia. The restenosis rate in the dysglycemic group was 37.2 vs 23.5% in the euglycemic group (P = 0.299). The predictors of restenosis using bivariate analysis were reference vessel diameter (RVD): ≤2.93 mm (RR = 0.54; 95%CI = 0.05-0.78; P = 0.048), stent area (SA): <8.91 mm² (RR = 0.66; 95%CI = 0.24-0.85; P = 0.006), stent volume (SV): <119.75 mm³ (RR = 0.74; 95%CI = 0.38-0.89; P = 0.0005), HOMA-IR: >2.063 (RR = 0.44; 95%CI = 0.14-0.64; P = 0.027), and fasting plasma glucose (FPG): ≤108.8 mg/dL (RR = 0.53; 95%CI = 0.13-0.75; P = 0.046). SV was an independent predictor of restenosis by multivariable analysis. Dysglycemia is a common clinical condition in patients undergoing PCI. The degree of insulin resistance, FPG, RVD, SA, and SV were correlated with restenosis. SV was inversely correlated with restenosis and was an independent predictor of restenosis in patients treated with a bare-metal stent.
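The HOMA-IR index used above is computed from fasting glucose and fasting insulin; the conventional formula in the units reported in this abstract (glucose in mg/dL, insulin in µU/mL) divides their product by 405. The sample values below are illustrative, not patient data from the study.

```python
def homa_ir(fasting_glucose_mg_dl, fasting_insulin_uu_ml):
    """Homeostasis model assessment of insulin resistance (US units)."""
    return fasting_glucose_mg_dl * fasting_insulin_uu_ml / 405.0

# Illustrative patient near the study's restenosis-predictive cutoff of 2.063
print(round(homa_ir(100.0, 8.5), 3))  # 100 * 8.5 / 405 ≈ 2.099
```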