
    Boron Nitride Monolayer: A Strain-Tunable Nanosensor

    The influence of triaxial in-plane strain on the electronic properties of a hexagonal boron nitride sheet is investigated using density functional theory. Unlike in graphene, the triaxial strain localizes the molecular orbitals of the boron nitride flake in its center, depending on the direction of the applied strain. The proposed technique for localizing the molecular orbitals close to the Fermi level in the center of boron nitride flakes can be used to realize engineered nanosensors, for instance to selectively detect gas molecules. We show that the central part of the strained flake adsorbs polar molecules more strongly than an unstrained sheet.
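
    A minimal sketch of how a triaxial in-plane strain of the kind considered here can be imposed on flake coordinates. The displacement field u = (2cxy, c(x² − y²)) is the standard triaxial parametrization used for graphene-like lattices; the amplitude, coordinates, and function name below are illustrative assumptions, not values or code from the paper.

    import numpy as np

    def apply_triaxial_strain(coords, c):
        # coords: (N, 2) array of in-plane x, y positions (e.g. Angstrom)
        # c: strain amplitude (1/length); its sign selects the strain direction
        # Triaxial displacement field: u_x = 2c*x*y, u_y = c*(x^2 - y^2)
        x, y = coords[:, 0], coords[:, 1]
        displacement = np.column_stack([2.0 * c * x * y, c * (x**2 - y**2)])
        return coords + displacement

    # Hypothetical flake atoms and an arbitrary strain amplitude
    flake = np.array([[0.0, 0.0], [1.45, 0.0], [0.0, 2.51]])
    print(apply_triaxial_strain(flake, c=0.005))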

    Electron Cloud and Beam Scrubbing in the LHC

    An adequate dose of photoelectrons, accelerated by low-intensity proton bunches and hitting the LHC beam screen wall, will substantially reduce secondary emission and avoid the fast build-up of an electron cloud for the nominal LHC beam. The conditioning period of the liner surface can be considerably shortened thanks to secondary electrons, provided heat load and beam stability can be kept under control; for example, this may be possible using a special proton beam, including satellite bunches with an intensity of 15–20% of the nominal bunch intensity and a spacing of one or two RF wavelengths. Based on recent measurements of secondary electron emission, on multipacting tests, and on simulation results, we discuss possible "beam scrubbing" scenarios in the LHC and present an update.

    Beam-Induced Electron Cloud in the LHC and Possible Remedies

    Synchrotron radiation from proton bunches in the LHC creates photoelectrons at the beam screen wall. These photoelectrons are accelerated towards the positively charged proton bunch and drift across the beam pipe between successive bunches. When they hit the opposite wall, they generate secondary electrons, which can in turn be accelerated by the next bunch if they are slow enough to survive. We summarize the results of an intensive research program set up at CERN and discuss recent multipacting tests as well as the importance of several key parameters, such as photon reflectivity, photoelectron and secondary electron yield. Then, based on analytic estimates and simulation results, we discuss possible solutions to avoid the fast build-up of an electron cloud with potential implications for beam stability and heat load on the cryogenic system.
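
    The bunch-by-bunch multiplication described above can be illustrated with a deliberately simplified build-up model: each bunch passage seeds a fixed number of photoelectrons, and the surviving cloud is multiplied by an effective secondary emission yield. All parameter values and names are placeholders for illustration, not LHC numbers from this study.

    def electron_cloud_buildup(n_bunches, seed_per_bunch, delta_eff, survival):
        # seed_per_bunch: photoelectrons created by synchrotron radiation per bunch passage
        # delta_eff: effective secondary electron yield per wall impact
        # survival: fraction of slow secondaries that survive until the next bunch arrives
        population, history = 0.0, []
        for _ in range(n_bunches):
            population = population * delta_eff * survival + seed_per_bunch
            history.append(population)
        return history

    # delta_eff * survival > 1 gives the fast exponential build-up discussed above
    print(electron_cloud_buildup(10, seed_per_bunch=1e6, delta_eff=1.6, survival=0.8)[-1])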

    Monte Carlo radiative transfer for the nebular phase of Type Ia supernovae

    We extend the range of validity of the ARTIS 3D radiative transfer code up to hundreds of days after explosion, when Type Ia supernovae (SNe Ia) are in their nebular phase. To achieve this, we add a non-local thermodynamic equilibrium population and ionization solver, a new multifrequency radiation field model, and a new atomic data set with forbidden transitions. We treat collisions with non-thermal leptons resulting from nuclear decays to account for their contribution to excitation, ionization, and heating. We validate our method with a variety of tests, including comparing our synthetic nebular spectra for the well-known one-dimensional W7 model with the results of other studies. As an illustrative application of the code, we present synthetic nebular spectra for the detonation of a sub-Chandrasekhar white dwarf (WD) in which the possible effects of gravitational settling of 22Ne prior to explosion have been explored. Specifically, we compare synthetic nebular spectra for a 1.06 M☉ WD model in which 5.5 Gyr of very efficient settling is assumed with those of a similar model without settling. We find that this degree of 22Ne settling has only a modest effect on the resulting nebular spectra due to increased 58Ni abundance. Due to the high ionization in sub-Chandrasekhar models, the nebular [Ni II] emission remains negligible, while the [Ni III] line strengths are increased and the overall ionization balance is slightly lowered in the model with 22Ne settling. In common with previous studies of sub-Chandrasekhar models at nebular epochs, these models overproduce [Fe III] emission relative to [Fe II] in comparison to observations of normal SNe Ia.
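
    As a toy illustration of what an ionization solver of this kind balances (not the ARTIS implementation), the sketch below computes steady-state ionization fractions for one element from per-stage ionization rates, which could include a non-thermal contribution, and recombination rates. All rate values are invented placeholders.

    import numpy as np

    def ionization_fractions(gamma, alpha, n_e):
        # gamma: total ionization rate out of each stage j -> j+1 (s^-1), length K-1
        # alpha: recombination rate coefficient j+1 -> j (cm^3 s^-1), length K-1
        # n_e: electron number density (cm^-3)
        # Balances gamma_j * n_j = alpha_j * n_e * n_{j+1} for adjacent stages.
        ratios = np.asarray(gamma) / (np.asarray(alpha) * n_e)   # n_{j+1} / n_j
        stages = np.concatenate([[1.0], np.cumprod(ratios)])
        return stages / stages.sum()

    # Placeholder rates; the first stage includes an extra non-thermal ionization term
    print(ionization_fractions(gamma=[3e-9 + 1e-9, 1e-9], alpha=[3e-12, 1e-12], n_e=1e6))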

    The Democratic Biopolitics of PrEP

    PrEP (Pre-Exposure Prophylaxis) is a relatively new drug-based HIV prevention technique and an important means to lower the HIV risk of gay men, who are especially vulnerable to HIV. From the perspective of biopolitics, PrEP inscribes itself in a larger trend of medicalization and the rise of pharmapower. This article reconstructs and evaluates contemporary literature on biopolitical theory as it applies to PrEP, bringing it into dialogue with a mapping of the political debate on PrEP. Because PrEP changes sexual norms and subjectification, for example condom use and its meaning for gay subjectivity, it is highly contested. The article shows that the debate on PrEP can best be described with the concepts of ‘sexual-somatic ethics’ and ‘democratic biopolitics’, which I develop based on the biopolitical approach of Nikolas Rose and Paul Rabinow. In contrast, interpretations of PrEP that follow governmentality studies or Italian Theory amount to either far-fetched or trivial positions on PrEP when seen in light of the political debate. Furthermore, the article contributes to the scholarship on gay subjectivity, highlighting how homophobia and homonormativity haunt gay sex even in liberal environments, and how PrEP can serve as an entry point for the destigmatization of gay sexuality and the transformation of gay subjectivity. ‘Biopolitical democratization’ entails making explicit how medical technology and health care relate to sexual subjectification and ethics, strengthening the voice of (potential) PrEP users in health politics, and renegotiating the profit and power of Big Pharma.

    Different methodological approaches to the assessment of in vivo efficacy of three artemisinin-based combination antimalarial treatments for the treatment of uncomplicated falciparum malaria in African children.

    BACKGROUND: Use of different methods for assessing the efficacy of artemisinin-based combination antimalarial treatments (ACTs) will result in different estimates being reported, with implications for changes in treatment policy. METHODS: Data from different in vivo studies of ACT treatment of uncomplicated falciparum malaria were combined in a single database. Efficacy at day 28 corrected by PCR genotyping was estimated using four methods. In the first two methods, failure rates were calculated as proportions with either (1a) reinfections excluded from the analysis (standard WHO per-protocol analysis) or (1b) reinfections considered as treatment successes. In the second two methods, failure rates were estimated using the Kaplan-Meier product-limit formula using either (2a) WHO (2001) definitions of failure, or (2b) failure defined using parasitological criteria only. RESULTS: Data analysed represented 2926 patients from 17 studies in nine African countries. Three ACTs were studied: artesunate-amodiaquine (AS+AQ, N = 1702), artesunate-sulphadoxine-pyrimethamine (AS+SP, N = 706) and artemether-lumefantrine (AL, N = 518). Using method (1a), the day 28 failure rates ranged from 0% to 39.3% for AS+AQ treatment, from 1.0% to 33.3% for AS+SP treatment and from 0% to 3.3% for AL treatment. The median [range] differences in point estimates between method 1a (reference) and the others were: (i) method 1b = 1.3% [0 to 24.8], (ii) method 2a = 1.1% [0 to 21.5], and (iii) method 2b = 0% [-38 to 19.3]. The standard per-protocol method (1a) tended to overestimate the risk of failure when compared to alternative methods using the same endpoint definitions (methods 1b and 2a). It either overestimated or underestimated the risk when endpoints based on parasitological rather than clinical criteria were applied. The standard method was also associated with a 34% reduction in the number of patients evaluated compared to the number of patients enrolled. Only 2% of the sample size was lost when failures were classified on the first day of parasite recurrence and survival analytical methods were used. CONCLUSION: The primary purpose of an in vivo study should be to provide a precise estimate of the risk of antimalarial treatment failure due to drug resistance. Use of survival analysis is the most appropriate way to estimate failure rates, with parasitological recurrence classified as treatment failure on the day it occurs.
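
    A minimal sketch contrasting the two families of estimates described above: a per-protocol proportion that drops censored patients, and a Kaplan-Meier product-limit estimate in which reinfections and losses to follow-up are censored on the day they occur. The follow-up data below are invented for illustration only.

    import numpy as np

    def km_failure_rate(day, event):
        # day: day of failure or censoring for each patient
        # event: 1 for recrudescence (treatment failure), 0 for censoring (e.g. reinfection)
        day, event = np.asarray(day), np.asarray(event)
        surviving = 1.0
        for t in np.unique(day[event == 1]):
            at_risk = np.sum(day >= t)                     # still under follow-up at day t
            failures = np.sum((day == t) & (event == 1))
            surviving *= 1.0 - failures / at_risk          # product-limit step
        return 1.0 - surviving

    # Invented cohort: failures on days 14 and 21, reinfections censored on days 17 and 24
    days = [28, 14, 17, 28, 21, 28, 24, 28]
    events = [0, 1, 0, 0, 1, 0, 0, 0]
    print(f"Kaplan-Meier day-28 failure risk: {km_failure_rate(days, events):.1%}")
    # A per-protocol proportion over the same data would instead exclude the censored patients.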

    Combination of electroweak and QCD corrections to single W production at the Fermilab Tevatron and the CERN LHC

    Precision studies of the production of a high-transverse-momentum lepton in association with missing energy at hadron colliders require that electroweak and QCD higher-order contributions are simultaneously taken into account in theoretical predictions and data analysis. Here we present a detailed phenomenological study of the impact of electroweak and strong contributions, as well as of their combination, on all the observables relevant for the various facets of the pp̄/pp → lepton + X physics programme at hadron colliders, including luminosity monitoring, constraints on Parton Distribution Functions, W precision physics, and searches for new physics signals. We provide a theoretical recipe for carefully combining electroweak and strong corrections, which is mandatory in view of the challenging experimental accuracy already reached at the Fermilab Tevatron and aimed at the CERN LHC, and discuss the uncertainty inherent in the combination. We conclude that the theoretical accuracy of our calculation can be conservatively estimated to be about 2% for standard event selections at the Tevatron and the LHC, and about 5% in the very high W transverse mass/lepton transverse momentum tails. We also provide arguments for a more aggressive error estimate (about 1% and 3%, respectively) and conclude that, in order to attain one per cent accuracy: (1) exact mixed O(α α_s) corrections should be computed in addition to the already available NNLO QCD contributions and two-loop electroweak Sudakov logarithms; (2) QCD and electroweak corrections should be coherently included in a single event generator.
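
    As an illustration of the kind of combination recipe referred to above (not the specific prescription of this paper), a common approach is to compare an additive and a factorized combination of the QCD and electroweak corrections and to treat their spread as part of the theoretical uncertainty, since it is of the order of the uncomputed mixed O(α α_s) terms. The cross-section numbers below are placeholders.

    def combine_corrections(sigma_lo, sigma_qcd, sigma_ew):
        # sigma_lo: leading-order cross section in a given bin
        # sigma_qcd: QCD-corrected cross section; sigma_ew: EW-corrected cross section
        delta_qcd = sigma_qcd / sigma_lo - 1.0
        delta_ew = sigma_ew / sigma_lo - 1.0
        additive = sigma_lo * (1.0 + delta_qcd + delta_ew)
        factorized = sigma_lo * (1.0 + delta_qcd) * (1.0 + delta_ew)
        # The additive-factorized difference estimates the size of the mixed corrections
        return additive, factorized, abs(additive - factorized)

    # Placeholder cross sections (pb) for one W transverse-mass bin
    print(combine_corrections(sigma_lo=10.0, sigma_qcd=11.5, sigma_ew=9.7))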

    Methods for biogeochemical studies of sea ice: The state of the art, caveats, and recommendations

    Over the past two decades, with recognition that the ocean’s sea-ice cover is neither insensitive to climate change nor a barrier to light and matter, research in sea-ice biogeochemistry has accelerated significantly, bringing together a multi-disciplinary community from a variety of fields. This disciplinary diversity has contributed a wide range of methodological techniques and approaches to sea-ice studies, complicating comparisons of the results and the development of conceptual and numerical models to describe the important biogeochemical processes occurring in sea ice. Almost all chemical elements, compounds, and biogeochemical processes relevant to Earth system science are measured in sea ice, with published methods available for determining biomass, pigments, net community production, primary production, bacterial activity, macronutrients, numerous natural and anthropogenic organic compounds, trace elements, reactive and inert gases, sulfur species, the carbon dioxide system parameters, stable isotopes, and water-ice-atmosphere fluxes of gases, liquids, and solids. For most of these measurements, multiple sampling and processing techniques are available, but to date there has been little intercomparison or intercalibration between methods. In addition, researchers collect different types of ancillary data and document their samples differently, further confounding comparisons between studies. These problems are compounded by the heterogeneity of sea ice, in which even adjacent cores can have dramatically different biogeochemical compositions. We recommend that, in future investigations, researchers design their programs based on nested sampling patterns, collect a core suite of ancillary measurements, and employ a standard approach for sample identification and documentation. In addition, intercalibration exercises are most critically needed for measurements of biomass, primary production, nutrients, dissolved and particulate organic matter (including exopolymers), the CO2 system, air-ice gas fluxes, and aerosol production. We also encourage the development of in situ probes robust enough for long-term deployment in sea ice, particularly for biological parameters, the CO2 system, and other gases.

    This manuscript is a product of SCOR working group 140 on Biogeochemical Exchange Processes at Sea-Ice Interfaces (BEPSII); we thank BEPSII chairs Jacqueline Stefels and Nadja Steiner and SCOR executive director Ed Urban for their practical and moral support of this endeavour. This manuscript was first conceived at an EU COST Action 735 workshop held in Amsterdam in April 2011; in addition to COST 735, we thank the other participants of the “methods” break-out group at that meeting, namely Gerhard Dieckmann, Christoph Garbe, and Claire Hughes. Our editors, Steve Ackley and Jody Deming, and our reviewers, Mats Granskog and two anonymous reviewers, provided invaluable advice that not only identified and helped fill in some gaps, but also suggested additional ways to make what is by nature a rather dry subject (methods) at least a bit more interesting and accessible. We also thank the librarians at the Institute of Ocean Sciences for their unflagging efforts to track down the more obscure references we required. Finally, and most importantly, we thank everyone who has braved the unknown and made the new measurements that have helped build sea-ice biogeochemistry into the robust and exciting field it has become.

    This is the final published article, originally published in Elementa: Science of the Anthropocene, 3: 000038, doi: 10.12952/journal.elementa.000038.

    Can a “state of the art” chemistry transport model simulate Amazonian tropospheric chemistry?

    We present an evaluation of a nested high-resolution Goddard Earth Observing System (GEOS)-Chem chemistry transport model simulation of tropospheric chemistry over tropical South America. The model has been constrained with two isoprene emission inventories: (1) the canopy-scale Model of Emissions of Gases and Aerosols from Nature (MEGAN) and (2) a leaf-scale algorithm coupled to the Lund-Potsdam-Jena General Ecosystem Simulator (LPJ-GUESS) dynamic vegetation model, and the model has been run using two different chemical mechanisms that contain alternative treatments of isoprene photo-oxidation. Large differences of up to 100 Tg C yr^(−1) exist between the isoprene emissions predicted by each inventory, with MEGAN emissions generally higher. Based on our simulations we estimate that tropical South America (30–85°W, 14°N–25°S) contributes about 15–35% of total global isoprene emissions. We have quantified the model sensitivity to changes in isoprene emissions, chemistry, boundary layer mixing, and soil NO_x emissions using ground-based and airborne observations. We find GEOS-Chem has difficulty reproducing several observed chemical species; typically hydroxyl concentrations are underestimated, whilst mixing ratios of isoprene and its oxidation products are overestimated. The magnitude of the model formaldehyde (HCHO) columns is most sensitive to the choice of chemical mechanism and isoprene emission inventory. We find GEOS-Chem exhibits a significant positive bias (10–100%) when compared with HCHO columns from the Scanning Imaging Absorption Spectrometer for Atmospheric Chartography (SCIAMACHY) and the Ozone Monitoring Instrument (OMI) for the study year 2006. Simulations that use the more detailed chemical mechanism and/or the lowest isoprene emissions provide the best agreement with the satellite data, since they result in lower HCHO columns.
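
    A minimal sketch of the bias statistic implied by the comparison above: the mean relative difference between co-located model and satellite HCHO columns. The column values are invented, and the function is an assumption for illustration, not part of GEOS-Chem or the satellite retrieval software.

    import numpy as np

    def mean_relative_bias(model_columns, satellite_columns):
        # Mean relative bias (%) of model columns against co-located satellite columns
        model = np.asarray(model_columns, dtype=float)
        sat = np.asarray(satellite_columns, dtype=float)
        return 100.0 * np.mean((model - sat) / sat)

    # Invented co-located HCHO columns (molecules cm^-2)
    model = [1.2e16, 1.5e16, 0.9e16]
    omi = [1.0e16, 1.1e16, 0.8e16]
    print(f"Model vs OMI bias: {mean_relative_bias(model, omi):+.0f}%")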

    Electron Cloud Effects in the CERN SPS and LHC

    Electron cloud effects have been recently observed in the CERN SPS in the presence of LHC-type proton beams with 25 ns bunch spacing. Above a threshold intensity of about 4 × 10^12 protons in 81 consecutive bunches, corresponding to half of the nominal