
    A Predictive Algorithm For Wetlands In Deep Time Paleoclimate Models

    Methane is a powerful greenhouse gas produced in wetland environments via microbial action in anaerobic conditions. If the location and extent of wetlands are unknown, such as for the Earth many millions of years in the past, a model of wetland fraction is required in order to calculate methane emissions and thus help reduce uncertainty in the understanding of past warm greenhouse climates. Here we present an algorithm for predicting inundated wetland fraction for use in calculating wetland methane emission fluxes in deep time paleoclimate simulations. The algorithm determines, for each grid cell in a given paleoclimate simulation, the wetland fraction predicted by a nearest-neighbours search of modern-day data in a space described by a set of environmental, climate and vegetation variables. To explore this approach, we first test it for a modern-day climate with variables obtained from observations and then for an Eocene climate with variables derived from a fully coupled global climate model (HadCM3BL-M2.2). Two independent dynamic vegetation models were used to provide two sets of equivalent vegetation variables, which yielded two different wetland predictions. As a first test, the method, using both vegetation models, satisfactorily reproduces modern-day wetland fraction at a coarse grid resolution similar to those used in paleoclimate simulations. We then applied the method to an early Eocene climate, testing its outputs against the locations of Eocene coal deposits. We predict a global mean monthly wetland fraction area for the early Eocene of 8 to 10 × 10⁶ km² with a corresponding total annual methane flux of 656 to 909 Tg, depending on which of two different dynamic global vegetation models is used to model wetland fraction and methane emission rates. Both values are significantly higher than estimates for the modern day of 4 × 10⁶ km² and around 190 Tg (Poulter et al., 2017; Melton et al., 2013).
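    The prediction step described above is essentially a k-nearest-neighbour lookup: each paleoclimate grid cell is matched to the modern-day cells that are most similar in the space of environmental, climate and vegetation variables, and their observed wetland fractions are combined. The sketch below illustrates one plausible implementation; the feature standardisation, the value of k and the simple averaging are assumptions for illustration, not the paper's exact configuration.

```python
# Minimal sketch of a nearest-neighbour wetland-fraction lookup.
# Variable names, k, and the averaging rule are illustrative assumptions.
import numpy as np

def predict_wetland_fraction(paleo_features, modern_features, modern_wetland_frac, k=5):
    """For each paleoclimate grid cell, find the k modern-day cells closest in the
    (standardised) predictor space and return the mean of their wetland fractions."""
    # Standardise each predictor so no single variable dominates the distance.
    mu = modern_features.mean(axis=0)
    sigma = modern_features.std(axis=0) + 1e-12
    modern = (modern_features - mu) / sigma
    paleo = (paleo_features - mu) / sigma

    predictions = np.empty(len(paleo))
    for i, cell in enumerate(paleo):
        dist = np.linalg.norm(modern - cell, axis=1)   # distance to every modern cell
        nearest = np.argsort(dist)[:k]                 # indices of the k nearest cells
        predictions[i] = modern_wetland_frac[nearest].mean()
    return predictions

# Example with random stand-in data: 1000 modern cells, 200 paleo cells, 6 predictors.
rng = np.random.default_rng(0)
modern_X = rng.normal(size=(1000, 6))
modern_f = rng.uniform(0.0, 1.0, size=1000)
paleo_X = rng.normal(size=(200, 6))
print(predict_wetland_fraction(paleo_X, modern_X, modern_f)[:5])
```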

    Higher Order, Hybrid BEM/FEM Methods Applied to Antenna Modeling

    In this presentation, the authors address topics relevant to higher order modeling using hybrid BEM/FEM formulations. The first of these is the limitation on convergence rates imposed by geometric modeling errors in the analysis of scattering by a dielectric sphere. The second topic is the application of an Incomplete LU Threshold (ILUT) preconditioner to solve the linear system resulting from the BEM/FEM formulation. The final topic is the application of the higher order BEM/FEM formulation to antenna modeling problems. The authors have previously presented work on the benefits of higher order modeling. To achieve these benefits, special attention is required in the integration of singular and near-singular terms arising in the surface integral equation. Several methods for handling these terms have been presented. It is also well known that achieving the high rates of convergence afforded by higher order bases may also require the employment of higher order geometry models. A number of publications have described the use of quadratic elements to model curved surfaces. The authors have shown, in an EFIE formulation applied to scattering by a PEC sphere, that quadratic order elements may be insufficient to prevent the domination of modeling errors. In fact, on a PEC sphere with radius r = 0.58 λ₀, a quartic order geometry representation was required to obtain a convergence benefit from quadratic bases when compared to the convergence rate achieved with linear bases. Initial trials indicate that, for a dielectric sphere of the same radius, requirements on the geometry model are not as severe as for the PEC sphere. The authors will present convergence results for higher order bases as a function of the geometry model order in the hybrid BEM/FEM formulation applied to dielectric spheres. It is well known that the system matrix resulting from the hybrid BEM/FEM formulation is ill-conditioned. For many real applications, a good preconditioner is required to obtain usable convergence from an iterative solver. The authors have examined the use of an Incomplete LU Threshold (ILUT) preconditioner to solve linear systems stemming from higher order BEM/FEM formulations in 2D scattering problems. Although the resulting preconditioner provided an excellent approximation to the system inverse, its size in terms of non-zero entries represented only a modest improvement when compared with the fill-in associated with a sparse direct solver. Furthermore, the fill-in of the preconditioner could not be substantially reduced without the occurrence of instabilities. In addition to the results for these 2D problems, the authors will present iterative solution data from the application of the ILUT preconditioner to 3D problems.
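    As a rough illustration of the preconditioning strategy discussed above, the sketch below applies SciPy's incomplete-LU factorization with a drop threshold as a preconditioner for GMRES on a stand-in sparse system. The test matrix, drop tolerance and fill factor are arbitrary assumptions; a real hybrid BEM/FEM system matrix would come from the formulation itself, and this is not the authors' code.

```python
# Illustrative ILU-preconditioned iterative solve on a stand-in sparse system.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 2000
rng = np.random.default_rng(1)
# Stand-in poorly conditioned sparse matrix (assumed, not a BEM/FEM matrix).
A = (sp.random(n, n, density=5.0 / n, random_state=1)
     + sp.diags(1e-3 + rng.random(n))).tocsc()
b = rng.random(n)

# Incomplete LU with a drop threshold: larger drop_tol -> sparser, cheaper factors.
ilu = spla.spilu(A, drop_tol=1e-4, fill_factor=10)
M = spla.LinearOperator((n, n), matvec=ilu.solve)

x_prec, info_prec = spla.gmres(A, b, M=M, maxiter=200)   # preconditioned GMRES
x_none, info_none = spla.gmres(A, b, maxiter=200)        # unpreconditioned, for comparison
print("preconditioned residual:  ", np.linalg.norm(A @ x_prec - b))
print("unpreconditioned residual:", np.linalg.norm(A @ x_none - b))
```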

    Issues and Methods Concerning the Evaluation of Hypersingular and Near-Hypersingular Integrals in BEM Formulations

    It is known that higher order modeling of the sources and the geometry in Boundary Element Modeling (BEM) formulations is essential to highly efficient computational electromagnetics. However, in order to achieve the benefits of higher order basis and geometry modeling, the singular and near-singular terms arising in BEM formulations must be integrated accurately. In particular, the accurate integration of near-singular terms, which occur when observation points are near but not on source regions of the scattering object, has been considered one of the remaining limitations on the computational efficiency of integral equation methods. The method of singularity subtraction has been used extensively for the evaluation of singular and near-singular terms. Piecewise integration of the source terms in this manner, while manageable for bases of constant and linear orders, becomes unwieldy and prone to error for bases of higher order. Furthermore, we find that the singularity subtraction method is not conducive to object-oriented programming practices, particularly in the context of multiple operators. To extend the capabilities, accuracy, and maintainability of general-purpose codes, the subtraction method is being replaced in favor of purely numerical quadrature schemes. These schemes employ singularity cancellation methods in which a change of variables is chosen such that the Jacobian of the transformation cancels the singularity. An example of the singularity cancellation approach is the Duffy method, which has two major drawbacks: 1) in the resulting integrand, it produces an angular variation about the singular point that becomes nearly singular for observation points close to an edge of the parent element, and 2) it appears not to work well when applied to nearly singular integrals. Recently, the authors have introduced the transformation $u(x') = \sinh^{-1}\left(x'/\sqrt{y'^2 + z^2}\right)$ for integrating functions of the form $I = \int_D \Lambda(\mathbf{r}')\,\frac{e^{-jkR}}{4\pi R}\,dD$, where $\Lambda(\mathbf{r}')$ is a vector or scalar basis function and $R = \sqrt{x'^2 + y'^2 + z^2}$ is the distance between source and observation points. This scheme has all of the advantages of the Duffy method while avoiding the disadvantages listed above. In this presentation we will survey similar approaches for handling singular and near-singular terms for kernels with $1/R^2$-type behavior, addressing potential pitfalls and offering techniques to efficiently handle special cases.
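    The reason the sinh⁻¹ substitution works is that, with x' = z sinh(u), the Jacobian dx' = R du exactly cancels the 1/R kernel, leaving a smooth integrand. The one-dimensional sketch below is an illustrative check of this behaviour (not the authors' code): it compares plain Gauss-Legendre quadrature with the same rule applied after the change of variables, for an observation point very close to the source segment.

```python
# 1D illustration of singularity cancellation via u = asinh(x/z):
# with x = z*sinh(u), dx = sqrt(x^2 + z^2) du cancels the 1/R kernel.
import numpy as np
from scipy.integrate import quad

z = 1e-3                          # observation point very close to the source segment
g = lambda x: 1.0 - x**2          # stand-in smooth (quadratic) basis function
R = lambda x: np.sqrt(x**2 + z**2)

# Reference value from adaptive quadrature, told about the near-singular point.
ref, _ = quad(lambda x: g(x) / R(x), -1.0, 1.0, points=[0.0], limit=200)

nodes, weights = np.polynomial.legendre.leggauss(16)

# (a) Gauss-Legendre directly in x over [-1, 1]: misses the sharp 1/R peak.
plain = np.sum(weights * g(nodes) / R(nodes))

# (b) Same 16-point rule in u on [asinh(-1/z), asinh(1/z)]: the 1/R factor is gone.
a, b = np.arcsinh(-1.0 / z), np.arcsinh(1.0 / z)
u = 0.5 * (b - a) * nodes + 0.5 * (b + a)
transformed = 0.5 * (b - a) * np.sum(weights * g(z * np.sinh(u)))

print(f"reference   = {ref:.8f}")
print(f"plain GL    = {plain:.8f}  (error {abs(plain - ref):.2e})")
print(f"sinh-mapped = {transformed:.8f}  (error {abs(transformed - ref):.2e})")
```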

    Effects of High Charge Densities in Multi-GEM Detectors

    A comprehensive study, supported by systematic measurements and numerical computations, of the intrinsic limits of multi-GEM detectors when exposed to very high particle fluxes or operated at very large gains is presented. The observed variations of the gain, of the ion back-flow, and of the pulse height spectra are explained in terms of the effects of the spatial distribution of positive ions and their movement throughout the amplification structure. The intrinsic dynamic character of the processes involved imposes the use of a non-standard simulation tool for the interpretation of the measurements. Computations done with a Finite Element Analysis software reproduce the observed behaviour of the detector. The impact of this detailed description of the detector in extreme conditions is multiple: it clarifies some detector behaviours already observed, it helps in defining intrinsic limits of the GEM technology, and it suggests ways to extend them. Comment: 5 pages, 6 figures, 2015 IEEE Nuclear Science Symposium

    Correcting the NLRP3 inflammasome deficiency in macrophages from autoimmune NZB mice with exon skipping antisense oligonucleotides

    Inflammasomes are molecular complexes activated by infection and cellular stress, leading to caspase-1 activation and subsequent interleukin-1β (IL-1β) processing and cell death. The autoimmune NZB mouse strain does not express NLRP3, a key inflammasome initiator mediating responses to a wide variety of stimuli including endogenous danger signals, environmental irritants and a range of bacterial, fungal and viral pathogens. We have previously identified an intronic point mutation in the Nlrp3 gene from NZB mice that generates a splice acceptor site. This leads to inclusion of a pseudoexon that introduces an early termination codon and is proposed to be the cause of NLRP3 inflammasome deficiency in NZB cells. Here we have used exon skipping antisense oligonucleotides (AONs) to prevent aberrant splicing of Nlrp3 in NZB macrophages, and this restored both NLRP3 protein expression and NLRP3 inflammasome activity. Thus, the single point mutation leading to aberrant splicing is the sole cause of NLRP3 inflammasome deficiency in NZB macrophages. The NZB mouse provides a model for addressing a splicing defect in macrophages and could be used to further investigate AON design and delivery of AONs to macrophages in vivo

    Charge Transfer Properties Through Graphene Layers in Gas Detectors

    Graphene is a single layer of carbon atoms arranged in a honeycomb lattice with remarkable mechanical, electrical and optical properties. For the first time, graphene layers suspended on copper meshes were installed into a gas detector equipped with a gaseous electron multiplier. Measurements of low energy electron and ion transfer through graphene were conducted. In this paper we describe the sample preparation for suspended graphene layers and the testing procedures, and we discuss the preliminary results, followed by a prospect of further applications. Comment: 2014 IEEE Nuclear Science Symposium and Medical Imaging Conference with the 21st Symposium on Room-Temperature Semiconductor X-Ray and Gamma-Ray Detectors, 4 pages, 8 figures

    Charge Transfer Properties Through Graphene for Applications in Gaseous Detectors

    Graphene is a single layer of carbon atoms arranged in a honeycomb lattice with remarkable mechanical and electrical properties. Regarded as the thinnest and narrowest conductive mesh, it has drastically different transmission behaviours when bombarded with electrons and ions in vacuum. This property, if confirmed in gas, may be a definitive solution for the ion back-flow problem in gaseous detectors. In order to ascertain this aspect, graphene layers of dimensions of about 2 × 2 cm², grown on a copper substrate, are transferred onto a flat metal surface with holes, so that the graphene layer is freely suspended. The graphene and the support are installed into a gaseous detector equipped with a triple Gaseous Electron Multiplier (GEM), and the transparency properties to electrons and ions are studied in gas as a function of the electric fields. The techniques to produce the graphene samples are described, and we report on preliminary tests of graphene-coated GEMs. Comment: 4 pages, 3 figures, 13th Pisa Meeting on Advanced Detectors

    Joining the conspiracy? Negotiating ethics and emotions in researching (around) AIDS in southern Africa

    AIDS is an emotive subject, particularly in southern Africa. Among those who have been directly affected by the disease, or who perceive themselves to be personally at risk, talking about AIDS inevitably arouses strong emotions - amongst them fear, distress, loss and anger. Conventionally, human geography research has avoided engagement with such emotions. Although the ideal of the detached observer has been roundly critiqued, the emphasis in methodological literature on 'doing no harm' has led even qualitative researchers to avoid difficult emotional encounters. Nonetheless, research is inevitably shaped by emotions, not least those of the researchers themselves. In this paper, we examine the role of emotions in the research process through our experiences of researching the lives of 'Young AIDS migrants' in Malawi and Lesotho. We explore how the context of the research gave rise to the production of particular emotions, and how, in response, we shaped the research, presenting a research agenda focused more on migration than AIDS. This example reveals a tension between universalised ethics expressed through ethical research guidelines that demand informed consent, and ethics of care, sensitive to emotional context. It also demonstrates how dualistic distinctions between reason and emotion, justice and care, global and local are unhelpful in interpreting the ethics of research practice

    A Very Intense Neutrino Super Beam Experiment for Leptonic CP Violation Discovery based on the European Spallation Source Linac: A Snowmass 2013 White Paper

    Very intense neutrino beams and large neutrino detectors will be needed in order to enable the discovery of CP violation in the leptonic sector. We propose to use the proton linac of the European Spallation Source, currently under construction in Lund, Sweden, to deliver, in parallel with the spallation neutron production, a very intense, cost-effective and high-performance neutrino beam. The baseline program for the European Spallation Source linac is that it will be fully operational at 5 MW average power by 2022, producing 2 GeV, 2.86 ms long proton pulses at a rate of 14 Hz. Our proposal is to upgrade the linac to 10 MW average power and 28 Hz, producing 14 pulses/s for neutron production and 14 pulses/s for neutrino production. Furthermore, because of the high current required in the pulsed neutrino horn, the length of the pulses used for neutrino production needs to be compressed to a few μs with the aid of an accumulator ring. A long baseline experiment using this Super Beam and a megaton underground Water Cherenkov detector located in existing mines 300-600 km from Lund will make it possible to discover leptonic CP violation at the 5σ significance level in up to 50% of the leptonic Dirac CP-violating phase range. This experiment could also determine the neutrino mass hierarchy at a significance level of more than 3σ if this issue has not already been settled by other experiments by then. The mass hierarchy performance could be increased by combining the neutrino beam results with those obtained from atmospheric neutrinos detected by the same large volume detector. This detector will also be used to measure the proton lifetime, detect cosmological neutrinos and neutrinos from supernova explosions. Results on the sensitivity to leptonic CP violation and the neutrino mass hierarchy are presented. Comment: 28 pages
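    As a back-of-envelope consistency check of the quoted beam parameters, the average beam power, proton energy and pulse repetition rate fix the number of protons delivered per pulse. The short calculation below is illustrative only and not taken from the paper.

```python
# Back-of-envelope check of the quoted ESS linac beam parameters:
# P_avg = E_proton * N_per_pulse * f  =>  N_per_pulse = P_avg / (E_proton * f)
E_PROTON_J = 2e9 * 1.602176634e-19     # 2 GeV proton kinetic energy in joules

def protons_per_pulse(power_watts, rep_rate_hz):
    return power_watts / (E_PROTON_J * rep_rate_hz)

baseline = protons_per_pulse(5e6, 14)    # 5 MW, 14 Hz (baseline neutron program)
upgrade = protons_per_pulse(10e6, 28)    # 10 MW, 28 Hz (proposed upgrade)

print(f"baseline: {baseline:.2e} protons per 2.86 ms pulse")
print(f"upgrade : {upgrade:.2e} protons per pulse "
      f"({upgrade * 14:.2e} protons/s available for neutrino production)")
```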