66 research outputs found

    A Method for Measuring Desquamation and its Use for Assessing the Effects of Some Common Exfoliants

    Desquamation has been measured in the past by a counting-chamber technique after corneocytes are removed from the skin surface and disaggregated in a dilute surfactant solution. However, we have found that complete corneocyte disaggregation is not always possible when corneocyte aggregates are recovered from sites where patent peeling is induced. Corneocyte counting in such instances is difficult or impossible. We have devised a method of measuring desquamation wherein the desquamating cells are quantified as the total alkali-soluble protein after they are removed from the skin surface with an inert, self-hardening gel. Highly reproducible desquamation rates are obtained, characteristic of the individual subject. Using some common exfoliants, we show that the pharmacologic response, observed as an increase in desquamation rate, is also an individual characteristic.
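    The method above reduces to simple arithmetic once protein is measured: desquamation rate is protein mass recovered per unit skin area per unit time, and the exfoliant response is a fold change over baseline. A minimal sketch, with all numbers and units purely illustrative (not taken from the study):

```python
# Desquamation quantified as alkali-soluble protein recovered from a known
# skin area over a known interval. All values below are illustrative.
def desquamation_rate(protein_ug: float, area_cm2: float, days: float) -> float:
    """Desquamation rate in micrograms of protein per cm^2 per day."""
    return protein_ug / (area_cm2 * days)

baseline = desquamation_rate(42.0, 4.0, 7.0)   # hypothetical untreated site
treated = desquamation_rate(63.0, 4.0, 7.0)    # hypothetical exfoliant site
print(round(treated / baseline, 2))            # fold increase -> 1.5
```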

    Deconvoluting Post-Transplant Immunity: Cell Subset-Specific Mapping Reveals Pathways for Activation and Expansion of Memory T, Monocytes and B Cells

    A major challenge for the field of transplantation is the lack of understanding of the genomic and molecular drivers of early post-transplant immunity. The early immune response creates a complex milieu that determines the course of ensuing immune events and the ultimate outcome of the transplant. The objective of the current study was to mechanistically deconvolute the early immune response by purifying and profiling the constituent cell subsets of the peripheral blood. We employed genome-wide profiling of whole blood and purified CD4 T cells, CD8 T cells, B cells and monocytes in tandem with high-throughput laser-scanning cytometry in 10 kidney transplant recipients sampled serially pre-transplant and at 1, 2, 4, 8 and 12 weeks post-transplant. Cytometry confirmed early cell subset depletion by antibody induction and immunosuppression. Multiple markers revealed the activation and proliferative expansion of CD45RO+CD62L− effector memory CD4/CD8 T cells as well as progressive activation of monocytes and B cells. Next, we mechanistically deconvoluted early post-transplant immunity by serial monitoring of whole blood using DNA microarrays. Parallel analysis of cell subset-specific gene expression revealed a unique spectrum of time-dependent changes and functional pathways. Gene expression profiling results were validated with 157 different probesets matching all 65 antigens detected by cytometry. Thus, serial blood cell monitoring reflects the profound changes in blood cell composition and immune activation early post-transplant. Each cell subset reveals distinct pathways and functional programs. These changes illuminate a complex, early phase of immunity and inflammation that includes activation and proliferative expansion of the memory effector and regulatory cells that may determine the phenotype and outcome of the kidney transplant.

    LSST: from Science Drivers to Reference Design and Anticipated Data Products

    (Abridged) We describe here the most ambitious survey currently planned in the optical, the Large Synoptic Survey Telescope (LSST). A vast array of science will be enabled by a single wide-deep-fast sky survey, and LSST will have unique survey capability in the faint time domain. The LSST design is driven by four main science themes: probing dark energy and dark matter, taking an inventory of the Solar System, exploring the transient optical sky, and mapping the Milky Way. LSST will be a wide-field ground-based system sited at Cerro Pachón in northern Chile. The telescope will have an 8.4 m (6.5 m effective) primary mirror, a 9.6 deg² field of view, and a 3.2 Gigapixel camera. The standard observing sequence will consist of pairs of 15-second exposures in a given field, with two such visits in each pointing in a given night. With these repeats, the LSST system is capable of imaging about 10,000 square degrees of sky in a single filter in three nights. The typical 5σ point-source depth in a single visit in r will be ~24.5 (AB). The project is in the construction phase and will begin regular survey operations by 2022. The survey area will be contained within 30,000 deg² with δ < +34.5°, and will be imaged multiple times in six bands, ugrizy, covering the wavelength range 320–1050 nm. About 90% of the observing time will be devoted to a deep-wide-fast survey mode which will uniformly observe an 18,000 deg² region about 800 times (summed over all six bands) during the anticipated 10 years of operations, and yield a coadded map to r ~ 27.5. The remaining 10% of the observing time will be allocated to projects such as a Very Deep and Fast time domain survey.
    The goal is to make LSST data products, including a relational database of about 32 trillion observations of 40 billion objects, available to the public and scientists around the world.
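    The single-visit and coadded depths quoted above are linked by the standard stacking rule for background-limited, uncorrelated Gaussian noise: the magnitude limit improves by 1.25 log₁₀(N) for N coadded visits. A minimal sketch of that relation; the per-band visit count used below is illustrative, not a figure from the abstract:

```python
import math

def coadded_depth(m_single: float, n_visits: int) -> float:
    """Point-source depth after stacking n_visits exposures, assuming
    background-limited, uncorrelated Gaussian noise: the flux limit
    improves as sqrt(N), i.e. 2.5*log10(sqrt(N)) = 1.25*log10(N) mag."""
    return m_single + 1.25 * math.log10(n_visits)

# Single-visit 5-sigma depth r ~ 24.5 (from the abstract); the r-band
# visit count here is a hypothetical share of the ~800 total visits.
print(round(coadded_depth(24.5, 184), 2))  # -> 27.33, close to r ~ 27.5
```

The small shortfall relative to the quoted r ~ 27.5 is expected: the rule ignores systematic effects and the exact visit allocation, which the abstract does not specify.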

    Shattered pellet injection experiments at JET in support of the ITER disruption mitigation system design

    A series of experiments has been executed at JET to assess the efficacy of the newly installed shattered pellet injection (SPI) system in mitigating the effects of disruptions. Issues important for the ITER disruption mitigation system, such as thermal load mitigation, avoidance of runaway electron (RE) formation, radiation asymmetries during thermal quench mitigation, electromagnetic load control and RE energy dissipation, have been addressed over a large parameter range. The efficiency of the mitigation has been examined for the various SPI strategies. The paper summarises the results from these JET SPI experiments and discusses their implications for the ITER disruption mitigation scheme.

    Disruption prediction at JET through deep convolutional neural networks using spatiotemporal information from plasma profiles

    In view of future high-power nuclear fusion experiments, the early identification of disruptions is a mandatory requirement, and presently the main goal is moving from disruption mitigation to disruption avoidance and control. In this work, a deep convolutional neural network (CNN) is proposed to provide early detection of disruptive events at JET. The CNN's ability to learn relevant features, avoiding hand-engineered feature extraction, has been exploited to extract the spatiotemporal information from 1D plasma profiles. The model is trained with regularly terminated discharges and the automatically selected disruptive phases of disruptions, coming from the recent ITER-like-wall experiments. The prediction performance is evaluated using a set of discharges representative of different operating scenarios, and an in-depth analysis is made to evaluate the performance evolution with respect to the considered experimental conditions. Finally, as real-time triggers and termination schemes are being developed at JET, the proposed model has been tested on a set of recent experiments dedicated to plasma termination for disruption avoidance and mitigation. The CNN model demonstrates very high performance, and the exploitation of 1D plasma profiles as model input allows us to understand the underlying physical phenomena behind the predictor's decisions.
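    The model class described here (convolutions over radial plasma profiles feeding a disruption score) can be sketched in a few lines. This is a toy stand-in, not the JET predictor: channel counts, filter shapes and the random weights are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D-profile classifier: stacked radial profiles (e.g. Te, ne) pass
# through a 1D convolution, ReLU, global average pooling and a sigmoid
# that scores disruption probability. Shapes are illustrative only.
n_channels, n_radial = 2, 32                      # profiles x radial points
profiles = rng.standard_normal((n_channels, n_radial))
kernel = rng.standard_normal((4, n_channels, 5)) * 0.1  # 4 filters, width 5
w_out = rng.standard_normal(4) * 0.1              # output weights

def conv1d(x, k):
    """Valid-mode multi-channel 1D convolution (naive loops for clarity)."""
    n_f, _, width = k.shape
    out_len = x.shape[1] - width + 1
    out = np.zeros((n_f, out_len))
    for f in range(n_f):
        for i in range(out_len):
            out[f, i] = np.sum(k[f] * x[:, i:i + width])
    return out

feat = np.maximum(conv1d(profiles, kernel), 0.0)  # ReLU feature maps
pooled = feat.mean(axis=1)                        # global average pool
score = 1.0 / (1.0 + np.exp(-pooled @ w_out))     # disruption score in (0, 1)
print(feat.shape, 0.0 < score < 1.0)
```

A real predictor would be trained on labelled discharges and would stack profiles over time as well as radius to capture the spatiotemporal information the abstract refers to.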

    Overview of JET results for optimising ITER operation

    The JET 2019–2020 scientific and technological programme exploited the results of years of concerted scientific and engineering work, including the ITER-like wall (ILW: Be wall and W divertor) installed in 2010, improved diagnostic capabilities now fully available, a major neutral beam injection upgrade providing record power in 2019–2020, and the tested technical and procedural preparations for safe operation with tritium. Research along three complementary axes yielded a wealth of new results. Firstly, the JET plasma programme delivered scenarios suitable for high fusion power and alpha particle (α) physics in the coming D–T campaign (DTE2), with record sustained neutron rates, as well as plasmas for clarifying the impact of isotope mass on plasma core, edge and plasma-wall interactions, and for ITER pre-fusion power operation. The efficacy of the newly installed shattered pellet injector for mitigating disruption forces and runaway electrons was demonstrated. Secondly, research on the consequences of long-term exposure to JET-ILW plasma was completed, with emphasis on wall damage and fuel retention, and with analyses of wall materials and dust particles that will help validate assumptions and codes for the design and operation of ITER and DEMO. Thirdly, the nuclear technology programme, aiming to deliver maximum technological return from operations in D, T and D–T, benefited from the highest D–D neutron yield in years, securing results for validating radiation transport and activation codes, and nuclear data for ITER.

    New H-mode regimes with small ELMs and high thermal confinement in the Joint European Torus

    New H-mode regimes with high confinement, low core impurity accumulation, and small edge-localized mode perturbations have been obtained in magnetically confined plasmas at the Joint European Torus tokamak. Such regimes are achieved by means of optimized particle fueling conditions at high input power, current, and magnetic field, which lead to a self-organized state with a strong increase in rotation and ion temperature and a decrease in the edge density. An interplay between core and edge plasma regions leads to reduced turbulence levels and outward impurity convection. These results pave the way to an attractive alternative to the standard plasmas considered for fusion energy generation in a tokamak with a metallic wall environment such as the one expected in ITER.

    The role of ETG modes in JET-ILW pedestals with varying levels of power and fuelling

    We present the results of GENE gyrokinetic calculations based on a series of JET ITER-like-wall (ILW) type I ELMy H-mode discharges operating with similar experimental inputs but at different levels of power and gas fuelling. We show that turbulence due to electron-temperature-gradient (ETG) modes produces a significant amount of heat flux in four JET-ILW discharges and, when combined with neoclassical simulations, is able to reproduce the experimental heat flux for the two low-gas pulses. The simulations plausibly reproduce the high-gas heat fluxes as well, although power balance analysis is complicated by short ELM cycles. By independently varying the normalised temperature gradient (ω_Te) and normalised density gradient (ω_ne) around their experimental values, we demonstrate that it is the ratio of these two quantities, η_e = ω_Te/ω_ne, that determines the location of the peak in the ETG growth rate and heat flux spectra. The heat flux increases rapidly as η_e increases above the experimental point, suggesting that ETGs limit the temperature gradient in these pulses. When quantities are normalised using the minor radius, only increases in ω_Te produce appreciable increases in the ETG growth rates, as well as the largest increases in turbulent heat flux, which follow scalings similar to that of critical balance theory. However, when the heat flux is normalised to the electron gyro-Bohm heat flux using the temperature gradient scale length L_Te, it follows a linear trend, in correspondence with previous work by different authors.
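    The controlling quantity in this abstract, η_e = ω_Te/ω_ne, is the ratio of normalised logarithmic gradients, ω_X = a/L_X with 1/L_X = −d(ln X)/dr. A minimal sketch of how it is computed from profiles; the exponential profile shapes below are illustrative assumptions, not JET data:

```python
import numpy as np

# Normalised gradients: omega_X = a/L_X with 1/L_X = -d(ln X)/dr,
# and eta_e = omega_Te / omega_ne. Profiles below are illustrative.
a = 1.0                               # minor radius (normalisation length)
r = np.linspace(0.0, 1.0, 201)        # radial grid
Te = np.exp(-2.0 * r)                 # electron temperature: a/L_Te = 2
ne = np.exp(-0.5 * r)                 # electron density:     a/L_ne = 0.5

def omega(profile, r, a=1.0):
    """Normalised gradient a/L_X = -a * d(ln X)/dr on the grid."""
    return -a * np.gradient(np.log(profile), r)

eta_e = omega(Te, r) / omega(ne, r)
print(round(float(eta_e.mean()), 3))  # -> 4.0 for these profiles
```

For these analytic profiles η_e = 2/0.5 = 4 everywhere, illustrating why steepening the temperature gradient alone (raising ω_Te at fixed ω_ne) pushes η_e, and hence the ETG drive, upward.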

    Performance Comparison of Machine Learning Disruption Predictors at JET

    Reliable disruption prediction (DP) and disruption mitigation systems are considered unavoidable during International Thermonuclear Experimental Reactor (ITER) operations and in view of the next fusion reactors such as the DEMOnstration Power Plant (DEMO) and the China Fusion Engineering Test Reactor (CFETR). In the last two decades, a great number of DP systems have been developed using data-driven methods. The performance of DP models has improved over the years, both through more appropriate choices of diagnostics and input features and through the availability of increasingly powerful data-driven modelling techniques. However, a direct comparison among the proposals has not yet been conducted. Such a comparison is mandatory, at least for the same device, to learn lessons from all these efforts and finally choose the best set of diagnostic signals and the best modelling approach. A first effort towards this goal is made in this paper, where different DP models are compared using the same performance indices and the same device. In particular, the performance of a conventional Multilayer Perceptron Neural Network (MLP-NN) model is compared with those of two more sophisticated models, based on Generative Topographic Mapping (GTM) and Convolutional Neural Networks (CNN), on the same real-time diagnostic signals from several experiments at the JET tokamak. The most common performance indices have been used to compare the different DP models, and the results are discussed in depth. The comparison confirms the soundness of all the investigated machine learning approaches and the chosen diagnostics, enables us to highlight the pros and cons of each model, and helps to consciously choose the approach that best matches the plasma protection needs.
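    The "most common performance indices" for disruption predictors are per-discharge counting statistics: success rate on disruptive discharges (alarms raised with enough warning time), missed-alarm rate, and false-alarm rate on regularly terminated discharges. A minimal sketch; the discharge records and the 30 ms minimum warning time are invented for illustration, not taken from the paper:

```python
# Each record: (is_disruptive, alarm_raised, warning_time_s). All values
# below are illustrative; a real comparison would use JET discharge data.
discharges = [
    (True,  True,  0.350),   # detected well before the disruption
    (True,  True,  0.005),   # detected, but too late to act
    (True,  False, None),    # missed alarm
    (False, False, None),    # correct non-alarm
    (False, True,  None),    # false alarm
]

T_MIN = 0.030  # assumed minimum useful warning time (30 ms)

disruptive = [d for d in discharges if d[0]]
safe = [d for d in discharges if not d[0]]
successful = sum(1 for d in disruptive if d[1] and d[2] >= T_MIN)
missed = sum(1 for d in disruptive if not d[1])
false_alarms = sum(1 for d in safe if d[1])

success_rate = successful / len(disruptive)      # 1/3 here
missed_rate = missed / len(disruptive)           # 1/3 here
false_alarm_rate = false_alarms / len(safe)      # 1/2 here
print(success_rate, missed_rate, false_alarm_rate)
```

Computing all models' indices on the same discharge set, as the paper does, is what makes the comparison meaningful: a predictor can trade success rate against false alarms simply by lowering its alarm threshold.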