100 research outputs found
A Multilaboratory Comparison of Calibration Accuracy and the Performance of External References in Analytical Ultracentrifugation
Analytical ultracentrifugation (AUC) is a first-principles-based method to determine absolute sedimentation coefficients and buoyant molar masses of macromolecules and their complexes, reporting on their size and shape in free solution. The purpose of this multi-laboratory study was to establish the precision and accuracy of basic data dimensions in AUC and validate previously proposed calibration techniques. Three kits of AUC cell assemblies containing radial and temperature calibration tools and a bovine serum albumin (BSA) reference sample were shared among 67 laboratories, generating 129 comprehensive data sets. These allowed for an assessment of many parameters of instrument performance, including accuracy of the reported scan time after the start of centrifugation, the accuracy of the temperature calibration, and the accuracy of the radial magnification. The range of sedimentation coefficients obtained for BSA monomer in different instruments and using different optical systems was from 3.655 S to 4.949 S, with a mean and standard deviation of (4.304 ± 0.188) S (4.4%). After the combined application of correction factors derived from the external calibration references for elapsed time, scan velocity, temperature, and radial magnification, the range of s-values was reduced 7-fold, with a mean of 4.325 S and a 6-fold reduced standard deviation of ± 0.030 S (0.7%). In addition, the large data set provided an opportunity to determine the instrument-to-instrument variation of the absolute radial positions reported in the scan files, the precision of photometric or refractometric signal magnitudes, and the precision of the calculated apparent molar mass of BSA monomer and the fraction of BSA dimers. These results highlight the necessity and effectiveness of independent calibration of basic AUC data dimensions for reliable quantitative studies.
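As a rough illustration of the correction described above, a minimal sketch (assuming purely multiplicative correction factors) of how an experimental s-value could be adjusted using the external calibration references is shown below. The function name, factor values, and the simple product form are assumptions made for illustration, not the study's actual correction procedure.

```python
# Minimal sketch, assuming purely multiplicative correction factors derived
# from the external calibration references (elapsed time / scan velocity,
# radial magnification, temperature). Hypothetical values; not the study's
# actual correction procedure.

def corrected_s(s_measured, f_time, f_radial, f_temperature):
    """Apply dimensionless correction factors to a measured sedimentation
    coefficient; an ideally calibrated instrument would give factors of 1.0."""
    return s_measured * f_time * f_radial * f_temperature

# Hypothetical example: a BSA monomer s-value of 4.20 S corrected for a 1.0%
# time-base error, a 0.5% radial-magnification error, and a 1.5%
# temperature-related correction.
print(corrected_s(4.20, f_time=1.010, f_radial=1.005, f_temperature=1.015))
```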
Development of Risk Prediction Equations for Incident Chronic Kidney Disease
IMPORTANCE – Early identification of individuals at elevated risk of developing chronic kidney disease could improve clinical care through enhanced surveillance and better management of underlying health conditions.
OBJECTIVE – To develop assessment tools to identify individuals at increased risk of chronic kidney disease, defined by reduced estimated glomerular filtration rate (eGFR).
DESIGN, SETTING, AND PARTICIPANTS – Individual-level data analysis of 34 multinational cohorts from the CKD Prognosis Consortium including 5,222,711 individuals from 28 countries. Data were collected from April 1970 through January 2017. A two-stage analysis was performed, with each study first analyzed individually and then summarized overall using a weighted average. Because clinical variables were often differentially available by diabetes status, models were developed separately for participants with and without diabetes. Discrimination and calibration were also tested in 9 external cohorts (N = 2,253,540).
EXPOSURE – Demographic and clinical factors.
MAIN OUTCOMES AND MEASURES – Incident eGFR <60 ml/min/1.73 m².
RESULTS – In 4,441,084 participants without diabetes (mean age, 54 years; 38% female), there were 660,856 incident cases of reduced eGFR during a mean follow-up of 4.2 years. In 781,627 participants with diabetes (mean age, 62 years; 13% female), there were 313,646 incident cases during a mean follow-up of 3.9 years. Equations for the 5-year risk of reduced eGFR included age, sex, ethnicity, eGFR, history of cardiovascular disease, ever-smoker status, hypertension, BMI, and albuminuria. For participants with diabetes, the models also included diabetes medications, hemoglobin A1c, and the interaction between the two. The risk equations had a median C statistic for the 5-year predicted probability of 0.845 (25th-75th percentile, 0.789-0.890) in the cohorts without diabetes and 0.801 (25th-75th percentile, 0.750-0.819) in the cohorts with diabetes. Calibration analysis showed that 9 of 13 (69%) study populations had a slope of observed to predicted risk between 0.80 and 1.25. Discrimination was similar in 18 study populations in 9 external validation cohorts; calibration showed that 16 of 18 (89%) had a slope of observed to predicted risk between 0.80 and 1.25.
CONCLUSIONS AND RELEVANCE – Equations for predicting risk of incident chronic kidney disease developed in over 5 million people from 34 multinational cohorts demonstrated high discrimination and variable calibration in diverse populations.
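For readers unfamiliar with the two headline metrics above (C statistic and calibration slope), the sketch below computes generic versions of both on a tiny synthetic data set. It uses made-up numbers and textbook-style definitions, not the CKD Prognosis Consortium equations, data, or exact calibration procedure.

```python
# Generic definitions of the two validation metrics quoted in the abstract,
# applied to synthetic 5-year predicted risks and observed outcomes.
# Illustrative only; not the CKD-PC models or data.

def c_statistic(predicted, observed):
    """Concordance: fraction of (case, non-case) pairs in which the case
    received the higher predicted risk; ties count as one half."""
    cases = [p for p, o in zip(predicted, observed) if o == 1]
    controls = [p for p, o in zip(predicted, observed) if o == 0]
    score = 0.0
    for pc in cases:
        for pn in controls:
            score += 1.0 if pc > pn else (0.5 if pc == pn else 0.0)
    return score / (len(cases) * len(controls))

def calibration_slope(predicted, observed, n_groups=4):
    """Slope of observed event rate versus mean predicted risk across
    risk-ordered groups (least squares through the group means)."""
    order = sorted(range(len(predicted)), key=lambda i: predicted[i])
    size = len(order) // n_groups
    groups = [order[k * size:(k + 1) * size] for k in range(n_groups)]
    xs = [sum(predicted[i] for i in g) / len(g) for g in groups]
    ys = [sum(observed[i] for i in g) / len(g) for g in groups]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic example: 8 participants, 4 risk groups.
pred = [0.02, 0.05, 0.10, 0.20, 0.35, 0.50, 0.65, 0.80]
obs = [0, 0, 0, 1, 0, 1, 1, 1]
print(c_statistic(pred, obs), calibration_slope(pred, obs))
```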
Measurement of inclusive jet and dijet cross-sections in proton-proton collisions at √s = 13 TeV with the ATLAS detector
Inclusive jet and dijet cross-sections are measured in proton-proton collisions at a centre-of-mass energy of 13 TeV. The measurement uses a dataset with an integrated luminosity of 3.2 fb⁻¹ recorded in 2015 with the ATLAS detector at the Large Hadron Collider. Jets are identified using the anti-kt algorithm with a radius parameter value of R = 0.4. The inclusive jet cross-sections are measured double-differentially as a function of the jet transverse momentum, covering the range from 100 GeV to 3.5 TeV, and of the absolute jet rapidity up to |y| = 3. The double-differential dijet production cross-sections are presented as a function of the dijet mass, covering the range from 300 GeV to 9 TeV, and of y*, half the absolute rapidity separation between the two leading jets (each within |y| < 3), up to y* = 3. Next-to-leading-order (and, for the inclusive jet measurement, next-to-next-to-leading-order) perturbative QCD calculations, corrected for non-perturbative and electroweak effects, are compared to the measured cross-sections.
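As a generic illustration of what a double-differential cross-section measurement involves, the sketch below converts an unfolded jet count in a single (pT, |y|) bin into a cross-section using the integrated luminosity and the bin widths. The function and all numbers are invented for illustration; this is not the ATLAS analysis code.

```python
# Minimal sketch of a per-bin double-differential cross-section,
# d^2(sigma) / (dpT d|y|), from an unfolded jet count, the integrated
# luminosity, and the bin widths. Hypothetical numbers only.

def double_differential_xsec(n_unfolded, lumi_fb, dpt_gev, dy):
    """Cross-section in pb/GeV per unit |y|, assuming n_unfolded is the
    background-subtracted, detector-unfolded jet count in the bin."""
    sigma_fb = n_unfolded / lumi_fb      # cross-section for the bin, in fb
    sigma_pb = sigma_fb * 1.0e-3         # 1 pb = 1000 fb
    return sigma_pb / (dpt_gev * dy)     # divide by the bin widths

# Example: 1.2e5 unfolded jets in a 100-116 GeV pT bin with |y| < 0.5,
# using the 3.2 fb^-1 integrated luminosity quoted in the abstract.
print(double_differential_xsec(1.2e5, lumi_fb=3.2, dpt_gev=16.0, dy=0.5))
```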
A control oriented strategy of disruption prediction to avoid the configuration collapse of tokamak reactors
The objective of thermonuclear fusion consists of producing electricity from the coalescence of light nuclei in high temperature plasmas. The most promising route to fusion envisages the confinement of such plasmas with magnetic fields, whose most studied configuration is the tokamak. Disruptions are catastrophic collapses affecting all tokamak devices and one of the main potential showstoppers on the route to a commercial reactor. In this work we report how, deploying innovative analysis methods on thousands of JET experiments covering the isotopic compositions from hydrogen to full tritium and including the major D-T campaign, the nature of the various forms of collapse is investigated in all phases of the discharges. An original approach to proximity detection has been developed, which allows determining both the probability of and the time interval remaining before an incoming disruption, with adaptive, from-scratch, real-time compatible techniques. The results indicate that physics-based prediction and control tools can be developed to deploy realistic strategies of disruption avoidance and prevention, meeting the requirements of the next generation of devices. Confining plasma and managing disruptions in tokamak devices is a challenge; here the authors demonstrate a method for predicting, and possibly preventing, disruptions and macroscopic instabilities in tokamak plasmas using data from JET.
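Purely as a hypothetical sketch of what a real-time "proximity" output (disruption probability plus an estimated remaining time) could look like, the snippet below thresholds a risk time series and linearly extrapolates its recent trend. The indicator, threshold, and extrapolation rule are assumptions for illustration and are not the method developed in the paper.

```python
# Hypothetical proximity-style detector: raise an alarm when an estimated
# disruption probability crosses a threshold, and extrapolate the recent
# trend to guess the time remaining. Not the JET analysis from the paper.

def proximity_alarm(times, risk, threshold=0.8, fallback=0.2):
    """Return (alarm_time, estimated_time_to_disruption) or None.
    The remaining time is estimated by extrapolating the latest risk
    slope up to risk = 1; if the risk is not rising, use a fallback."""
    for k in range(1, len(times)):
        if risk[k] >= threshold:
            drdt = (risk[k] - risk[k - 1]) / (times[k] - times[k - 1])
            t_left = (1.0 - risk[k]) / drdt if drdt > 0 else fallback
            return times[k], t_left
    return None  # no alarm raised

# Synthetic example: risk ramps up late in the discharge (times in seconds).
t = [0.0, 0.1, 0.2, 0.3, 0.4, 0.5]
r = [0.05, 0.07, 0.10, 0.35, 0.82, 0.97]
print(proximity_alarm(t, r))
```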
Overview of JET results for optimising ITER operation
The JET 2019–2020 scientific and technological programme exploited the results of years of concerted scientific and engineering work, including the ITER-like wall (ILW: Be wall and W divertor) installed in 2010, improved diagnostic capabilities now fully available, and a major neutral beam injection upgrade providing record power in 2019–2020, and it tested the technical and procedural preparation for safe operation with tritium. Research along three complementary axes yielded a wealth of new results. Firstly, the JET plasma programme delivered scenarios suitable for high fusion power and alpha particle (α) physics in the coming D–T campaign (DTE2), with record sustained neutron rates, as well as plasmas for clarifying the impact of isotope mass on plasma core, edge and plasma-wall interactions, and for ITER pre-fusion power operation. The efficacy of the newly installed shattered pellet injector for mitigating disruption forces and runaway electrons was demonstrated. Secondly, research on the consequences of long-term exposure to JET-ILW plasma was completed, with emphasis on wall damage and fuel retention, and with analyses of wall materials and dust particles that will help validate assumptions and codes for the design and operation of ITER and DEMO. Thirdly, the nuclear technology programme, aiming to deliver maximum technological return from operations in D, T and D–T, benefited from the highest D–D neutron yield in years, securing results for validating radiation transport and activation codes, and nuclear data for ITER.
The role of ETG modes in JET-ILW pedestals with varying levels of power and fuelling
We present the results of GENE gyrokinetic calculations based on a series of JET-ITER-like-wall (ILW) type I ELMy H-mode discharges operating with similar experimental inputs but at different levels of power and gas fuelling. We show that turbulence due to electron-temperature-gradient (ETG) modes produces a significant amount of heat flux in four JET-ILW discharges and, when combined with neoclassical simulations, is able to reproduce the experimental heat flux for the two low-gas pulses. The simulations plausibly reproduce the high-gas heat fluxes as well, although power balance analysis is complicated by short ELM cycles. By independently varying the normalised temperature gradient (ω_Te) and normalised density gradient (ω_ne) around their experimental values, we demonstrate that it is the ratio of these two quantities, η_e = ω_Te/ω_ne, that determines the location of the peak in the ETG growth rate and heat flux spectra. The heat flux increases rapidly as η_e increases above the experimental point, suggesting that ETGs limit the temperature gradient in these pulses. When quantities are normalised using the minor radius, only increases in ω_Te produce appreciable increases in the ETG growth rates, as well as the largest increases in turbulent heat flux, which follow scalings similar to that of critical balance theory. However, when the heat flux is normalised to the electron gyro-Bohm heat flux using the temperature gradient scale length L_Te, it follows a linear trend, in correspondence with previous work by other authors.
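For reference, a standard set of gyrokinetic gradient definitions consistent with the notation above is written out below; the normalisation to the minor radius a is assumed here, and the paper's exact conventions may differ.

```latex
% Assumed (standard) definitions; a denotes the minor radius.
\begin{align*}
  \omega_{T_e} = \frac{a}{L_{T_e}} = -\frac{a}{T_e}\frac{\mathrm{d}T_e}{\mathrm{d}r},
  \qquad
  \omega_{n_e} = \frac{a}{L_{n_e}} = -\frac{a}{n_e}\frac{\mathrm{d}n_e}{\mathrm{d}r},
  \qquad
  \eta_e = \frac{\omega_{T_e}}{\omega_{n_e}} = \frac{L_{n_e}}{L_{T_e}}.
\end{align*}
```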
Spectroscopic camera analysis of the roles of molecularly assisted reaction chains during detachment in JET L-mode plasmas
The roles of the molecularly assisted ionization (MAI), recombination (MAR) and dissociation (MAD) reaction chains relative to the purely atomic ionization and recombination processes were studied experimentally during detachment in low-confinement mode (L-mode) plasmas in JET, with the help of experimentally inferred divertor plasma and neutral conditions, extracted previously from filtered camera observations of deuterium Balmer emission, and the reaction coefficients provided by the ADAS, AMJUEL and H2VIBR atomic and molecular databases. The direct contribution of MAI and MAR to the outer divertor particle balance was found to be smaller than that of electron-atom ionization (EAI) and electron-ion recombination (EIR). Near the outer strike point, however, a strong atom source due to D2+-driven MAD was observed to correlate with the onset of detachment at outer strike point temperatures of T_e,osp = 0.9-2.0 eV via increased plasma-neutral interactions, before the increasing dominance of EIR at T_e,osp < 0.9 eV and a further increasing degree of detachment. The analysis was supported by predictions from EDGE2D-EIRENE simulations, which were in qualitative agreement with the experimental observations.
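For context, a representative set of deuterium channels often used in the divertor-physics literature to define these chains is listed below. The channel-to-chain labels are indicative only; the paper's exact bookkeeping, based on vibrationally resolved rates from ADAS, AMJUEL and H2VIBR, is not reproduced here.

```latex
% Representative channels; chain labels are indicative only.
\begin{align*}
  \mathrm{D_2 + D^+} &\rightarrow \mathrm{D_2^+ + D}
    && \text{molecular charge exchange (seeds the chains)}\\
  \mathrm{D_2^+ + e^-} &\rightarrow \mathrm{D + D}
    && \text{dissociative recombination (MAR)}\\
  \mathrm{D_2^+ + e^-} &\rightarrow \mathrm{D^+ + D + e^-}
    && \text{molecular-ion dissociation (atom source; MAD/MAI)}\\
  \mathrm{D_2 + e^-} &\rightarrow \mathrm{D + D + e^-}
    && \text{direct dissociation (MAD)}\\
  \mathrm{D_2 + e^-} &\rightarrow \mathrm{D_2^+ + 2e^-}
    && \text{molecular ionization (starts MAI)}
\end{align*}
```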
- …