28 research outputs found

    Integrated plasma proteomic and single-cell immune signaling network signatures demarcate mild, moderate, and severe COVID-19

    The biological determinants underlying the range of coronavirus disease 2019 (COVID-19) clinical manifestations are not fully understood. Here, over 1,400 plasma proteins and 2,600 single-cell immune features comprising cell phenotype, endogenous signaling activity, and signaling responses to inflammatory ligands are cross-sectionally assessed in peripheral blood from 97 patients with mild, moderate, and severe COVID-19 and 40 uninfected patients. Using an integrated computational approach to analyze the combined plasma and single-cell proteomic data, we identify and independently validate a multivariate model classifying COVID-19 severity (multi-class area under the curve [AUC] of 0.799 in training, p = 4.2e-6, and 0.773 in validation, p = 7.7e-6). Examination of informative model features reveals biological signatures of COVID-19 severity, including the dysregulation of JAK/STAT, MAPK/mTOR, and nuclear factor κB (NF-κB) immune signaling networks, in addition to recapitulating known hallmarks of COVID-19. These results provide a set of early determinants of COVID-19 severity that may point to therapeutic targets for prevention and/or treatment of COVID-19 progression.
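
    As a rough illustration of the multi-class AUC metric quoted above, the sketch below scores a stand-in classifier with scikit-learn's one-vs-rest, macro-averaged ROC AUC. The model, features, and random data are placeholders, not the study's actual pipeline; only the cohort size (97 + 40 = 137) is taken from the abstract.

```python
# Illustrative sketch only: a multinomial classifier scored with
# multi-class (one-vs-rest, macro-averaged) ROC AUC, as in the abstract.
# Random data stands in for the plasma + single-cell feature matrix.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(137, 50))    # placeholder: 137 samples x 50 features
y = rng.integers(0, 3, size=137)  # placeholder classes: 0=mild, 1=moderate, 2=severe

X_tr, X_va, y_tr, y_va = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y
)

# Stand-in for the study's multivariate model.
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
proba = model.predict_proba(X_va)

# One-vs-rest multi-class AUC; 0.5 is chance level for random data like this.
auc = roc_auc_score(y_va, proba, multi_class="ovr", average="macro")
print(f"multi-class AUC (validation) = {auc:.3f}")
```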

    Shedding Light on the Galaxy Luminosity Function

    From as early as the 1930s, astronomers have tried to quantify the statistical nature of the evolution and large-scale structure of galaxies by studying their luminosity distribution as a function of redshift, known as the galaxy luminosity function (LF). Accurately constructing the LF remains a popular and yet tricky pursuit in modern observational cosmology, where observational selection effects due to, e.g., detection thresholds in apparent magnitude, colour, surface brightness, or some combination thereof can render any given galaxy survey incomplete and thus introduce bias into the LF. Over the last seventy years, numerous sophisticated statistical approaches have been devised to tackle these issues; all have advantages, but none is perfect. This review takes a broad historical look at the key statistical tools developed over this period, discussing their relative merits and highlighting any significant extensions and modifications. In addition, the more generalised methods that have emerged within the last few years are examined; these propose a more rigorous statistical framework within which to determine the LF than some of the more traditional methods. I also look at how photometric redshift estimates are being incorporated into the LF methodology, as well as considering the construction of bivariate LFs. Finally, I review the ongoing development of completeness estimators, which test some of the fundamental assumptions going into LF estimators and can be powerful probes of any residual systematic effects inherent in magnitude-redshift data.
    Comment: 95 pages, 23 figures, 3 tables. Now published in The Astronomy & Astrophysics Review. This version: brought in line with A&AR format requirements; minor typo corrections; additional citations and higher-resolution images added.
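
    One of the "traditional" estimators the review surveys is Schmidt's (1968) 1/Vmax method, which weights each galaxy by the inverse of the maximum comoving volume within which it would still pass the survey's flux limit. Below is a minimal sketch under invented assumptions: a flat LCDM cosmology, a single apparent-magnitude limit, a made-up survey footprint, and four example galaxies.

```python
# A 1/Vmax luminosity-function sketch (Schmidt 1968). The cosmology,
# magnitude limit, survey area, and example galaxies are all made up.
import numpy as np
import astropy.units as u
from astropy.cosmology import FlatLambdaCDM, z_at_value

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)
m_lim = 19.0       # hypothetical apparent-magnitude limit of the survey
area_frac = 0.05   # hypothetical survey footprint as a fraction of the sky

# Example galaxies: absolute magnitudes M.
M = np.array([-19.5, -20.2, -21.0, -21.5])

def z_max(M_abs):
    """Redshift at which a galaxy of absolute magnitude M_abs reaches m_lim."""
    # Distance modulus m = M + 5 log10(d_L / 10 pc): solve for d_L, invert to z.
    d_L = 10 ** ((m_lim - M_abs + 5) / 5) * u.pc
    return float(z_at_value(cosmo.luminosity_distance, d_L.to(u.Mpc)))

# V_max per galaxy: comoving volume out to z_max, scaled to the footprint.
V_max = np.array([
    (area_frac * cosmo.comoving_volume(z_max(Mi))).to_value(u.Mpc ** 3)
    for Mi in M
])

# Binned LF: phi(M) = sum of 1/V_max over galaxies in each magnitude bin,
# divided by the bin width (units: Mpc^-3 mag^-1).
bins = np.arange(-22.0, -18.9, 1.0)
phi, _ = np.histogram(M, bins=bins, weights=1.0 / V_max)
phi /= np.diff(bins)
print(phi)
```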

    Lawson criterion for ignition exceeded in an inertial fusion experiment

    For more than half a century, researchers around the world have been engaged in attempts to achieve fusion ignition as a proof of principle of various fusion concepts. Following the Lawson criterion, an ignited plasma is one where the fusion heating power is high enough to overcome all the physical processes that cool the fusion plasma, creating a positive thermodynamic feedback loop with rapidly increasing temperature. In inertially confined fusion, ignition is a state where the fusion plasma can begin "burn propagation" into the surrounding cold fuel, enabling the possibility of high energy gain. While "scientific breakeven" (i.e., unity target gain) has not yet been achieved (here the target gain is 0.72: 1.37 MJ of fusion for 1.92 MJ of laser energy), this Letter reports the first controlled fusion experiment, using laser indirect drive on the National Ignition Facility, to produce capsule gain (here 5.8) and to reach ignition by nine different formulations of the Lawson criterion.
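
    Two distinct gains appear in these numbers: target gain normalizes the fusion yield by the laser energy delivered to the target, while capsule gain normalizes by the much smaller energy absorbed by the capsule. A quick consistency check follows; the absorbed energy is inferred from the quoted capsule gain rather than stated in the abstract, and the quoted 0.72 presumably reflects unrounded energies.

```latex
G_{\mathrm{target}} = \frac{E_{\mathrm{fusion}}}{E_{\mathrm{laser}}}
  = \frac{1.37\ \mathrm{MJ}}{1.92\ \mathrm{MJ}} \approx 0.71,
\qquad
G_{\mathrm{capsule}} = \frac{E_{\mathrm{fusion}}}{E_{\mathrm{absorbed}}} \approx 5.8
\ \Rightarrow\
E_{\mathrm{absorbed}} \approx \frac{1.37\ \mathrm{MJ}}{5.8} \approx 0.24\ \mathrm{MJ}.
```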

    Importance of interlayer H bonding structure to the stability of layered minerals

    Layered (oxy)hydroxide minerals often possess out-of-plane hydrogen atoms that form hydrogen-bonding networks which stabilize the layered structure. However, less is known about how the ordering of these bonds affects the structural stability and solubility of these minerals. Here, we report a new strategy that uses a focused electron beam to probe the effect of differences in hydrogen-bonding networks on mineral solubility. In this regard, the dissolution behaviors of boehmite (γ-AlOOH) and gibbsite (γ-Al(OH)3) were compared and contrasted in real time via liquid-cell electron microscopy. Under identical conditions, 2D nanosheets of boehmite exfoliated from the bulk and then rapidly dissolved, whereas gibbsite was stable. Further, substitution of only 1% Fe(III) for Al(III) in the structure of boehmite inhibited delamination and dissolution. Factors such as pH, radiolytic species, and knock-on damage were systematically studied and eliminated as proximal causes of boehmite dissolution. Instead, the creation of electron/hole pairs is considered to be the mechanism that drove dissolution. The widely disparate behaviors of boehmite, gibbsite, and Fe-doped boehmite are discussed in the context of differences in OH bond strengths, hydrogen-bonding networks, and the presence or absence of electron/hole recombination centers.

    Outcomes of the JNT 1955 Phase I Viability Study of Gamma Emission Tomography for Spent Fuel Verification

    The potential for gamma emission tomography (GET) to detect partial defects within a spent nuclear fuel assembly has been assessed within the IAEA Support Program project JNT 1955, Phase I, which was completed and reported to the IAEA in October 2016. Two safeguards verification objectives were identified in the project: (1) independent determination of the number of active pins present in a measured assembly, in the absence of a priori information about the assembly; and (2) quantitative assessment of pin-by-pin properties, for example the activity of key isotopes or pin attributes such as cooling time and relative burnup, under the assumption that basic fuel parameters (e.g., assembly type and nominal fuel composition) are known. The efficacy of GET in meeting these two verification objectives was evaluated across a range of fuel types, burnups, and cooling times, while targeting a total interrogation time of less than 60 minutes. The evaluations were founded on a modelling and analysis framework applied to existing and emerging GET instrument designs. Monte Carlo models of different fuel types were used to produce simulated tomographer responses to large populations of “virtual” fuel assemblies. The simulated instrument-response data were then processed using a variety of tomographic-reconstruction and image-processing methods, and scoring metrics were defined and used to evaluate the performance of the methods. This paper describes the analysis framework and metrics used to predict tomographer performance. It also presents the design of a “universal” GET (UGET) instrument intended to support the full range of verification scenarios envisioned by the IAEA. Finally, it gives examples of the expected partial-defect detection capabilities for some fuels and diversion scenarios, and it provides a comparison of predicted performance for the notional UGET design and an optimized variant of an existing IAEA instrument.
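
    For a flavor of what tomographic reconstruction plus a pin-counting metric can look like, here is a toy sketch using scikit-image's standard Radon transform and filtered back-projection. The pin layout, noise model, and detection threshold are invented for illustration and bear no relation to the project's actual codes or instrument responses.

```python
# Toy illustration (not the JNT 1955 codes): count active fuel pins in a
# simulated assembly via filtered back-projection and thresholding.
import numpy as np
from skimage.transform import radon, iradon

# Phantom: a 4x4 grid of "pins" on a 128x128 image, with two pins missing
# (a crude stand-in for a partial-defect diversion scenario).
img = np.zeros((128, 128))
centers = [(32 + 21 * i, 32 + 21 * j) for i in range(4) for j in range(4)]
missing = {(32, 32), (95, 74)}  # hypothetical diverted pin positions
yy, xx = np.mgrid[0:128, 0:128]
for cy, cx in centers:
    if (cy, cx) not in missing:
        img[(yy - cy) ** 2 + (xx - cx) ** 2 <= 6 ** 2] = 1.0  # emitting pin

# Simulated tomographer response (sinogram) with a little counting noise.
theta = np.linspace(0.0, 180.0, 120, endpoint=False)
sino = radon(img, theta=theta)
sino += np.random.default_rng(1).normal(scale=0.5, size=sino.shape)

# Filtered back-projection, then pin-by-pin scoring: a pin is declared
# "active" if its reconstructed mean intensity exceeds a threshold.
rec = iradon(sino, theta=theta, filter_name="ramp")
active = sum(
    rec[(yy - cy) ** 2 + (xx - cx) ** 2 <= 6 ** 2].mean() > 0.5
    for cy, cx in centers
)
print(f"declared active pins: {active} / {len(centers)} (truth: 14)")
```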