
    Preprocessing Solar Images while Preserving their Latent Structure

    Telescopes such as the Atmospheric Imaging Assembly aboard the Solar Dynamics Observatory, a NASA satellite, collect massive streams of high-resolution images of the Sun through multiple wavelength filters. Reconstructing pixel-by-pixel thermal properties based on these images can be framed as an ill-posed inverse problem with Poisson noise, but this reconstruction is computationally expensive and there is disagreement among researchers about what regularization or prior assumptions are most appropriate. This article presents an image segmentation framework for preprocessing such images in order to reduce the data volume while preserving as much thermal information as possible for later downstream analyses. The resulting segmented images reflect thermal properties but do not depend on solving the ill-posed inverse problem. This allows users to avoid the Poisson inverse problem altogether or to tackle it on each of ∼10 segments rather than on each of ∼10^7 pixels, reducing computing time by a factor of ∼10^6. We employ a parametric class of dissimilarities that can be expressed as cosine dissimilarity functions or Hellinger distances between nonlinearly transformed vectors of multi-passband observations in each pixel. We develop a decision-theoretic framework for choosing the dissimilarity that minimizes the expected loss that arises when estimating identifiable thermal properties based on segmented images rather than on a pixel-by-pixel basis. We also examine the efficacy of different dissimilarities for recovering clusters in the underlying thermal properties. The expected losses are computed under scientifically motivated prior distributions. Two simulation studies guide our choices of dissimilarity function. We illustrate our method by segmenting images of a coronal hole observed on 26 February 2015.
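    In their simplest special cases, the dissimilarities described above reduce to a cosine dissimilarity or a Hellinger distance between per-pixel multi-passband intensity vectors. A minimal sketch of those two special cases on hypothetical six-passband pixel vectors (the parametric family and the nonlinear transformation from the paper are omitted):

```python
import numpy as np

def cosine_dissimilarity(u, v):
    # 1 minus the cosine of the angle between two pixel observation vectors
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def hellinger_distance(u, v):
    # Hellinger distance between the vectors after rescaling each to unit sum
    p, q = u / u.sum(), v / v.sum()
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

# Two hypothetical 6-passband intensity vectors for a pair of pixels
pixel_a = np.array([120.0, 340.0, 80.0, 15.0, 220.0, 60.0])
pixel_b = np.array([100.0, 310.0, 95.0, 20.0, 200.0, 70.0])

d_cos = cosine_dissimilarity(pixel_a, pixel_b)
d_hel = hellinger_distance(pixel_a, pixel_b)
```

    Both measures vanish for identical pixels; the Hellinger distance is bounded by 1, attained when the two vectors have disjoint support.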

    Incorporating Uncertainties in Atomic Data Into the Analysis of Solar and Stellar Observations: A Case Study in Fe XIII

    Information about the physical properties of astrophysical objects cannot be measured directly but is inferred by interpreting spectroscopic observations in the context of atomic physics calculations. Ratios of emission lines, for example, can be used to infer the electron density of the emitting plasma. Similarly, the relative intensities of emission lines formed over a wide range of temperatures yield information on the temperature structure. A critical component of this analysis is understanding how uncertainties in the underlying atomic physics propagate to uncertainties in the inferred plasma parameters. At present, however, atomic physics databases do not include uncertainties on the atomic parameters, and there is no established methodology for using them even if they did. In this paper we develop simple models for the uncertainties in the collision strengths and decay rates for Fe XIII and apply them to the interpretation of density-sensitive lines observed with the EUV Imaging Spectrometer (EIS) on Hinode. We incorporate these uncertainties in a Bayesian framework. We consider both a pragmatic Bayesian method where the atomic physics information is unaffected by the observed data, and a fully Bayesian method where the data can be used to probe the physics. The former generally increases the uncertainty in the inferred density by about a factor of 5 compared with models that incorporate only statistical uncertainties. The latter reduces the uncertainties on the inferred densities, but identifies areas of possible systematic problems with either the atomic physics or the observed intensities.
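    The pragmatic Bayesian scheme can be illustrated with a toy model: the uncertain atomic parameter is drawn from its prior alone (the observed ratio never updates it), and the density is sampled conditionally on each draw. The line-ratio model, constants, and the 10% atomic uncertainty below are invented for illustration and are not the Fe XIII atomic data or the paper's factor-of-5 result:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical density-sensitive line-ratio model: R(n) = a * n / (n + n_c),
# where a is an uncertain atomic parameter (nominally 1) and n_c is fixed.
N_C = 1.0e9  # cm^-3, assumed critical density

def ratio_model(log_n, a):
    n = 10.0 ** log_n
    return a * n / (n + N_C)

R_OBS, SIGMA_R = 0.4, 0.02              # observed ratio and statistical error
log_n_grid = np.linspace(7.0, 11.0, 800)

def posterior_draws(n_draws, sigma_atomic):
    # Pragmatic Bayesian: a is drawn from its prior only (data never update it);
    # log density is then sampled from the conditional posterior on a grid.
    draws = np.empty(n_draws)
    for i in range(n_draws):
        a = rng.lognormal(0.0, sigma_atomic)
        like = np.exp(-0.5 * ((ratio_model(log_n_grid, a) - R_OBS) / SIGMA_R) ** 2)
        draws[i] = rng.choice(log_n_grid, p=like / like.sum())
    return draws

std_stat = posterior_draws(3000, 0.00).std()   # statistical uncertainty only
std_full = posterior_draws(3000, 0.10).std()   # plus ~10% atomic uncertainty
```

    Even this toy version shows the qualitative effect the abstract describes: folding the atomic prior into the sampling widens the density posterior relative to the statistics-only case.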

    Constraining black hole mimickers with gravitational wave observations

    LIGO and Virgo have recently observed a number of gravitational wave (GW) signals that are fully consistent with being emitted by binary black holes described by general relativity. However, there are theoretical proposals of exotic objects that can be massive and compact enough to be easily confused with black holes. These objects nevertheless differ from black holes in having nonzero tidal deformabilities, which can allow one to distinguish binaries containing such objects from binary black holes using GW observations. Using full Bayesian parameter estimation, we investigate the possibility of constraining the parameter space of such "black hole mimickers" with upcoming GW observations. Employing perfect fluid stars with a polytropic equation of state as a simple model that can encompass a variety of possible black hole mimickers, we show how the observed masses and tidal deformabilities of a binary constrain the equation of state. We also show how such constraints can be used to rule out some simple models of boson stars.
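    The mimicker model above uses a polytropic equation of state, P = K ρ^Γ, whose parameters (K, Γ) are what the observed masses and tidal deformabilities constrain. A minimal sketch of such an EOS and its sound speed, in geometric units with c = 1 and with illustrative parameter values (not the paper's):

```python
def polytrope_pressure(rho, K=100.0, gamma=2.0):
    # Polytropic equation of state: P = K * rho**gamma
    return K * rho ** gamma

def sound_speed_squared(rho, K=100.0, gamma=2.0):
    # c_s^2 = dP/drho = K * gamma * rho**(gamma - 1), in units with c = 1
    return K * gamma * rho ** (gamma - 1)

# Causality requires c_s^2 <= 1; for gamma = 2 this caps the density at
# the solution of K * gamma * rho = 1:
rho_max_causal = 1.0 / (2.0 * 100.0)
```

    In a full analysis these relations feed the Tolman-Oppenheimer-Volkoff equations and the tidal Love number calculation, which map (K, Γ) to the masses and tidal deformabilities the GW data measure.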

    Current worldwide nuclear cardiology practices and radiation exposure: results from the 65-country IAEA Nuclear Cardiology Protocols Cross-Sectional Study (INCAPS)

    Aims To characterize patient radiation doses from nuclear myocardial perfusion imaging (MPI) and the use of radiation-optimizing ‘best practices’ worldwide, and to evaluate the relationship between laboratory use of best practices and patient radiation dose. Methods and results We conducted an observational cross-sectional study of protocols used for all 7911 MPI studies performed in 308 nuclear cardiology laboratories in 65 countries for a single week in March-April 2013. Eight ‘best practices’ relating to radiation exposure were identified a priori by an expert committee, and a radiation-related quality index (QI) devised indicating the number of best practices used by a laboratory. Patient radiation effective dose (ED) ranged between 0.8 and 35.6 mSv (median 10.0 mSv). Average laboratory ED ranged from 2.2 to 24.4 mSv (median 10.4 mSv); only 91 (30%) laboratories achieved the median ED ≤ 9 mSv recommended by guidelines. Laboratory QIs ranged from 2 to 8 (median 5). Both ED and QI differed significantly between laboratories, countries, and world regions. The lowest median ED (8.0 mSv), in Europe, coincided with high best-practice adherence (mean laboratory QI 6.2). The highest doses (median 12.1 mSv) and low QI (4.9) occurred in Latin America. In hierarchical regression modelling, patients undergoing MPI at laboratories following more ‘best practices’ had lower EDs. Conclusion Marked worldwide variation exists in radiation safety practices pertaining to MPI, with targeted EDs currently achieved in a minority of laboratories. The significant relationship between best-practice implementation and lower doses indicates numerous opportunities to reduce radiation exposure from MPI globally.
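    The quality index above is simply the count of catalogued best practices a laboratory follows. A minimal sketch of that scoring, with placeholder labels for the eight practices (the study's committee defined the actual list, which is not reproduced here):

```python
def quality_index(followed: set, catalogue: frozenset) -> int:
    """Radiation-related quality index (QI): the number of catalogued
    best practices a laboratory follows."""
    return len(followed & catalogue)

# Hypothetical placeholder labels for the eight a-priori best practices
CATALOGUE = frozenset(f"best_practice_{i}" for i in range(1, 9))

# A lab following five of the eight practices scores the study's median QI of 5
lab = {"best_practice_1", "best_practice_3", "best_practice_4",
       "best_practice_6", "best_practice_8"}
qi = quality_index(lab, CATALOGUE)
```

    Intersecting with the catalogue makes the score robust to labs reporting practices outside the a-priori list.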

    Risk Factors for and Prediction of Post-Intubation Hypotension in Critically Ill Adults: A Multicenter Prospective Cohort Study

    OBJECTIVE: Hypotension following endotracheal intubation in the ICU is associated with poor outcomes. There is no formal prediction tool to help estimate the onset of this hemodynamic compromise. Our objective was to derive and validate a prediction model for immediate hypotension following endotracheal intubation. METHODS: A multicenter, prospective, cohort study enrolling 934 adults who underwent endotracheal intubation across 16 medical/surgical ICUs in the United States from July 2015-January 2017 was conducted to derive and validate a prediction model for immediate hypotension following endotracheal intubation. We defined hypotension as: 1) mean arterial pressure < 65 mmHg; 2) systolic blood pressure < 80 mmHg and/or decrease in systolic blood pressure of 40% from baseline; or 3) the initiation or increase in any vasopressor in the 30 minutes following endotracheal intubation. RESULTS: Post-intubation hypotension developed in 344 (36.8%) patients. In the full cohort, 11 variables were independently associated with hypotension: increasing illness severity; increasing age; sepsis diagnosis; endotracheal intubation in the setting of cardiac arrest, mean arterial pressure < 65 mmHg, and acute respiratory failure; diuretic use 24 hours preceding endotracheal intubation; decreasing systolic blood pressure from 130 mmHg; catecholamine and phenylephrine use immediately prior to endotracheal intubation; and use of etomidate during endotracheal intubation. A model excluding patients who were unstable pre-intubation (those receiving catecholamine vasopressors and/or who were intubated in the setting of cardiac arrest) was also developed and included the above variables with the exception of sepsis and etomidate. In the full cohort, the 11-variable model had a C-statistic of 0.75 (95% CI 0.72, 0.78). In the stable cohort, the 7-variable model C-statistic was 0.71 (95% CI 0.67, 0.75).
In both cohorts, a clinical risk score was developed stratifying patients' risk of hypotension. CONCLUSIONS: A novel multivariable risk score accurately predicted post-intubation hypotension in both unstable and stable critically ill patients. STUDY REGISTRATION: Clinicaltrials.gov identifier: NCT02508948 and Registered Report Identifier: RR2-10.2196/11101
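    The study reports discrimination as a C-statistic: the probability that a randomly chosen patient who developed hypotension received a higher predicted risk than one who did not, with ties counted as one half. A minimal sketch of an additive point score and the C-statistic over score/outcome pairs; the factor names and integer weights below are invented for illustration and are not the published score:

```python
# Hypothetical integer point weights -- illustrative only, not the study's
POINT_WEIGHTS = {
    "sepsis_diagnosis": 2,
    "map_below_65_mmhg": 3,
    "cardiac_arrest_setting": 3,
    "diuretic_within_24h": 1,
    "etomidate_used": 1,
}

def risk_score(patient: dict) -> int:
    # Sum the points for every risk factor the patient has
    return sum(w for factor, w in POINT_WEIGHTS.items() if patient.get(factor))

def c_statistic(scores, outcomes):
    # P(score of a hypotensive patient > score of a non-hypotensive one);
    # tied scores contribute 1/2 to the concordance
    events = [s for s, y in zip(scores, outcomes) if y]
    nonevents = [s for s, y in zip(scores, outcomes) if not y]
    concordant = sum(1.0 if e > n else 0.5 if e == n else 0.0
                     for e in events for n in nonevents)
    return concordant / (len(events) * len(nonevents))
```

    A score that perfectly separates events from non-events yields a C-statistic of 1.0; a score carrying no information yields 0.5, which is why the reported 0.75 and 0.71 indicate moderate discrimination.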

    Insulin resistance, lipotoxicity, type 2 diabetes and atherosclerosis: the missing links. The Claude Bernard Lecture 2009

    Insulin resistance is a hallmark of type 2 diabetes mellitus and is associated with a metabolic and cardiovascular cluster of disorders (dyslipidaemia, hypertension, obesity [especially visceral], glucose intolerance, endothelial dysfunction), each of which is an independent risk factor for cardiovascular disease (CVD). Multiple prospective studies have documented an association between insulin resistance and accelerated CVD in patients with type 2 diabetes, as well as in non-diabetic individuals. The molecular causes of insulin resistance, i.e. impaired insulin signalling through the phosphatidylinositol 3-kinase pathway with intact signalling through the mitogen-activated protein kinase pathway, are responsible for the impairment in insulin-stimulated glucose metabolism and contribute to the accelerated rate of CVD in type 2 diabetes patients. The current epidemic of diabetes is being driven by the obesity epidemic, which represents a state of tissue fat overload. Accumulation of toxic lipid metabolites (fatty acyl CoA, diacylglycerol, ceramide) in muscle, liver, adipocytes, beta cells and arterial tissues contributes to insulin resistance, beta cell dysfunction and accelerated atherosclerosis, respectively, in type 2 diabetes. Treatment with thiazolidinediones mobilises fat out of tissues, leading to enhanced insulin sensitivity, improved beta cell function and decreased atherogenesis. Insulin resistance and lipotoxicity represent the missing links (beyond the classical cardiovascular risk factors) that help explain the accelerated rate of CVD in type 2 diabetic patients.

    BLOOM: A 176B-Parameter Open-Access Multilingual Language Model

    Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built through a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.