
    Bolometry for Divertor Characterization and Control

    Operation of the divertor will provide one of the greatest challenges for ITER. Up to 400 MW of power is expected to be produced in the core plasma, which must then be handled by plasma-facing components. Power flowing across the separatrix and into the scrape-off layer (SOL) can lead to a heat flux in the divertor of 30 MW/m² if nothing is done to dissipate the power. This peak heat flux must be reduced to 5 MW/m² for an acceptable engineering design. The current plan is to use impurity radiation and other atomic processes from intrinsic or injected impurities to spread the power out onto the first wall and divertor chamber walls. It is estimated that 300 MW of radiation in the divertor and SOL will be necessary to achieve this solution. Measurement of the magnitude and distribution of this radiated power with bolometry will be important for understanding and controlling the ITER divertor. Present experiments have shown intense regions of radiation both in the divertor near the separatrix and in the X-point region. The task of a divertor bolometer system will be to measure the distribution and magnitude of this radiation. First, radiation measurements can be used for machine protection: intense divertor radiation will heat plasma-facing surfaces that are not in direct view of temperature monitors, and measurement of the radiation distribution will provide information about the power flux to these components. Second, a bolometer diagnostic is a basic tool for divertor characterization and understanding. Radiation measurements are important for power accounting, as a cross-check for other power diagnostics, and for gross characterization of plasma behavior. A divertor bolometer system can provide a 2-D measurement of the radiation profile for comparison with theory and modeling. Finally, a bolometer system can provide real-time signals for control of divertor operation.
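
    As a quick consistency check on these figures (the arithmetic below is illustrative, not the authors'), the required radiated fraction and the heat-flux reduction factor follow directly from the numbers quoted in the abstract:

        \[
        \frac{P_{\mathrm{rad}}}{P_{\mathrm{core}}} = \frac{300\ \mathrm{MW}}{400\ \mathrm{MW}} = 0.75,
        \qquad
        \frac{q_{\mathrm{peak}}}{q_{\mathrm{target}}} = \frac{30\ \mathrm{MW/m^2}}{5\ \mathrm{MW/m^2}} = 6,
        \]

    i.e. roughly three quarters of the core power must be radiated, and the unmitigated peak heat flux exceeds the engineering limit by a factor of six.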

    OPA1 disease alleles causing dominant optic atrophy have defects in cardiolipin-stimulated GTP hydrolysis and membrane tubulation

    The dynamin-related GTPase OPA1 is mutated in autosomal dominant optic atrophy (DOA) (Kjer type), an inherited neuropathy of the retinal ganglion cells. OPA1 is essential for the fusion of the inner mitochondrial membranes, but its mechanism of action remains poorly understood. Here we show that OPA1 has a low basal rate of GTP hydrolysis that is dramatically enhanced by association with liposomes containing negatively charged phospholipids such as cardiolipin. Lipid association triggers assembly of OPA1 into higher-order oligomers. In addition, we find that OPA1 can promote the protrusion of lipid tubules from the surface of cardiolipin-containing liposomes. In such lipid protrusions, OPA1 assemblies are observed on the outside of the lipid tubule surface, a protein-membrane topology similar to that of classical dynamins. The membrane tubulation activity of OPA1 is suppressed by GTPγS. OPA1 disease alleles associated with DOA display selective defects in several activities, including cardiolipin association, GTP hydrolysis, and membrane tubulation. These findings indicate that interaction of OPA1 with membranes can stimulate higher-order assembly, enhance GTP hydrolysis, and lead to membrane deformation into tubules.

    Stillbirth risk prediction using machine learning for a large cohort of births from Western Australia, 1980–2015

    Quantification of stillbirth risk has the potential to support clinical decision-making. Studies that have attempted to quantify stillbirth risk have been hampered by small event rates, a limited range of predictors that typically exclude obstetric history, lack of validation, and restriction to a single classifier (logistic regression). Consequently, predictive performance remains low, and risk quantification has not been adopted into antenatal practice. The study population consisted of all births to women in Western Australia from 1980 to 2015, excluding terminations. After all exclusions there were 947,025 livebirths and 5,788 stillbirths. Predictive models for stillbirth were developed using multiple machine learning classifiers: regularised logistic regression, decision trees based on classification and regression trees, random forest, extreme gradient boosting (XGBoost), and a multilayer perceptron neural network. We applied 10-fold cross-validation using independent data not used to develop the models. Predictors included maternal socio-demographic characteristics, chronic medical conditions, obstetric complications, and family history in both the current and previous pregnancy. In this cohort, 66% of stillbirths occurred in multiparous women. The best-performing classifier (XGBoost) predicted 45% (95% CI: 43%, 46%) of stillbirths for all women and 45% (95% CI: 43%, 47%) of stillbirths after the inclusion of previous pregnancy history. Almost half of stillbirths could potentially be identified antenatally based on a combination of current pregnancy complications, congenital anomalies, maternal characteristics, and medical history. The greatest sensitivity is achieved with the addition of current pregnancy complications. Ensemble classifiers offered marginal improvement in prediction compared to logistic regression.
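
    For illustration only, a minimal sketch of the kind of pipeline the abstract describes: XGBoost evaluated with 10-fold cross-validation, scored on sensitivity. The study does not publish code; the file name, column names, and parameters below are assumptions, not the authors' implementation.

        # Hypothetical sketch; the data file and columns are invented for illustration.
        import pandas as pd
        from sklearn.model_selection import StratifiedKFold, cross_val_score
        from xgboost import XGBClassifier

        df = pd.read_csv("wa_births_1980_2015.csv")   # hypothetical cohort extract
        X = df.drop(columns=["stillbirth"])           # maternal, medical, obstetric predictors
        y = df["stillbirth"]                          # 1 = stillbirth, 0 = livebirth

        # Stillbirth is rare (~0.6% of this cohort), so reweight the positive class.
        model = XGBClassifier(
            n_estimators=500,
            scale_pos_weight=(y == 0).sum() / (y == 1).sum(),
        )

        # 10-fold cross-validation, scored on recall (sensitivity), the metric
        # the abstract reports (~45% of stillbirths identified).
        cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
        sens = cross_val_score(model, X, y, cv=cv, scoring="recall")
        print(f"mean 10-fold sensitivity: {sens.mean():.2f}")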

    Empirical Phi-Discrepancies and Quasi-Empirical Likelihood: Exponential Bounds

    We review some recent extensions of the so-called generalized empirical likelihood method, in which the Kullback distance is replaced by a general convex divergence. We propose to use, instead of empirical likelihood, a regularized or quasi-empirical likelihood method corresponding to a convex combination of the Kullback and χ² discrepancies. We show that for an adequate choice of the weight in this combination, the corresponding quasi-empirical likelihood is Bartlett-correctable. We also establish non-asymptotic exponential bounds for the confidence regions obtained by this method. These bounds are derived via bounds for self-normalized sums in the multivariate case, obtained in previous work by the authors. We also show that results of this kind extend to process-valued, infinite-dimensional parameters. In that case, known results about self-normalized processes may be used to control the behavior of the generalized empirical likelihood.
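
    A sketch of the construction in standard φ-divergence notation (the mixing weight λ and the generator formulas below are the usual conventions, assumed here rather than taken from the paper):

        \[
        \varphi_{\lambda}(x) = \lambda\,\varphi_{\mathrm{KL}}(x) + (1-\lambda)\,\varphi_{\chi^2}(x),
        \qquad 0 < \lambda < 1,
        \]

    with \(\varphi_{\mathrm{KL}}(x) = x\log x - x + 1\) and \(\varphi_{\chi^2}(x) = (x-1)^2/2\); the claim is that an adequate choice of λ makes the resulting quasi-empirical likelihood Bartlett-correctable.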

    Circumstellar interaction in supernovae in dense environments - an observational perspective

    In a supernova explosion, the ejecta interacting with the surrounding circumstellar medium (CSM) give rise to a variety of radiation. Since the CSM is created from mass lost by the progenitor star, it carries footprints of the star's late-time evolution; this is one of the few ways to get a handle on the nature of the progenitor star system. Here I focus mainly on the supernovae (SNe) exploding in dense environments, a.k.a. Type IIn SNe. Radio and X-ray emission from this class of SNe have revealed important modifications of their radiation properties due to the presence of high-density CSM. Forward-shock dominance of the X-ray emission, internal free-free absorption of the radio emission, episodic or non-steady mass loss, and asymmetry in the explosion seem to be common properties of this class of SNe.
    Comment: Fixed minor typos. 31 pages, 9 figures; accepted for publication in Space Science Reviews. Chapter in the International Space Science Institute (ISSI) book on "Supernovae", to be published in Space Science Reviews by Springer.

    Towards Machine Wald

    The past century has seen a steady increase in the need to estimate and predict complex systems and to make (possibly critical) decisions with limited information. Although computers have made possible the numerical evaluation of sophisticated statistical models, these models are still designed by humans, because there is currently no known recipe or algorithm for dividing the design of a statistical model into a sequence of arithmetic operations. Indeed, enabling computers to think as humans do when faced with uncertainty is challenging in several major ways: (1) finding optimal statistical models has yet to be formulated as a well-posed problem when information on the system of interest is incomplete and comes as a complex combination of sample data, partial knowledge of constitutive relations, and a limited description of the distribution of input random variables; (2) the space of admissible scenarios, along with the space of relevant information, assumptions, and/or beliefs, tends to be infinite-dimensional, whereas calculus on a computer is necessarily discrete and finite. To this end, this paper explores the foundations of a rigorous framework for the scientific computation of optimal statistical estimators/models and reviews their connections with Decision Theory, Machine Learning, Bayesian Inference, Stochastic Optimization, Robust Optimization, Optimal Uncertainty Quantification, and Information-Based Complexity.
    Comment: 37 pages.
