
    Decision making and uncertainty modeling for the multi-criteria analysis of complex energy systems

    This Ph.D. work addresses the vulnerability analysis of safety-critical systems (e.g., nuclear power plants) within a framework that combines the disciplines of risk analysis and multi-criteria decision-making. The scientific contribution follows four directions: (i) a quantitative hierarchical model is developed to characterize the susceptibility of safety-critical systems to multiple types of hazard, within the 'all-hazard' view of the problem currently emerging in the risk analysis field; (ii) the quantitative assessment of vulnerability is tackled by an empirical classification framework: to this aim, a model relying on the Majority Rule Sorting (MR-Sort) method, typically used in the decision analysis field, is built on the basis of a (limited-size) set of data representing (a priori known) vulnerability classification examples; (iii) three different approaches (namely, a model-retrieval-based method, the Bootstrap method and the leave-one-out cross-validation technique) are developed and applied to provide a quantitative assessment of the performance of the classification model (in terms of accuracy and confidence in the assignments), accounting for the uncertainty introduced into the analysis by the empirical construction of the vulnerability model; (iv) on the basis of the models developed, an inverse classification problem is solved to identify a set of protective actions that effectively reduce the level of vulnerability of the critical system under consideration. Two approaches are developed to this aim: the former is based on a novel sensitivity indicator, the latter on optimization. Applications to fictitious and real case studies in the nuclear power plant risk field demonstrate the effectiveness of the proposed methodology.
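
The MR-Sort assignment rule mentioned in the abstract can be sketched in a few lines: an alternative is placed in an ordered category when a sufficiently weighted coalition of criteria rates it at least as well as the category's lower boundary profile. The profiles, weights, and majority threshold below are illustrative placeholders, not values from the thesis.

```python
def mr_sort(x, profiles, weights, majority):
    """Pessimistic MR-Sort: assign alternative x to an ordered category.

    profiles[k] is the lower boundary of category k+1 (categories are
    numbered 0..len(profiles), from worst to best).  x is assigned to the
    highest category whose boundary it outranks, i.e. for which the total
    weight of criteria on which x meets the boundary reaches the
    majority threshold.
    """
    category = 0
    for k, boundary in enumerate(profiles):
        support = sum(w for xi, bi, w in zip(x, boundary, weights) if xi >= bi)
        if support >= majority:
            category = k + 1
        else:
            break
    return category
```

For example, with criteria weights (0.5, 0.3, 0.2), boundary profiles (5, 5, 5) and (8, 8, 8), and a majority threshold of 0.6, an alternative scoring (9, 9, 9) lands in the top category, while (6, 6, 2) clears only the first boundary.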

    Incorporate Modeling Uncertainty into the Decision Making of Passive System Reliability Assessment

    Passive safety features will play a crucial role in the development of future-generation nuclear power plant technologies. However, because of insufficient operating experience, research and validation are still necessary to prove the actual performance and reliability of passive systems. Uncertainty sources that influence the reliability and performance of such systems can be divided into two groups: modeling uncertainties and parametric uncertainties. To date, researchers have largely established ways to quantify the effects of parametric uncertainties, e.g., the variation of physical parameters (environment temperature, fabrication error, etc.), and have already obtained a number of results. In addition to parametric uncertainty, modeling uncertainty, e.g., uncertain physical phenomena or uncertainties introduced by different modeling techniques, is also an important contributor to passive system performance. There are, however, no mature approaches for taking the effect of such factors into account. In this paper, a survey of research on modeling uncertainty from the open literature is presented. Based on the survey and the ensuing discussion, a framework is proposed to incorporate modeling uncertainty into the decision making of passive system reliability assessment
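
One common way to combine the two groups of uncertain factors is to treat the candidate physical models as a discrete random variable alongside the continuous parameters and propagate both by Monte Carlo sampling. The sketch below is a toy illustration of that idea, not the framework proposed in the paper; the two safety-margin models, their weights, and the ambient-temperature distribution are assumed purely for the example.

```python
import random

def failure_probability(models, model_weights, n_samples=20000, seed=1):
    """Monte Carlo estimate of failure probability under both
    modeling uncertainty (which margin model is correct) and
    parametric uncertainty (the sampled ambient temperature)."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_samples):
        model = rng.choices(models, weights=model_weights)[0]  # modeling uncertainty
        t_ambient = rng.gauss(300.0, 5.0)                      # parametric uncertainty [K]
        if model(t_ambient) < 0:                               # negative margin = failure
            failures += 1
    return failures / n_samples

# Two hypothetical safety-margin models for a passive cooling loop.
optimistic = lambda t: 350.0 - t     # fails only for extreme temperatures
conservative = lambda t: 310.0 - t   # fails roughly two sigma above the mean

p_fail = failure_probability([optimistic, conservative], [0.5, 0.5])
```

Here the estimate is dominated by the conservative model: roughly half the samples use it, and about 2% of those draw a temperature above 310 K, so the estimate comes out near 0.01.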

    Symmetry and designability for lattice protein models

    Native protein folds often have a high degree of symmetry. We study the relationship between the symmetries of native proteins and their designabilities -- how many different sequences encode a given native structure. Using a two-dimensional lattice protein model based on hydrophobicity, we find that those native structures that are encoded by the largest number of different sequences have high symmetry. However, only certain symmetries are enhanced, e.g., x/y-mirror symmetry and 180° rotation, while others are suppressed. If it takes a large number of mutations to destabilize the native state of a protein, then, by definition, the state is highly designable. Hence, our findings imply that insensitivity to mutation goes hand in hand with high symmetry. The relationship between designability and symmetry appears to arise because protein substructures are also designable. Native protein folds may therefore be symmetric because they are composed of repeated designable substructures. (Comment: 13 pages, 10 figures)
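
The designability count described above can be reproduced in miniature: enumerate all compact chains on a 3x3 lattice, score every binary hydrophobic/polar sequence by its number of H-H contacts, and credit a contact map each time it is the unique lowest-energy fold of a sequence. This is a generic HP-style sketch, not the authors' exact model or energy function.

```python
from itertools import product

N = 3  # 3x3 lattice: compact chains of length 9

def compact_chains():
    """All directed self-avoiding walks that visit every site of the grid."""
    chains = []
    def extend(path, visited):
        if len(path) == N * N:
            chains.append(tuple(path))
            return
        x, y = path[-1]
        for step in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= step[0] < N and 0 <= step[1] < N and step not in visited:
                visited.add(step)
                path.append(step)
                extend(path, visited)
                path.pop()
                visited.remove(step)
    for start in product(range(N), repeat=2):
        extend([start], {start})
    return chains

def contact_pairs(chain):
    """Monomer index pairs adjacent on the lattice but not along the chain."""
    index = {site: i for i, site in enumerate(chain)}
    pairs = set()
    for i, (x, y) in enumerate(chain):
        for nb in ((x + 1, y), (x, y + 1)):
            j = index.get(nb)
            if j is not None and abs(i - j) > 1:
                pairs.add((min(i, j), max(i, j)))
    return frozenset(pairs)

def designabilities():
    """Count, for each distinct contact map, the sequences for which it
    is the unique ground state (energy = -1 per H-H contact)."""
    structures = sorted({contact_pairs(c) for c in compact_chains()},
                        key=sorted)
    counts = [0] * len(structures)
    for seq in product((0, 1), repeat=N * N):  # 1 = hydrophobic (H)
        energies = [-sum(seq[i] * seq[j] for i, j in s) for s in structures]
        best = min(energies)
        winners = [k for k, e in enumerate(energies) if e == best]
        if len(winners) == 1:
            counts[winners[0]] += 1
    return counts
```

Structures are deduplicated by contact map rather than by walk, since all lattice rotations and reflections of a walk share the same index-level contacts; a fuller treatment would also merge maps related by chain reversal, which is exactly where symmetry statistics of the kind studied in the paper enter.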

    Deep learning-based holographic polarization microscopy

    Polarized light microscopy provides high contrast for birefringent specimens and is widely used as a diagnostic tool in pathology. However, polarization microscopy systems typically operate by analyzing images collected from two or more light paths in different states of polarization, which leads to relatively complex optical designs, high system costs, or the need for experienced technicians. Here, we present a deep learning-based holographic polarization microscope that is capable of obtaining quantitative birefringence retardance and orientation information of a specimen from a phase-recovered hologram, while only requiring the addition of one polarizer/analyzer pair to an existing holographic imaging system. Using a deep neural network, the reconstructed holographic images from a single state of polarization can be transformed into images equivalent to those captured using a single-shot computational polarized light microscope (SCPLM). Our analysis shows that a trained deep neural network can extract the birefringence information using both the sample-specific morphological features and the holographic amplitude and phase distribution. To demonstrate the efficacy of this method, we tested it by imaging various birefringent samples, including monosodium urate (MSU) and triamcinolone acetonide (TCA) crystals. Our method achieves similar results to SCPLM both qualitatively and quantitatively, and due to its simpler optical design and significantly larger field-of-view, it has the potential to expand access to polarization microscopy and its use for medical diagnosis in resource-limited settings. (Comment: 20 pages, 8 figures)
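
The contrast mechanism that both SCPLM and the learned reconstruction rely on can be written down directly with Jones calculus: for a birefringent sample between crossed polarizers, the transmitted intensity depends on the retardance δ and fast-axis orientation θ as I = sin²(2θ)·sin²(δ/2). The sketch below verifies that closed form numerically; it illustrates only the underlying optics, not the network or microscope in the paper.

```python
import numpy as np

def crossed_polarizer_intensity(retardance, orientation):
    """Intensity after x-polarizer -> linear retarder -> y-analyzer,
    for unit-intensity input light (Jones calculus)."""
    polarizer = np.array([[1, 0], [0, 0]], dtype=complex)  # pass x
    analyzer = np.array([[0, 0], [0, 1]], dtype=complex)   # pass y (crossed)
    c, s = np.cos(orientation), np.sin(orientation)
    rot = np.array([[c, -s], [s, c]])
    retarder = np.diag([np.exp(-1j * retardance / 2),
                        np.exp(+1j * retardance / 2)])
    jones = analyzer @ rot @ retarder @ rot.T @ polarizer
    e_out = jones @ np.array([1.0, 0.0])
    return float(np.vdot(e_out, e_out).real)
```

A half-wave retardance (δ = π) at 45° rotates the polarization fully onto the analyzer (intensity 1), while a sample aligned with the polarizer (θ = 0) stays dark whatever its retardance; this angle/retardance coupling is what the network must disentangle from a single polarization state.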

    Semiconductor Surface Studies

    Contains an introduction, reports on two research projects and a list of publications. Joint Services Electronics Program Grant DAAH04-95-1-003

    A method to measure heat flux in convection using Gardon gauge

    This work was supported by the National Natural Science Foundation of China (No. 51576110), the Science Fund for Creative Research Groups of the National Natural Science Foundation of China (No. 51321002) and the Program for New Century Excellent Talents in University (NCET-13-0315)

    Robust estimation of bacterial cell count from optical density

    Optical density (OD) is widely used to estimate the density of cells in liquid culture, but cannot be compared between instruments without a standardized calibration protocol and is challenging to relate to actual cell count. We address this with an interlaboratory study comparing three simple, low-cost, and highly accessible OD calibration protocols across 244 laboratories, applied to eight strains of constitutively GFP-expressing E. coli. Based on our results, we recommend calibrating OD to estimated cell count using serial dilution of silica microspheres, which produces highly precise calibration (95.5% of residuals <1.2-fold), is easily assessed for quality control, also assesses the instrument's effective linear range, and can be combined with fluorescence calibration to obtain units of Molecules of Equivalent Fluorescein (MEFL) per cell, allowing direct comparison and data fusion with flow cytometry measurements: in our study, fluorescence per cell measurements showed only a 1.07-fold mean difference between plate reader and flow cytometry data
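
Numerically, the recommended microsphere calibration reduces to fitting a single proportionality constant between background-corrected OD and known particle count over the instrument's linear range, then applying it to new readings. A minimal sketch, with made-up dilution data standing in for a real serial-dilution series:

```python
import numpy as np

def fit_od_calibration(od_readings, particle_counts):
    """Least-squares slope through the origin: count ~= k * OD.
    Inputs should already be background-subtracted and restricted
    to the instrument's linear range."""
    od = np.asarray(od_readings, dtype=float)
    counts = np.asarray(particle_counts, dtype=float)
    k = float(np.dot(od, counts) / np.dot(od, od))
    return k

def od_to_count(od, k):
    """Convert an OD reading to an estimated particle count."""
    return k * od

# Hypothetical serial dilution of microspheres with known counts.
od = [0.05, 0.1, 0.2, 0.4]
counts = [5e7, 1e8, 2e8, 4e8]
k = fit_od_calibration(od, counts)  # particles per OD unit for this instrument
```

Forcing the fit through the origin matches the expectation that zero particles give zero background-corrected OD, and the residuals against the fitted line provide exactly the fold-change quality-control check the study reports.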

    Search for dark matter produced in association with bottom or top quarks in √s = 13 TeV pp collisions with the ATLAS detector

    A search for weakly interacting massive particle dark matter produced in association with bottom or top quarks is presented. Final states containing third-generation quarks and missing transverse momentum are considered. The analysis uses 36.1 fb−1 of proton–proton collision data recorded by the ATLAS experiment at √s = 13 TeV in 2015 and 2016. No significant excess of events above the estimated backgrounds is observed. The results are interpreted in the framework of simplified models of spin-0 dark-matter mediators. For colour-neutral spin-0 mediators produced in association with top quarks and decaying into a pair of dark-matter particles, mediator masses below 50 GeV are excluded assuming a dark-matter candidate mass of 1 GeV and unitary couplings. For scalar and pseudoscalar mediators produced in association with bottom quarks, the search sets limits on the production cross-section of 300 times the predicted rate for mediators with masses between 10 and 50 GeV and assuming a dark-matter mass of 1 GeV and unitary coupling. Constraints on colour-charged scalar simplified models are also presented. Assuming a dark-matter particle mass of 35 GeV, mediator particles with mass below 1.1 TeV are excluded for couplings yielding a dark-matter relic density consistent with measurements