
    ELM regime classification by conformal prediction on an information manifold

    Characterization and control of plasma instabilities known as edge-localized modes (ELMs) are crucial for the operation of fusion reactors. Recently, machine learning methods have demonstrated good potential for making useful inferences from stochastic fusion data sets. However, traditional classification methods do not offer an inherent estimate of the goodness of their predictions. In this paper, a distance-based conformal predictor classifier integrated with a geometric-probabilistic framework is presented. The first benefit of the approach lies in its comprehensive treatment of highly stochastic fusion data sets, achieved by modeling the measurements with probability distributions in a metric space. This enables calculation of a natural distance measure between probability distributions: the Rao geodesic distance. Second, the predictions are accompanied by estimates of their accuracy and reliability. The method is applied to the classification of regimes characterized by different types of ELMs, based on measurements of global parameters and their error bars. This yields promising success rates and outperforms state-of-the-art automatic techniques for recognizing ELM signatures. The estimates of the goodness of the predictions increase the confidence of ELM experts in the classification, allow more reliable decisions regarding plasma control, and increase the robustness of the control system.
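    To illustrate the kind of computation involved, the sketch below combines the closed-form Rao geodesic distance between univariate Gaussians (each measurement summarized by a mean and an error bar) with a simple nearest-neighbour conformal predictor. The regime labels, toy data, and the particular nonconformity score are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def rao_distance(mu1, sigma1, mu2, sigma2):
    """Fisher-Rao geodesic distance between two univariate Gaussians,
    via the hyperbolic (Poincare half-plane) geometry of the Gaussian manifold."""
    num = (mu1 - mu2) ** 2 / 2.0 + (sigma1 - sigma2) ** 2
    return np.sqrt(2.0) * np.arccosh(1.0 + num / (2.0 * sigma1 * sigma2))

def conformal_classify(train, labels, test_point, classes, alpha=0.1):
    """Distance-based conformal prediction with a 1-nearest-neighbour
    nonconformity score: distance to the nearest same-class example."""
    def score(point, cls):
        d = [rao_distance(*point, *t)
             for t, l in zip(train, labels) if l == cls and t != point]
        return min(d) if d else np.inf

    prediction_set, p_values = [], {}
    for cls in classes:
        # Nonconformity of the test point if labelled `cls`, compared with
        # the (leave-one-out) nonconformity of the training examples of `cls`.
        test_score = score(test_point, cls)
        cls_scores = [score(t, cls) for t, l in zip(train, labels) if l == cls]
        p = (sum(s >= test_score for s in cls_scores) + 1) / (len(cls_scores) + 1)
        p_values[cls] = p
        if p > alpha:
            prediction_set.append(cls)
    return prediction_set, p_values

# Toy example: two ELM regimes summarized as (mean, error bar) pairs.
# With only two calibration points per class the p-values are necessarily coarse.
train = [(1.0, 0.2), (1.1, 0.25), (3.0, 0.3), (3.2, 0.35)]
labels = ["type-I", "type-I", "type-III", "type-III"]
print(conformal_classify(train, labels, (1.05, 0.22), ["type-I", "type-III"]))
```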

    The validation of the rating for sustainable subdivision neighbourhood design (RSSND) in Bangkok Metropolitan Region (BMR), Thailand

    In recent years, the problems resulting from unsustainable subdivision development have become significant in the Bangkok Metropolitan Region (BMR), Thailand. A number of government departments and agencies have tried to address these problems by introducing rating tools that encourage higher sustainability levels in subdivision development in the BMR, such as the Environmental Impact Assessment Monitoring Award (EIA-MA) and the Thai's Rating for Energy and Environmental Sustainability of New construction and major renovation (TREES-NC). However, while the EIA-MA includes neighbourhood designs in its assessment criteria, this requirement applies to large projects only. Meanwhile, TREES-NC focuses only on large-scale buildings such as condominiums and office buildings, and is not specific to subdivision neighbourhood designs. Recently, a new rating tool named the "Rating for Subdivision Neighbourhood Sustainability Design (RSNSD)" has been developed, but a validation process for the RSNSD is still required. This paper aims to validate this new rating tool for subdivision neighbourhood design in the BMR. The RSNSD has been validated by applying the rating tool to eight case-study subdivisions. The RSNSD results, generated from data collected by surveying the subdivisions, are compared with the existing results from the EIA-MA. The selected cases include one "Excellent Award" subdivision, two "Very Good Award" subdivisions, and five non-rated subdivision developments. This paper aims to establish the credibility of the RSNSD before introducing it into real subdivision development practice. The RSNSD could be useful for encouraging higher levels of sustainable subdivision design and thereby preventing problems arising from further subdivision development in the BMR.

    Analysis reuse exploiting taxonomical information and belief assignment in industrial problem solving

    Taking into account experience feedback on solving complex problems in business is regarded as a way to improve the quality of products and processes. Only a few academic works, however, are concerned with the representation and instrumentation of experience feedback systems. We propose, in this paper, a model of experiences and mechanisms to use these experiences. More specifically, we wish to encourage the reuse of expert analyses already performed, in order to propose an a priori analysis when solving a new problem. The proposal is based on a representation of the context of the experience using a conceptual marker, and on an explicit representation of the analysis incorporating expert opinions and the fusion of these opinions. The experience feedback models and inference mechanisms are integrated into a commercial support tool for problem-solving methodologies. The results obtained to this point have already led to the definition of the role of "Rex Manager", with principles of sustainable management for the continuous improvement of industrial processes in companies.
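    The fusion of expert opinions mentioned above is commonly carried out with belief functions. The sketch below shows Dempster's rule of combination applied to two hypothetical basic belief assignments over candidate root causes; the frame of discernment and the mass values are made up for illustration and do not reproduce the authors' tool.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two basic belief assignments (dicts mapping frozensets of
    hypotheses to masses) with Dempster's rule, renormalizing the conflict."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("Experts are in total conflict")
    return {s: m / (1.0 - conflict) for s, m in combined.items()}

# Two hypothetical expert opinions about the root cause of a quality problem
# (candidate causes: machine, operator, material).
expert1 = {frozenset({"machine"}): 0.6,
           frozenset({"machine", "operator"}): 0.4}
expert2 = {frozenset({"machine"}): 0.5,
           frozenset({"material"}): 0.2,
           frozenset({"machine", "operator", "material"}): 0.3}
print(dempster_combine(expert1, expert2))
```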

    Continuous Improvement Through Knowledge-Guided Analysis in Experience Feedback

    Continuous improvement of industrial processes is increasingly a key element of competitiveness for industrial systems. The management of experience feedback in this framework is designed to build, analyze and facilitate knowledge sharing among the problem-solving practitioners of an organization in order to improve process and product achievement. During Problem Solving Processes, the intellectual investment of experts is often considerable, and the opportunities for exploiting expert knowledge are numerous: decision making, problem solving under uncertainty, and expert configuration. In this paper, our contribution relates to the structuring of a cognitive experience feedback framework, which allows a flexible exploitation of expert knowledge during Problem Solving Processes and the reuse of the experience thus collected. To that purpose, the proposed approach uses the general principles of root cause analysis for identifying the root causes of problems or events, the conceptual graph formalism for the semantic conceptualization of the domain vocabulary, and the Transferable Belief Model for the fusion of information from different sources. The underlying formal reasoning mechanisms (logic-based semantics) in conceptual graphs enable intelligent information retrieval for the effective exploitation of lessons learned from past projects. An example illustrates the application of the proposed formalization of experience feedback processes in the transport industry sector.
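    For the Transferable Belief Model step, a minimal sketch is given below: the unnormalized conjunctive rule keeps conflicting mass on the empty set, and the pignistic transformation converts the combined masses into probabilities for decision making. The sources, hypotheses, and mass values are hypothetical, not taken from the paper.

```python
from itertools import product

def conjunctive_combine(m1, m2):
    """Unnormalized conjunctive rule of the Transferable Belief Model:
    conflicting mass is kept on the empty set instead of being renormalized."""
    out = {}
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b  # empty frozenset represents conflict
        out[inter] = out.get(inter, 0.0) + ma * mb
    return out

def pignistic(m):
    """Pignistic transformation: spread each focal mass uniformly over its
    elements to obtain a probability distribution for decision making."""
    empty_mass = m.get(frozenset(), 0.0)
    betp = {}
    for focal, mass in m.items():
        if not focal:
            continue
        for x in focal:
            betp[x] = betp.get(x, 0.0) + mass / (len(focal) * (1.0 - empty_mass))
    return betp

# Hypothetical belief assignments from two information sources about a root cause.
src1 = {frozenset({"design"}): 0.7, frozenset({"design", "process"}): 0.3}
src2 = {frozenset({"process"}): 0.4, frozenset({"design", "process"}): 0.6}
combined = conjunctive_combine(src1, src2)
print(combined)             # includes any mass assigned to the empty set (conflict)
print(pignistic(combined))  # probabilities used to pick the most likely cause
```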

    Model Verification and Validation Concepts for a Probabilistic Fracture Assessment Model to Predict Cracking of Knife Edge Seals in the Space Shuttle Main Engine High Pressure Oxidizer

    Physics-based models are routinely used to predict the performance of engineered systems and to inform decisions such as when to retire system components, how to extend the life of an aging system, or whether a new design will be safe or available. Model verification and validation (V&V) is a process for establishing credibility in model predictions. Ideally, carefully controlled validation experiments are designed and performed to validate models or submodels. In reality, time and cost constraints limit experiments and even model development. This paper describes elements of model V&V during the development and application of a probabilistic fracture assessment model that predicts cracking in space shuttle main engine high-pressure oxidizer turbopump knife-edge seals. The objective of this effort was to assess the probability of initiating and growing a crack to a specified failure length in specific flight units under different usage and inspection scenarios. The probabilistic fracture assessment model developed in this investigation combines a series of submodels describing the usage, temperature history, flutter tendencies, tooth stresses and numbers of cycles, fatigue cracking, nondestructive inspection, and finally the probability of failure. The analysis accounts for unit-to-unit variations in temperature, flutter limit state, flutter stress magnitude, and fatigue life properties. The investigation focused on the calculation of relative risk between the usage scenarios rather than absolute risk. Verification predictions were first performed for three units with known usage and cracking histories to establish credibility in the model predictions. Then, numerous predictions were performed for an assortment of operating units that had flown recently or were projected for future flights. Calculations were performed using two NASA-developed software tools: NESSUS® for the probabilistic analysis and NASGRO® for the fracture mechanics analysis. The goal of these predictions was to provide additional information to guide decisions on the potential reuse of existing and installed units prior to certification of the new design.
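    To make the structure of such a probabilistic fracture calculation concrete, the following is a minimal Monte Carlo sketch rather than the NESSUS/NASGRO implementation: it samples unit-to-unit variability in initial flaw size, stress range, and Paris-law coefficient, uses the closed-form Paris-law integration for cycles to failure, and estimates the probability of reaching a hypothetical failure length under two usage scenarios. All distributions and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def cycles_to_failure(a0, a_fail, dsigma, C, m=3.0):
    """Cycles to grow a crack from a0 to a_fail under the Paris law
    da/dN = C*(dK)^m with dK = dsigma*sqrt(pi*a) (unit geometry factor).
    Closed-form integration, valid for m != 2."""
    k = 1.0 - m / 2.0
    return (a_fail ** k - a0 ** k) / (k * C * (dsigma * np.sqrt(np.pi)) ** m)

def probability_of_failure(n_cycles, a_fail=5e-3, n_samples=100_000):
    """Monte Carlo estimate of P(crack reaches a_fail within n_cycles),
    with illustrative unit-to-unit variability in flaw size, stress, and C."""
    a0 = rng.lognormal(np.log(2e-4), 0.3, n_samples)   # initial flaw size [m]
    dsigma = rng.normal(300.0, 30.0, n_samples)        # stress range [MPa]
    C = rng.lognormal(np.log(1e-11), 0.4, n_samples)   # Paris coefficient
    n_fail = cycles_to_failure(a0, a_fail, dsigma, C)
    return np.mean(n_fail <= n_cycles)

# Relative risk of two hypothetical usage scenarios (different cycle counts).
print(probability_of_failure(50_000), probability_of_failure(150_000))
```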

    Climate Change Uncertainty Quantification: Lessons Learned from the Joint EU-USNRC Project on Uncertainty Analysis of Probabilistic Accident Consequence Codes

    Between 1990 and 2000, the U.S. Nuclear Regulatory Commission and the Commission of the European Communities conducted a joint uncertainty analysis of accident consequences for nuclear power plants. This study remains a benchmark for the uncertainty analysis of large models involving high risks, high public visibility, and substantial uncertainty. The study set standards with regard to structured expert judgment, performance assessment, dependence elicitation and modeling, and the uncertainty propagation of high-dimensional distributions with complex dependence. The integrated assessment models for the economic effects of climate change also involve high risks and large uncertainties, and interest in conducting a proper uncertainty analysis is growing. This article reviews the EU-USNRC effort and extracts lessons learned, with a view toward informing a comparable effort for the economic effects of climate change.
    Keywords: uncertainty analysis, expert judgment, expert elicitation, probabilistic inversion, dependence modeling, nuclear safety
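    The structured expert judgment approach used in such studies combines experts with performance-based weights of the kind used in Cooke's classical model. A minimal sketch of the pooling step is given below, assuming each expert has supplied 5%, 50% and 95% quantiles for an uncertain quantity and that the performance weights have already been computed elsewhere; the quantiles, weights, bounds, and the piecewise-linear interpolation of the expert distributions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_expert(quantiles, n, lower, upper):
    """Sample from a piecewise-linear CDF through an expert's (5%, 50%, 95%)
    quantiles, bounded by an agreed intrinsic range [lower, upper]."""
    probs = np.array([0.0, 0.05, 0.50, 0.95, 1.0])
    values = np.array([lower, *quantiles, upper])
    u = rng.uniform(size=n)
    return np.interp(u, probs, values)  # inverse-CDF sampling

def pooled_quantiles(experts, weights, lower, upper, n=100_000):
    """Performance-weighted linear pool: mix the experts' distributions
    with the given weights and report pooled 5%, 50%, 95% quantiles."""
    counts = rng.multinomial(n, weights)
    samples = np.concatenate([
        sample_expert(q, c, lower, upper) for q, c in zip(experts, counts)
    ])
    return np.percentile(samples, [5, 50, 95])

# Hypothetical 5%/50%/95% assessments of an uncertain quantity (arbitrary units),
# with performance-based weights assumed to have been computed elsewhere.
experts = [(1.0, 4.0, 10.0), (2.0, 5.0, 20.0), (0.5, 3.0, 8.0)]
weights = [0.5, 0.3, 0.2]
print(pooled_quantiles(experts, weights, lower=0.0, upper=50.0))
```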

    Assessment of sensor performance

    There is an international commitment to develop a comprehensive, coordinated and sustained ocean observation system. However, a foundation for any observing, monitoring or research effort is effective and reliable in situ sensor technologies that accurately measure key environmental parameters. Ultimately, the data used for modelling efforts, management decisions and rapid responses to ocean hazards are only as good as the instruments that collect them. There is also a compelling need to develop and incorporate new or novel technologies to improve all aspects of existing observing systems and to meet various emerging challenges. Assessment of Sensor Performance was a cross-cutting issues session at the international OceanSensors08 workshop in Warnemünde, Germany, and its themes also run through several of the papers published as a result of the workshop (Denuault, 2009; Kröger et al., 2009; Zielinski et al., 2009). The discussions focused on how best to classify and validate the instruments required for effective and reliable ocean observations and research. The following is a summary of the discussions and conclusions drawn from this workshop, which specifically addresses the characterisation of sensor systems, technology readiness levels, verification of sensor performance, and the quality management of sensor systems.