
    A Sequential Inspection Procedure for Fault Detection in Multistage Manufacturing Processes

    Fault diagnosis in multistage manufacturing processes (MMPs) is a challenging task, and most of the research in the literature relies on a predefined inspection scheme to identify the sources of variation and make the process diagnosable. This paper proposes a sequential inspection procedure that detects process faults using a sequential testing algorithm and a minimum monitoring system. After the monitoring system detects that the process is out of statistical control, the features to inspect (end-of-line or in-process measurements) are selected sequentially according to the expected information gain of each potential inspection measurement. A case study demonstrates the benefits of this approach over a predefined inspection scheme and a randomized sequential inspection, considering both the use and non-use of fault probabilities derived from historical maintenance data.
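The expected-information-gain selection described in the abstract can be illustrated with a minimal sketch: given a prior over fault hypotheses and an outcome model for each candidate measurement, the next feature to inspect is the one whose outcomes reduce the fault-hypothesis entropy most in expectation. The fault names, probabilities, and outcome models below are invented for illustration; they are not taken from the paper.

```python
import math

def entropy(p):
    """Shannon entropy (bits) of a discrete fault-probability distribution."""
    return -sum(q * math.log2(q) for q in p.values() if q > 0)

def expected_information_gain(prior, outcome_model):
    """Expected reduction in fault-hypothesis entropy from one measurement.

    outcome_model maps each outcome to (probability of outcome,
    posterior distribution over faults given that outcome).
    """
    h0 = entropy(prior)
    return h0 - sum(p_out * entropy(post)
                    for p_out, post in outcome_model.values())

# Hypothetical two-fault example: measurement M1 fully separates the
# faults, while M2 is only weakly informative.
prior = {"F1": 0.5, "F2": 0.5}
m1 = {"pass": (0.5, {"F1": 1.0, "F2": 0.0}),
      "fail": (0.5, {"F1": 0.0, "F2": 1.0})}
m2 = {"pass": (0.5, {"F1": 0.6, "F2": 0.4}),
      "fail": (0.5, {"F1": 0.4, "F2": 0.6})}

gains = {"M1": expected_information_gain(prior, m1),
         "M2": expected_information_gain(prior, m2)}
best = max(gains, key=gains.get)  # inspect the most informative feature first
```

In this toy case M1 yields one full bit of expected gain and would be scheduled first; after its result is observed, the posterior becomes the new prior and the scoring repeats over the remaining features.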

    Data generation and model usage for machine learning-based dynamic security assessment and control

    The global effort to decarbonise, decentralise and digitise electricity grids in response to climate change and evolving electricity markets with active consumers (prosumers) is gaining traction in countries around the world. This effort introduces new challenges to electricity grid operation. For instance, the introduction of variable renewable energy generation such as wind and solar to replace conventional power generation such as oil, gas, and coal increases the uncertainty in power system operation. Additionally, the dynamics introduced by these renewable energy sources, which are interfaced through converters, are much faster than those in conventional systems with thermal power plants. This thesis investigates data-driven operating tools that help the system operator manage the increased operational uncertainty in this transition. The presented work aims to answer open questions regarding the implementation of these machine learning approaches in real-time operation, primarily the quality of the training data needed to train accurate machine-learned models for predicting dynamic behaviour, and the use of these machine-learned models in the control room for real-time operation. To answer the first question, this thesis presents a novel sampling approach for generating 'rare' operating conditions that are physically feasible but have not been experienced by power systems before. In so doing, the aim is to move away from historical observations, which are often limited in describing the full range of operating conditions. The thesis then presents a novel approach based on the Wasserstein distance and entropy to efficiently combine historical and 'rare' operating conditions into an enriched database capable of training a high-performance classifier.
    To answer the second question, this thesis presents a scalable and rigorous workflow for trading off multiple objective criteria when choosing decision tree models for real-time operation by system operators. It then showcases a practical implementation that uses a machine-learned model to optimise power system operation cost through topological control actions. The crucial role of machine learning in securing low-inertia systems motivates future research, and the thesis outlines research gaps covering physics-informed learning, machine learning-based network planning for secure operation, and robust training datasets.
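As a rough illustration of the distance computation underlying the database-enrichment step, the 1-D Wasserstein-1 distance between two equal-size samples (e.g. historical versus synthetic 'rare' operating conditions) can be computed directly from the sorted samples. The feature, distributions, and seed below are hypothetical, and the thesis's actual combination rule, which also involves entropy, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical 1-D operating-condition feature, e.g. normalised system load.
historical = rng.normal(1.0, 0.1, 1000)  # conditions seen in past operation
rare = rng.normal(1.3, 0.1, 1000)        # synthetic 'rare' but feasible conditions

# For equal-size 1-D samples, the Wasserstein-1 distance equals the mean
# absolute difference between the sorted samples (matched quantiles).
w1 = np.mean(np.abs(np.sort(historical) - np.sort(rare)))
```

Because the two hypothetical distributions differ only by a 0.3 shift in mean, the computed distance comes out close to 0.3; a larger distance indicates that the synthetic conditions add genuinely new coverage relative to the historical database.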

    Resource Allocation Framework: Validation of Numerical Models of Complex Engineering Systems against Physical Experiments

    An increasing reliance on complex numerical simulations for high-consequence decision making is the motivation for experiment-based validation and uncertainty quantification to assess, and when needed, improve the predictive capabilities of numerical models. Uncertainties and biases in model predictions can be reduced by taking two distinct actions: (i) increasing the number of experiments in the model calibration process, and/or (ii) improving the physics sophistication of the numerical model. Decision makers must therefore select between further code development and experimentation while allocating a finite amount of available resources. This dissertation presents a novel framework to assist in this selection between experimentation and code development for model validation strictly from the perspective of predictive capability. The reduction and convergence of the discrepancy bias between model prediction and observation, computed using a suitable convergence metric, play a key role in the conceptual formulation of the framework. The proposed framework is demonstrated using two non-trivial case study applications on the Preston-Tonks-Wallace (PTW) code, a continuum-based plasticity approach to modeling metals, and the ViscoPlastic Self-Consistent (VPSC) code, a mesoscopic plasticity approach to modeling crystalline materials. Results show that the developed resource allocation framework is effective and efficient in path selection (i.e., experimentation and/or code development), resulting in a reduction in both model uncertainties and discrepancy bias. The framework developed herein goes beyond path selection in the validation of numerical models by providing a methodology for the prioritization of optimal experimental settings and an algorithm for the prioritization of code development.
    If the path selection algorithm selects the experimental path, optimal selection of the settings at which these physical experiments are conducted, as well as the sequence of these experiments, is vital to maximize the gain in predictive capability of a model. Batch Sequential Design (BSD) is the methodology utilized in this work to select the optimal experimental settings. A new BSD selection criterion, Coverage Augmented Expected Improvement for Predictive Stability (C-EIPS), is developed to maximize the reduction in the model discrepancy bias and the coverage of the experiments within the domain of applicability. The new criterion, C-EIPS, is demonstrated to outperform its predecessor, the EIPS criterion, and the distance-based criterion when discrepancy bias is high and coverage is low, while exhibiting comparable performance to the distance-based criterion in efficiently maximizing the predictive capability of the VPSC model as discrepancy decreases and coverage increases. If the path selection algorithm selects the code development path, the developed framework provides an algorithm for the prioritization of code development efforts. In coupled systems, the predictive accuracy of the simulation hinges on the accuracy of the individual constituent models. The potential improvement in the predictive accuracy of the simulation that can be gained by improving a constituent model depends not only on the relative importance of that constituent, but also on its inherent uncertainty and inaccuracy. As such, a unique and quantitative code prioritization index (CPI) is proposed to prioritize code development efforts, and its application is demonstrated on a case study of a steel frame with semi-rigid connections.
    Findings show that the CPI is effective in identifying the most critical constituent of the coupled system, whose improvement leads to the highest overall enhancement of the predictive capability of the coupled model.
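The flavour of a coverage-augmented sequential-design criterion can be sketched as a score that trades off distance to already-tested settings (coverage) against predicted discrepancy bias (expected improvement). This is not the published C-EIPS formula; the score, weights, and data below are invented purely for illustration.

```python
import numpy as np

def coverage_augmented_score(candidates, tested, discrepancy, alpha=0.5):
    """Hypothetical selection score in the spirit of a coverage-augmented
    criterion: favour candidate settings that are far from already-tested
    settings (coverage term) and that have a high predicted discrepancy
    bias (improvement term). Both terms are normalised to [0, 1]."""
    # Distance from each candidate to its nearest already-tested setting.
    cov = np.min(np.linalg.norm(candidates[:, None, :] - tested[None, :, :],
                                axis=2), axis=1)
    return alpha * cov / cov.max() + (1 - alpha) * discrepancy / discrepancy.max()

# Toy 2-D experimental-setting space (e.g. normalised strain rate, temperature).
tested = np.array([[0.2, 0.2], [0.8, 0.8]])
candidates = np.array([[0.5, 0.5], [0.21, 0.2], [0.9, 0.1]])
disc = np.array([0.3, 0.1, 0.6])  # hypothetical predicted discrepancy bias

scores = coverage_augmented_score(candidates, tested, disc)
next_setting = candidates[np.argmax(scores)]  # setting for the next experiment
```

In this toy run the third candidate wins: it is both far from the tested settings and has the largest predicted discrepancy, mirroring the regime in which the abstract reports the coverage-augmented criterion performing best.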

    Unmanned Aerial Systems for Wildland and Forest Fires

    Full text link
    Wildfires represent an important natural risk, causing economic losses, human deaths and significant environmental damage. In recent years, fire intensity and frequency have increased. Research has been conducted towards the development of dedicated solutions for wildland and forest fire assistance and fighting, and systems have been proposed for the remote detection and tracking of fires. These systems have shown improvements in efficient data collection and fire characterization within small-scale environments. However, wildfires cover large areas, making some of the proposed ground-based systems unsuitable for optimal coverage. To tackle this limitation, Unmanned Aerial Systems (UAS) have been proposed. UAS have proven useful due to their maneuverability, which allows the implementation of remote sensing, allocation strategies and task planning. They can provide a low-cost alternative for the prevention, detection and real-time support of firefighting. In this paper we review previous work on the use of UAS in wildfires, considering onboard sensor instruments, fire perception algorithms and coordination strategies. In addition, we present some of the recent frameworks proposing the use of both aerial vehicles and Unmanned Ground Vehicles (UGVs) for a more efficient wildland firefighting strategy at a larger scale.
    Comment: A recently published version of this paper is available at: https://doi.org/10.3390/drones501001

    Modelling and data validation for the energy analysis of absorption refrigeration systems

    Data validation and reconciliation techniques have been used extensively in the process industry to improve data accuracy. These techniques exploit the redundancy in the measurements to obtain a set of adjusted measurements that satisfy the plant model. Nevertheless, few applications deal with closed cycles with complex connectivity and recycle loops, as found in absorption refrigeration cycles. This thesis proposes a methodology for the steady-state data validation of absorption refrigeration systems. The methodology includes the identification of steady state, the resolution of the data reconciliation and parameter estimation problems, and the detection and elimination of gross errors. The methodology developed through this thesis will be useful for generating a set of coherent measurements and operation parameters of an absorption chiller for downstream applications: performance calculation, development of empirical models, optimisation, etc. The methodology is demonstrated using experimental data from different types of absorption refrigeration systems with different levels of redundancy.
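The reconciliation step of such a methodology can be illustrated with the classical weighted least-squares formulation for linear constraints, which has a closed-form solution. The flows, variances, and single mass-balance constraint below are a hypothetical toy example, far simpler than the redundant measurement networks of an absorption cycle.

```python
import numpy as np

# Measured flows (kg/s) around a simple splitter: one feed, two outlets.
y = np.array([10.2, 6.1, 3.9])      # raw measurements (violate the balance)
var = np.array([0.05, 0.03, 0.03])  # assumed measurement variances
A = np.array([[1.0, -1.0, -1.0]])   # linear constraint: feed = out1 + out2

# Classical weighted least-squares reconciliation for linear constraints
# A x = 0:  x = y - V A^T (A V A^T)^{-1} A y,  with V the covariance matrix.
# Less accurate measurements (larger variance) absorb larger adjustments.
V = np.diag(var)
x = y - V @ A.T @ np.linalg.solve(A @ V @ A.T, A @ y)

imbalance_before = (A @ y)[0]  # 0.2 kg/s imbalance in the raw data
imbalance_after = (A @ x)[0]   # ~0 after reconciliation
```

The reconciled values satisfy the plant model exactly while staying as close as possible (in the variance-weighted sense) to the raw measurements; gross-error detection would then test whether any individual adjustment is statistically too large.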