A Sequential Inspection Procedure for Fault Detection in Multistage Manufacturing Processes
Fault diagnosis in multistage manufacturing processes (MMPs) is a challenging task, and most of the research in the literature relies on a predefined inspection scheme to identify the sources of variation and make the process diagnosable. This paper proposes a sequential inspection procedure for detecting process faults based on a sequential testing algorithm and a minimum monitoring system. After the monitoring system detects that the process is out of statistical control, the features to be inspected (end-of-line or in-process measurements) are selected sequentially according to the expected information gain of each potential inspection measurement. A case study demonstrates the benefits of this approach over a predefined inspection scheme and a randomized sequential inspection, considering both the use and non-use of fault probabilities derived from historical maintenance data.
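As a rough illustration of the selection step described above, the following is a minimal sketch of choosing the next feature to inspect by expected information gain, assuming binary in-spec/out-of-spec measurements and a discrete set of candidate fault sources; the prior fault probabilities and measurement likelihoods are illustrative placeholders, not values from the paper.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a discrete fault-probability vector."""
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def expected_information_gain(prior, likelihood):
    """Expected reduction in fault-hypothesis entropy from one binary measurement.

    prior      : (n_faults,) current probability of each fault hypothesis
    likelihood : (n_faults,) probability that the measurement reads out-of-spec
                 under each fault hypothesis (illustrative values below)
    """
    p_pos = float(np.dot(likelihood, prior))              # P(measurement out of spec)
    gain = entropy(prior)
    for p_meas, lik in ((p_pos, likelihood), (1 - p_pos, 1 - likelihood)):
        if p_meas > 0:
            posterior = lik * prior / p_meas               # Bayes update for this outcome
            gain -= p_meas * entropy(posterior)
    return gain

# Illustrative numbers only: 3 candidate fault sources, 2 candidate inspection features.
prior = np.array([0.5, 0.3, 0.2])                          # e.g. from historical maintenance data
likelihoods = {
    "end-of-line feature A": np.array([0.9, 0.1, 0.1]),
    "in-process feature B":  np.array([0.2, 0.8, 0.7]),
}
best = max(likelihoods, key=lambda k: expected_information_gain(prior, likelihoods[k]))
print("inspect next:", best)
```

After the chosen measurement is taken, the corresponding posterior becomes the new prior and the selection repeats until a single fault source dominates.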
Data generation and model usage for machine learning-based dynamic security assessment and control
The global effort to decarbonise, decentralise and digitise electricity grids in response to climate change and evolving electricity markets with active consumers (prosumers) is gaining traction in countries around the world. This effort introduces new challenges to electricity grid operation. For instance, the introduction of variable renewable energy generation like wind and solar energy to replace conventional power generation like oil, gas, and coal increases the uncertainty in power system operation. Additionally, the dynamics introduced by these renewable energy sources, which are interfaced through converters, are much faster than those in conventional systems with thermal power plants.
This thesis investigates new data-driven operating tools for the system operator to help manage the increased operational uncertainty in this transition. The presented work aims to answer some open questions regarding the implementation of these machine learning approaches in real-time operation, primarily related to the quality of training data needed to train accurate machine-learned models for predicting dynamic behaviour, and the use of these machine-learned models in the control room for real-time operation.
To answer the first question, this thesis presents a novel sampling approach for generating 'rare' operating conditions that are physically feasible but have not been experienced by power systems before. In so doing, the aim is to move away from historical observations, which are often limited in describing the full range of operating conditions. The thesis then presents a novel approach based on Wasserstein distance and entropy to efficiently combine both historical and 'rare' operating conditions into an enriched database capable of training a high-performance classifier. To answer the second question, this thesis presents a scalable and rigorous workflow for trading off multiple objective criteria when choosing decision tree models for real-time operation by system operators. It then showcases a practical implementation that uses a machine-learned model to optimise power system operation cost through topological control actions. Finally, the crucial role of machine learning in securing low-inertia systems motivates future research directions, and the thesis identifies research gaps in physics-informed learning, machine learning-based network planning for secure operation, and robust training datasets.
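As a rough, hedged sketch of how historical and 'rare' operating conditions might be combined using Wasserstein distance and entropy, the snippet below scores a candidate enriched database by label entropy (rewarding class balance) penalized by the one-dimensional Wasserstein drift from the historical feature distribution; the scoring rule, feature, and weights are assumptions for illustration, not the thesis's exact criterion.

```python
import numpy as np
from scipy.stats import wasserstein_distance, entropy

rng = np.random.default_rng(0)

# Illustrative stand-ins: one scalar operating-condition feature and a binary security label.
hist_x, hist_y = rng.normal(1.0, 0.2, 500), rng.integers(0, 2, 500)   # historical snapshots
rare_x, rare_y = rng.normal(1.6, 0.3, 200), np.ones(200, dtype=int)   # sampled 'rare' conditions

def score(x, y, hist_x, drift_weight=1.0):
    """Higher is better: balanced labels (entropy) without drifting too far from history."""
    counts = np.bincount(y, minlength=2)
    label_entropy = entropy(counts / counts.sum(), base=2)            # 1.0 = perfectly balanced
    drift = wasserstein_distance(x, hist_x)                           # distribution shift
    return label_entropy - drift_weight * drift

# Greedily grow the enriched database with batches of rare samples while the score improves.
enriched_x, enriched_y = hist_x.copy(), hist_y.copy()
for start in range(0, len(rare_x), 50):
    cand_x = np.concatenate([enriched_x, rare_x[start:start + 50]])
    cand_y = np.concatenate([enriched_y, rare_y[start:start + 50]])
    if score(cand_x, cand_y, hist_x) >= score(enriched_x, enriched_y, hist_x):
        enriched_x, enriched_y = cand_x, cand_y

print(len(enriched_x), "samples in the enriched training database")
```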
Enabling Resilience in Cyber-Physical-Human Water Infrastructures
Rapid urbanization and growth in urban populations have forced community-scale infrastructures (e.g., water, power and natural gas distribution systems, and transportation networks) to operate at their limits. Aging (and failing) infrastructures around the world are becoming increasingly vulnerable to operational degradation, extreme weather, natural disasters, and cyber attacks/failures. These trends have wide-ranging socioeconomic consequences and raise public safety concerns. In this thesis, we introduce the notion of cyber-physical-human infrastructures (CPHIs): smart community-scale infrastructures that bridge technologies with physical infrastructures and people. CPHIs are highly dynamic stochastic systems characterized by complex physical models that exhibit region-wide variability and uncertainty under disruptions. Failures in these distributed settings tend to be difficult to predict and estimate, and expensive to repair. Real-time fault identification is crucial to ensure continuity of lifeline services to customers at adequate levels of quality. Emerging smart community technologies have the potential to transform our failing infrastructures into robust and resilient future CPHIs.

In this thesis, we explore one such CPHI: community water infrastructures. Current urban water infrastructures, which are decades (sometimes over 100 years) old, encompass diverse geophysical regimes. Water stress concerns include scarcity of supply and an increase in demand due to urbanization. Deterioration and damage to the infrastructure can disrupt water service, and contamination events can have economic and public health consequences. Unfortunately, little investment has gone into modernizing this key lifeline.

To enhance the resilience of water systems, we propose an integrated middleware framework for quick and accurate identification of failures in complex water networks that exhibit uncertain behavior. Our proposed approach integrates IoT-based sensing, domain-specific models, and simulations with machine learning methods to identify failures (pipe breaks, contamination events). The composition of techniques results in cost-accuracy-latency tradeoffs in fault identification, inherent in CPHIs due to the constraints imposed by cyber components, physical mechanics, and human operators. Three key resilience problems are addressed in this thesis: isolation of multiple faults under a small number of failures, state estimation of water systems under extreme events such as earthquakes, and contaminant source identification in water networks using human-in-the-loop sensing. By working with real-world water agencies (WSSC, DC and LADWP, LA), we first develop an understanding of the operation of water CPHI systems. We design and implement AquaSCALE, a sensor-simulation-data integration framework, and apply it to localize multiple concurrent pipe failures. We use a mixture of infrastructure measurements (i.e., historical and live water pressure/flow), environmental data (i.e., weather), and human inputs (i.e., Twitter feeds), combined and enhanced with the domain model and supervised learning techniques, to locate multiple failures at fine levels of granularity (individual pipeline level) with detection time reduced by orders of magnitude (from hours/days to minutes). We next consider the resilience of water infrastructures under extreme events (i.e., earthquakes); the challenge here is the lack of a priori knowledge and the increased number and severity of damages to infrastructures.
We present a graphical-model-based approach for efficient online state estimation, where an offline graph factorization partitions a given network into disjoint subgraphs and belief-propagation-based inference is executed on the fly in a distributed manner on those subgraphs. Our proposed approach can isolate 80% of broken pipes and 99% of loss-of-service to end-users during an earthquake.

Finally, we address issues of water quality; today this is a human-in-the-loop process in which operators need to gather water samples for lab tests. We incorporate the necessary abstractions and event processing methods into a workflow that iteratively selects and refines the set of potential failure points via human-driven grab sampling. Our approach utilizes Hidden Markov Model based representations for event inference, along with reinforcement learning methods for further refining event locations and reducing the cost of human effort.

The proposed techniques are integrated into a middleware architecture that enables components to communicate and collaborate with one another. We validate our approaches through a prototype implementation with multiple real-world water networks, supply-demand patterns from water utilities, and policies set by the U.S. EPA. While our focus here is on water infrastructures in a community, the developed end-to-end solution is applicable to other infrastructures and community services that operate in disruptive and resource-constrained environments.
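A minimal sketch of the offline partitioning step, assuming the water network is available as a networkx graph; a standard modularity-based community detection stands in here for the thesis's graph factorization, and the per-subgraph inference call is only a stub for the belief-propagation update.

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def partition_network(G, min_size=2):
    """Offline step: split the water network into disjoint subgraphs for distributed inference."""
    communities = greedy_modularity_communities(G)         # stand-in for the offline factorization
    return [G.subgraph(c).copy() for c in communities if len(c) >= min_size]

def estimate_subgraph_state(subgraph, sensor_readings):
    """Stub for the online, per-subgraph belief-propagation update (runs independently per partition)."""
    return {node: sensor_readings.get(node, None) for node in subgraph.nodes}

# Illustrative toy network: junction nodes connected by pipes.
G = nx.Graph()
G.add_edges_from([("J1", "J2"), ("J2", "J3"), ("J3", "J1"),   # one loop
                  ("J4", "J5"), ("J5", "J6"),                 # a branch elsewhere
                  ("J3", "J4")])                              # single pipe joining the two regions
readings = {"J1": 62.0, "J5": 48.5}                           # illustrative pressure readings (psi)

for sub in partition_network(G):
    print(sorted(sub.nodes), estimate_subgraph_state(sub, readings))
```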
Resource Allocation Framework: Validation of Numerical Models of Complex Engineering Systems against Physical Experiments
An increasing reliance on complex numerical simulations for high-consequence decision making is the motivation for experiment-based validation and uncertainty quantification to assess, and when needed, improve the predictive capabilities of numerical models. Uncertainties and biases in model predictions can be reduced by taking two distinct actions: (i) increasing the number of experiments in the model calibration process, and/or (ii) improving the physics sophistication of the numerical model. Therefore, decision makers must select between further code development and experimentation while allocating the finite amount of available resources. This dissertation presents a novel framework to assist in this selection between experimentation and code development for model validation, strictly from the perspective of predictive capability. The reduction and convergence of the discrepancy bias between model prediction and observation, computed using a suitable convergence metric, play a key role in the conceptual formulation of the framework. The proposed framework is demonstrated using two non-trivial case study applications on the Preston-Tonks-Wallace (PTW) code, a continuum-based plasticity approach to modeling metals, and the ViscoPlastic Self-Consistent (VPSC) code, a mesoscopic plasticity approach to modeling crystalline materials. Results show that the developed resource allocation framework is effective and efficient in path selection (i.e., experimentation and/or code development), resulting in a reduction in both model uncertainties and discrepancy bias. The framework developed herein goes beyond path selection in the validation of numerical models by providing a methodology for the prioritization of optimal experimental settings and an algorithm for the prioritization of code development.

If the path selection algorithm selects the experimental path, optimal selection of the settings at which the physical experiments are conducted, as well as the sequence of these experiments, is vital to maximize the gain in predictive capability of a model. Batch Sequential Design (BSD) is the methodology utilized in this work to select the optimal experimental settings. A new BSD selection criterion, Coverage Augmented Expected Improvement for Predictive Stability (C-EIPS), is developed to maximize both the reduction in the model discrepancy bias and the coverage of the experiments within the domain of applicability. The new criterion, C-EIPS, is demonstrated to outperform its predecessor, the EIPS criterion, and the distance-based criterion when discrepancy bias is high and coverage is low, while exhibiting comparable performance to the distance-based criterion in efficiently maximizing the predictive capability of the VPSC model as discrepancy decreases and coverage increases.

If the path selection algorithm selects the code development path, the developed framework provides an algorithm for the prioritization of code development efforts. In coupled systems, the predictive accuracy of the simulation hinges on the accuracy of the individual constituent models. The potential improvement in the predictive accuracy of the simulation that can be gained by improving a constituent model depends not only on the relative importance of that constituent, but also on its inherent uncertainty and inaccuracy. As such, a unique and quantitative code prioritization index (CPI) is proposed to prioritize code development efforts, and its application is demonstrated on a case study of a steel frame with semi-rigid connections. Findings show that the CPI is effective in identifying the most critical constituent of the coupled system, whose improvement leads to the highest overall enhancement of the predictive capability of the coupled model.
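For the experimentation path, a hedged sketch of one batch-sequential selection step is given below, using the distance-based (maximin) criterion mentioned above as a baseline; the C-EIPS criterion itself requires the discrepancy-bias machinery and is not reproduced here, and the candidate settings, bounds, and batch size are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def maximin_batch(candidates, existing, batch_size):
    """Distance-based batch sequential design: greedily pick candidate settings farthest
    from all settings already tested (and from earlier picks in the same batch)."""
    chosen = []
    tested = list(existing)
    for _ in range(batch_size):
        # For each candidate, distance to its nearest already-selected setting.
        dists = np.array([min(np.linalg.norm(c - t) for t in tested) for c in candidates])
        pick = int(np.argmax(dists))                   # maximin: largest nearest-neighbour distance
        chosen.append(candidates[pick])
        tested.append(candidates[pick])
        candidates = np.delete(candidates, pick, axis=0)
    return np.array(chosen)

# Illustrative 2-D experimental domain (e.g. normalized strain rate and temperature).
existing_experiments = np.array([[0.2, 0.3], [0.8, 0.7]])   # settings already tested
candidate_settings = rng.random((200, 2))                   # feasible settings in the domain
next_batch = maximin_batch(candidate_settings, existing_experiments, batch_size=3)
print(next_batch)
```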
Unmanned Aerial Systems for Wildland and Forest Fires
Wildfires represent an important natural risk, causing economic losses, human deaths and significant environmental damage. In recent years, we have witnessed an increase in fire intensity and frequency. Research has been conducted towards the development of dedicated solutions for wildland and forest fire assistance and fighting. Systems have been proposed for the remote detection and tracking of fires. These systems have shown improvements in the area of efficient data collection and fire characterization within small-scale environments. However, wildfires cover large areas, making some of the proposed ground-based systems unsuitable for optimal coverage. To tackle this limitation, Unmanned Aerial Systems (UAS) were proposed. UAS have proven to be useful due to their maneuverability, allowing for the implementation of remote sensing, allocation strategies and task planning. They can provide a low-cost alternative for the prevention, detection and real-time support of firefighting. In this paper we review previous work related to the use of UAS in wildfires. Onboard sensor instruments, fire perception algorithms and coordination strategies are considered. In addition, we present some of the recent frameworks proposing the use of both aerial vehicles and Unmanned Ground Vehicles (UGV) for a more efficient wildland firefighting strategy at a larger scale.

Comment: A recently published version of this paper is available at: https://doi.org/10.3390/drones501001
Modelling and data validation for the energy analysis of absorption refrigeration systems
Data validation and reconciliation techniques have been extensively used in the process industry to improve data accuracy. These techniques exploit the redundancy in the measurements in order to obtain a set of adjusted measurements that satisfy the plant model. Nevertheless, few applications deal with closed cycles with complex connectivity and recycle loops, as in absorption refrigeration cycles. This thesis proposes a methodology for the steady-state data validation of absorption refrigeration systems. The methodology includes the identification of steady state, the resolution of the data reconciliation and parameter estimation problems, and the detection and elimination of gross errors. The methodology developed in this thesis is useful for generating a set of coherent measurements and operation parameters of an absorption chiller for downstream applications: performance calculation, development of empirical models, optimisation, etc. The methodology is demonstrated using experimental data from different types of absorption refrigeration systems with different levels of redundancy.
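As a hedged illustration of the reconciliation step such a methodology builds on, the sketch below solves the classical linear, steady-state data reconciliation problem: adjust the raw measurements as little as possible (in the covariance-weighted sense) while satisfying the mass balances A x = 0. The flow values and variances are illustrative, and real absorption-cycle models add nonlinear energy and concentration balances.

```python
import numpy as np

def reconcile(y, cov, A):
    """Weighted least-squares reconciliation: find adjusted measurements x closest to
    the raw measurements y (in the covariance-weighted sense) that satisfy A @ x = 0.
    Closed form: x = y - cov @ A.T @ inv(A @ cov @ A.T) @ A @ y."""
    correction = cov @ A.T @ np.linalg.solve(A @ cov @ A.T, A @ y)
    return y - correction

# Illustrative example: three mass-flow measurements around a single mixing node,
# where the steady-state balance requires flow1 + flow2 - flow3 = 0.
y = np.array([10.1, 5.3, 14.9])                 # raw (inconsistent) measurements, kg/s
cov = np.diag([0.04, 0.02, 0.09])               # measurement variances
A = np.array([[1.0, 1.0, -1.0]])                # linear mass-balance constraint

x = reconcile(y, cov, A)
print("reconciled flows:", x, "balance residual:", (A @ x)[0])
```

Gross-error detection then typically examines the size of the adjustments y - x relative to their expected variance.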
Statistical methods for rapid system evaluation under transient and permanent faults
Traditional solutions for test and reliability do not scale well for modern designs, whose size and complexity increase with every technology generation. Therefore, in order to meet time-to-market requirements as well as acceptable product quality, it is imperative that new methodologies be developed for quickly evaluating a system in the presence of faults. In this research, statistical methods have been employed and implemented to 1) estimate the stuck-at fault coverage of a test sequence and evaluate the given test vector set without the need for complete fault simulation, and 2) analyze design vulnerabilities in the presence of radiation-based (soft) errors. Experimental results show that these statistical techniques can evaluate a system under test orders of magnitude faster than state-of-the-art methods, with a small margin of error. In this dissertation, I have introduced novel methodologies that utilize information from fault-free simulation and partial fault simulation to predict the fault coverage of a long sequence of test vectors for a design under test. These methodologies are practical for functional testing of complex designs under long test sequences, a challenging problem for which industry is currently seeking efficient solutions. The last part of this dissertation discusses a statistical methodology for a detailed vulnerability analysis of systems under soft errors. This methodology works orders of magnitude faster than traditional fault injection. In addition, it is shown that the vulnerability factors calculated by this method are closer to those from complete fault injection (the ideal way of performing soft error vulnerability analysis) than those from statistical fault injection. Performing such a fast soft error vulnerability analysis is crucial for companies that design and build safety-critical systems.
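As a hedged illustration of the statistical fault injection baseline discussed above, the sketch below estimates a soft-error vulnerability factor by sampling fault sites uniformly and reporting a normal-approximation confidence interval; run_with_fault is a hypothetical simulator hook, replaced here by a random stand-in so the sketch is self-contained.

```python
import math
import random

def run_with_fault(flip_site, rng):
    """Hypothetical simulator hook: returns True if the injected fault corrupts the output.
    Replaced by a random stand-in so the sketch runs on its own."""
    return rng.random() < 0.12                      # pretend ~12% of sites are vulnerable

def estimate_vulnerability(n_samples, total_sites, seed=0, z=1.96):
    """Estimate the soft-error vulnerability factor by sampling fault sites uniformly,
    with a 95% normal-approximation confidence interval on the estimate."""
    rng = random.Random(seed)
    failures = sum(run_with_fault(rng.randrange(total_sites), rng) for _ in range(n_samples))
    p = failures / n_samples
    margin = z * math.sqrt(p * (1 - p) / n_samples)
    return p, margin

p, margin = estimate_vulnerability(n_samples=2000, total_sites=10**6)
print(f"vulnerability factor ~ {p:.3f} +/- {margin:.3f}")
```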