783 research outputs found
Advanced methodologies for reliability-based design optimization and structural health prognostics
Failures of engineered systems can lead to significant economic and societal losses. To minimize the losses, reliability must be ensured throughout the system's lifecycle in the presence of manufacturing variability and uncertain operational conditions. Many reliability-based design optimization (RBDO) techniques have been developed to ensure high reliability of engineered system design under manufacturing variability. Schedule-based maintenance, although expensive, has been a popular method to maintain highly reliable engineered systems under uncertain operational conditions. However, so far there is no cost-effective and systematic approach to ensure high reliability of engineered systems throughout their lifecycles while accounting for both the manufacturing variability and uncertain operational conditions.
Inspired by the intrinsic ability of systems in ecology, economics, and other fields to proactively adjust their functioning to avoid potential failures, this dissertation attempts to adaptively manage engineered system reliability over the lifecycle by advancing two essential and closely related research areas: system RBDO and prognostics and health management (PHM). System RBDO ensures high reliability of an engineered system in the early design stage, whereas capitalizing on PHM technology enables the system to proactively avoid failures in its operation stage. Extensive literature reviews in these areas have identified four key research issues: (1) how system failure modes and their interactions can be analyzed in a statistical sense; (2) how limited data for input manufacturing variability can be used for RBDO; (3) how sensor networks can be designed to effectively monitor system health degradation under highly uncertain operational conditions; and (4) how accurate and timely remaining useful lives of systems can be predicted under highly uncertain operational conditions. To address these key research issues, this dissertation lays out four research thrusts in the following chapters: Chapter 3 - Complementary Intersection Method for System Reliability Analysis, Chapter 4 - Bayesian Approach to RBDO, Chapter 5 - Sensing Function Design for Structural Health Prognostics, and Chapter 6 - A Generic Framework for Structural Health Prognostics. Multiple engineering case studies are presented to demonstrate the feasibility and effectiveness of the proposed RBDO and PHM techniques for ensuring and improving the reliability of engineered systems within their lifecycles.
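The core quantity in RBDO, reliability under input variability, can be illustrated with a minimal Monte Carlo sketch. The limit-state function and distributions below are hypothetical examples, not taken from the dissertation:

```python
import random

def estimate_reliability(limit_state, sample_input, n=20000, seed=0):
    """Estimate reliability P(g(X) > 0) by sampling the uncertain inputs."""
    rng = random.Random(seed)
    safe = sum(1 for _ in range(n) if limit_state(sample_input(rng)) > 0)
    return safe / n

def g(x):
    # Hypothetical limit state: the design is safe while capacity exceeds demand.
    capacity, demand = x
    return capacity - demand

def draw(rng):
    # Manufacturing variability on capacity, operational uncertainty on demand.
    return rng.gauss(10.0, 1.0), rng.gauss(6.0, 1.5)

reliability = estimate_reliability(g, draw)
```

An RBDO loop would wrap an optimizer around such an estimator, adjusting design variables until the reliability constraint is met at minimum cost.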
Distributed estimation over a low-cost sensor network: a review of state-of-the-art
Proliferation of low-cost, lightweight, and power-efficient sensors and advances in networked systems enable the employment of multiple sensors. Distributed estimation provides a scalable and fault-robust fusion framework with a peer-to-peer communication architecture. For this reason, there is a real need for a critical review of existing and, more importantly, recent advances in the domain of distributed estimation over low-cost sensor networks. This paper presents a comprehensive review of the state-of-the-art solutions in this research area, exploring their characteristics, advantages, and challenging issues. Additionally, several open problems and future avenues of research are highlighted.
Efficient Detection on Stochastic Faults in PLC Based Automated Assembly Systems With Novel Sensor Deployment and Diagnoser Design
In this dissertation, we proposed novel sensor deployment and diagnoser designs to efficiently detect stochastic faults in PLC-based automated systems.
First, a fuzzy quantitative graph-based sensor deployment approach was developed to model the cause-effect relationships between faults and sensors. The analytic hierarchy process (AHP) was used to aggregate the heterogeneous properties of sensors and faults into single edge values in the fuzzy graph, thus quantitatively determining fault detectability. A multi-objective model was set up to minimize fault unobservability and cost while achieving the required detectability performance. Lexicographical mixed integer linear programming and greedy search were used, respectively, to optimize the model, thus assigning sensors to faults.
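The greedy branch of the sensor-assignment step can be sketched as a cost-aware coverage heuristic. The detectability scores and costs below are illustrative stand-ins for the fuzzy-graph edge values, not data from the dissertation:

```python
def greedy_sensor_selection(detectability, cost, budget):
    """Greedily pick sensors maximizing marginal fault coverage per unit cost.

    detectability: dict sensor -> {fault: score in [0, 1]}, playing the role
    of the fuzzy-graph edge values described in the abstract.
    """
    chosen, covered = [], {}
    remaining, spent = set(detectability), 0.0
    while remaining:
        def gain(s):
            # Improvement in best-so-far detectability summed over faults.
            return sum(max(0.0, v - covered.get(f, 0.0))
                       for f, v in detectability[s].items())
        best = max(remaining, key=lambda s: gain(s) / cost[s])
        if spent + cost[best] > budget or gain(best) <= 0:
            break
        for f, v in detectability[best].items():
            covered[f] = max(covered.get(f, 0.0), v)
        chosen.append(best)
        spent += cost[best]
        remaining.discard(best)
    return chosen, covered

detectability = {"s1": {"f1": 0.9, "f2": 0.2},
                 "s2": {"f2": 0.8},
                 "s3": {"f1": 0.5, "f2": 0.5}}
cost = {"s1": 2.0, "s2": 1.0, "s3": 1.5}
chosen, covered = greedy_sensor_selection(detectability, cost, budget=3.0)
```

The lexicographical MILP formulation would replace this heuristic when optimality across ordered objectives is required.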
Second, a diagnoser based on a real-time fuzzy Petri net (RTFPN) was proposed to detect faults in discrete manufacturing systems. It uses a real-time Petri net to model the manufacturing plant and a fuzzy Petri net to isolate faults, giving it the capability to handle uncertainties and incorporate industry knowledge into diagnosis. The proposed approach was implemented in Visual Basic, and tested and validated on a dual robot arm.
Finally, the proposed sensor deployment approach and diagnoser were comprehensively evaluated using design-of-experiment techniques. A two-stage statistical analysis comprising analysis of variance (ANOVA) and least significant difference (LSD) tests was conducted to evaluate diagnosis performance, including positive detection rate, false alarm rate, accuracy, and detection delay. The results show that the proposed approaches perform better on these evaluation metrics.
The major contributions of this research include the following: (1) a novel fuzzy quantitative graph-based sensor deployment approach that handles sensor heterogeneity and optimizes multiple objectives via lexicographical integer linear programming and a greedy algorithm, respectively. A case study on a five-tank system showed that system detectability improved from 0.62 with the signed directed graph approach to 0.70 with the proposed approach; a second case study on a dual robot arm likewise showed detectability improving from 0.61 to 0.65. (2) A novel real-time fuzzy Petri net diagnoser that remedies nonsynchronization and integrates useful but incomplete knowledge for diagnosis; a third case study on a dual robot arm showed that the diagnoser achieves a high detection accuracy of 93% and a maximum detection delay of eight steps. (3) A comprehensive evaluation approach that can serve as a reference for the design, optimization, and evaluation of other diagnosis systems.
A Survey on Multisensor Fusion and Consensus Filtering for Sensor Networks
Multisensor fusion and consensus filtering are two fascinating subjects in sensor network research. In this survey, we cover both classic results and recent advances in these two topics. First, we recall some important results in the development of multisensor fusion technology, paying particular attention to fusion with unknown correlations, which exist ubiquitously in most distributed filtering problems. Next, we give a systematic review of several widely used consensus filtering approaches. Furthermore, some of the latest progress on multisensor fusion and consensus filtering is also presented. Finally, conclusions are drawn and several potential future research directions are outlined. This work was supported in part by the Royal Society of the UK, the National Natural Science Foundation of China under Grants 61329301, 61374039, 61304010, 11301118, and 61573246, the Hujiang Foundation of China under Grants C14002 and D15009, the Alexander von Humboldt Foundation of Germany, and the Innovation Fund Project for Graduate Students of Shanghai under Grant JWCXSL140
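Fusion with unknown correlations, which the survey emphasizes, is commonly handled by covariance intersection. A minimal sketch follows; the grid-search weight selection and the numeric example are illustrative choices, not taken from the survey:

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2):
    """Fuse two estimates whose cross-correlation is unknown:
    P_f^{-1} = w P1^{-1} + (1 - w) P2^{-1}, with the weight w chosen by a
    coarse grid search minimizing the trace of the fused covariance."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)

    def fuse(w):
        P = np.linalg.inv(w * I1 + (1.0 - w) * I2)
        x = P @ (w * I1 @ x1 + (1.0 - w) * I2 @ x2)
        return x, P

    w_best = min(np.linspace(0.01, 0.99, 99),
                 key=lambda w: np.trace(fuse(w)[1]))
    return fuse(w_best)

x_f, P_f = covariance_intersection(np.array([0.0, 0.0]), np.diag([2.0, 1.0]),
                                   np.array([1.0, 1.0]), np.diag([1.0, 2.0]))
```

Unlike the naive fused covariance, the convex combination of information matrices stays consistent no matter how strongly the two estimates happen to be correlated.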
Modelling offshore wind farm operation and maintenance with view to estimating the benefits of condition monitoring
Offshore wind energy is progressing rapidly and playing an increasingly important role in electricity generation. Since the Kyoto Protocol came into force in February 2005, Europe has been substantially increasing its installed wind capacity. Compared to onshore wind, offshore wind allows the installation of larger turbines and more extensive sites, and encounters higher wind speeds with lower turbulence. On the other hand, harsh marine conditions and limited access to the turbines are expected to increase the cost of operation and maintenance (O&M costs presently make up approximately 20-25% of the levelised total lifetime cost of a wind turbine). Efficient condition monitoring has the potential to reduce O&M costs. In analysing the cost effectiveness of condition monitoring, cost and operational data are crucial. Regrettably, wind farm operational data are generally kept confidential by manufacturers and wind farm operators, especially for offshore farms. To facilitate progress, this thesis has investigated accessible SCADA and failure data from a large onshore wind farm and created a series of indirect analysis methods to overcome the data shortage, including an onshore/offshore failure rate translator and a series of methods to distinguish yawing errors from wind turbine nacelle direction sensor errors.
Wind turbine component reliability has been investigated using this innovative component failure rate translation from onshore to offshore, and the translation technique has been applied to Failure Mode and Effect Analysis for offshore wind. An existing O&M cost model has been further developed and then compared to other available cost models. It is demonstrated that the improvements made to the model (including the data translation approach) have improved its applicability and reliability. The extended cost model (called StraPCost+) has been used to establish a relationship between the effectiveness of reactive and condition-based maintenance strategies. The benchmarked cost model has then been applied to assess O&M cost effectiveness for three offshore wind farms at different operational phases. Apart from the innovative methodologies developed, this thesis also provides a detailed background and understanding of the state of the art in offshore wind and condition monitoring technology. The cost model methodology developed in this thesis is presented in detail and compared with other cost models in both the commercial and research domains
Enabling Resilience in Cyber-Physical-Human Water Infrastructures
Rapid urbanization and growth in urban populations have forced community-scale infrastructures (e.g., water, power and natural gas distribution systems, and transportation networks) to operate at their limits. Aging (and failing) infrastructures around the world are becoming increasingly vulnerable to operational degradation, extreme weather, natural disasters, and cyber attacks/failures. These trends have wide-ranging socioeconomic consequences and raise public safety concerns. In this thesis, we introduce the notion of cyber-physical-human infrastructures (CPHIs): smart community-scale infrastructures that bridge technologies with physical infrastructures and people. CPHIs are highly dynamic stochastic systems characterized by complex physical models that exhibit region-wide variability and uncertainty under disruptions. Failures in these distributed settings tend to be difficult to predict and estimate, and expensive to repair. Real-time fault identification is crucial to ensure continuity of lifeline services to customers at adequate levels of quality. Emerging smart community technologies have the potential to transform our failing infrastructures into robust and resilient future CPHIs. In this thesis, we explore one such CPHI: community water infrastructures. Current urban water infrastructures, which are decades (sometimes over 100 years) old, encompass diverse geophysical regimes. Water stress concerns include scarcity of supply and an increase in demand due to urbanization. Deterioration and damage to the infrastructure can disrupt water service; contamination events can result in economic and public health consequences. Unfortunately, little investment has gone into modernizing this key lifeline. To enhance the resilience of water systems, we propose an integrated middleware framework for quick and accurate identification of failures in complex water networks that exhibit uncertain behavior.
Our proposed approach integrates IoT-based sensing and domain-specific models and simulations with machine learning methods to identify failures (pipe breaks, contamination events). The composition of techniques results in cost-accuracy-latency tradeoffs in fault identification, inherent in CPHIs due to the constraints imposed by cyber components, physical mechanics, and human operators. Three key resilience problems are addressed in this thesis: isolation of multiple faults under a small number of failures, state estimation of water systems under extreme events such as earthquakes, and contaminant source identification in water networks using human-in-the-loop sensing. By working with real-world water agencies (WSSC, DC and LADWP, LA), we first develop an understanding of the operations of water CPHI systems. We design and implement a sensor-simulation-data integration framework, AquaSCALE, and apply it to localize multiple concurrent pipe failures. We use a mixture of infrastructure measurements (i.e., historical and live water pressure/flow), environmental data (i.e., weather), and human inputs (i.e., Twitter feeds), combined and enhanced with the domain model and supervised learning techniques, to locate multiple failures at fine levels of granularity (individual pipeline level) with detection time reduced by orders of magnitude (from hours/days to minutes). We next consider the resilience of water infrastructures under extreme events (i.e., earthquakes); the challenge here is the lack of a priori knowledge and the increased number and severity of damages to infrastructures. We present a graphical model based approach for efficient online state estimation, where an offline graph factorization partitions a given network into disjoint subgraphs, and belief propagation based inference is executed on-the-fly in a distributed manner on those subgraphs.
Our proposed approach can isolate 80% of broken pipes and 99% of loss-of-service to end-users during an earthquake. Finally, we address issues of water quality: today this is a human-in-the-loop process where operators need to gather water samples for lab tests. We incorporate the necessary abstractions with event processing methods into a workflow, which iteratively selects and refines the set of potential failure points via human-driven grab sampling. Our approach utilizes Hidden Markov Model based representations for event inference, along with reinforcement learning methods for further refining event locations and reducing the cost of human effort. The proposed techniques are integrated into a middleware architecture, which enables components to communicate and collaborate with one another. We validate our approaches through a prototype implementation with multiple real-world water networks, supply-demand patterns from water utilities, and policies set by the U.S. EPA. While our focus here is on water infrastructures in a community, the developed end-to-end solution is applicable to other infrastructures and community services which operate in disruptive and resource-constrained environments.
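The Hidden Markov Model based event inference mentioned above can be sketched with the forward algorithm. The two-state model and probabilities below are purely illustrative, not the thesis's actual network model:

```python
def hmm_forward(obs, init, trans, emit):
    """Forward algorithm: posterior over hidden states given observations.

    init[s]: prior of state s; trans[r][s]: transition probability r -> s;
    emit[s][o]: probability of observing o in state s."""
    n = len(init)
    alpha = [init[s] * emit[s][obs[0]] for s in range(n)]
    for o in obs[1:]:
        alpha = [emit[s][o] * sum(alpha[r] * trans[r][s] for r in range(n))
                 for s in range(n)]
    total = sum(alpha)
    return [a / total for a in alpha]

# States: 0 = clean, 1 = contaminated; observations: 0 = normal, 1 = anomalous.
init = [0.9, 0.1]
trans = [[0.95, 0.05], [0.10, 0.90]]
emit = [[0.9, 0.1], [0.2, 0.8]]
posterior = hmm_forward([1, 1, 1], init, trans, emit)
```

In a grab-sampling workflow, a posterior of this kind would guide where the next human-collected sample is worth taking.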
GENERALIZED DISTRIBUTED CONSENSUS-BASED ALGORITHMS FOR UNCERTAIN SYSTEMS AND NETWORKS
We address four problems related to multi-agent optimization, filtering and agreement. First, we investigate collaborative optimization of an objective function expressed as a sum of local convex functions, when the agents make decisions in a distributed manner using local information, while the communication topology used to exchange messages and information is modeled by a graph-valued random process, assumed independent and identically distributed. Specifically, we study the performance of the consensus-based multi-agent distributed subgradient method and show how it depends on the probability distribution of the random graph. For the case of a constant stepsize, we first give
an upper bound on the difference between the objective function, evaluated at the agents' estimates of the optimal decision vector, and the optimal value. In addition, for a particular
class of convex functions, we give an upper bound on the distances between the agents' estimates of the optimal decision vector and the minimizer, and we provide the rate of convergence to zero of the time-varying component of the aforementioned upper
bound. The addressed metrics are evaluated via their expected values. As an application, we show how the distributed optimization algorithm can be used to perform collaborative system identification and provide numerical experiments under the randomized and broadcast gossip protocols.
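The consensus-based distributed subgradient method studied above alternates a mixing step with local subgradient steps. A minimal sketch, using a fixed complete-graph mixing matrix and a toy quadratic objective rather than the random graphs analyzed in the dissertation:

```python
import numpy as np

def distributed_subgradient(subgrads, x0, mixing, alpha=0.05, iters=300):
    """Consensus-based distributed subgradient method (sketch).

    Each agent i keeps an estimate x_i (one row of X). At every step the
    agents average with their neighbors through a doubly stochastic mixing
    matrix W_k (possibly drawn at random), then take a subgradient step on
    their private objective f_i."""
    X = np.array(x0, dtype=float)
    for k in range(iters):
        W = mixing(k)                   # doubly stochastic matrix for step k
        X = W @ X
        for i, g in enumerate(subgrads):
            X[i] -= alpha * g(X[i])
    return X

# Toy problem: minimize sum_i (x - c_i)^2 over 3 agents; the optimum is mean(c).
c = [1.0, 2.0, 6.0]
subgrads = [lambda x, ci=ci: 2.0 * (x - ci) for ci in c]
W = np.full((3, 3), 1.0 / 3.0)          # complete-graph averaging, fixed here
X = distributed_subgradient(subgrads, np.zeros((3, 1)), lambda k: W)
```

With a constant stepsize the agents settle into a neighborhood of the optimum whose radius scales with the stepsize, matching the character of the upper bounds described above.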
Second, we generalize the asymptotic consensus problem to convex metric spaces. Under minimal connectivity assumptions, we show that if at each iteration an agent updates its state by choosing a point from a particular subset of the generalized convex hull generated by the agents current state and the states of its neighbors, then agreement is achieved asymptotically. In addition, we give bounds on the distance between the consensus point(s) and the initial values of the agents. As an application example, we introduce a probabilistic algorithm for reaching consensus of opinion and show that it in fact fits our general framework.
Third, we discuss the linear asymptotic consensus problem for a network of dynamic agents whose communication network is modeled by a randomly switching graph. The switching is determined by a finite state, Markov process, each topology corresponding to a state of the process. We address both the cases where the dynamics of the agents are expressed in continuous and discrete time. We show that, if the consensus matrices are doubly stochastic, average consensus is achieved in the mean square and almost sure senses if and only if the graph resulting from the union of graphs corresponding to the states of the Markov process is strongly connected.
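The union-connectivity condition above can be made concrete with a small numeric sketch. For simplicity the switching signal here is deterministic and cyclic, standing in for the Markov process of the abstract; the matrices are illustrative:

```python
import numpy as np

def switching_consensus(x0, matrices, schedule, iters=200):
    """Average consensus x <- W_s x, with the doubly stochastic matrix W_s
    picked at each step by a switching signal over the available topologies."""
    x = np.array(x0, dtype=float)
    for k in range(iters):
        x = matrices[schedule(k)] @ x
    return x

# Two graphs on 3 nodes, each disconnected alone, whose union is connected.
W_a = np.array([[0.5, 0.5, 0.0],
                [0.5, 0.5, 0.0],
                [0.0, 0.0, 1.0]])       # averages over edge (1, 2)
W_b = np.array([[1.0, 0.0, 0.0],
                [0.0, 0.5, 0.5],
                [0.0, 0.5, 0.5]])       # averages over edge (2, 3)
x = switching_consensus([0.0, 3.0, 9.0], [W_a, W_b], lambda k: k % 2)
```

Because both matrices are doubly stochastic, the average of the states is preserved at every step, so the common limit is the initial average, here 4.0, even though neither graph is connected on its own.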
Fourth, we address the consensus-based distributed linear filtering problem, where a discrete time, linear stochastic process is observed by a network of sensors. We assume that the consensus weights are known and we first provide sufficient conditions under
which the stochastic process is detectable, i.e., for a specific choice of consensus weights there exists a set of filtering gains such that the dynamics of the estimation errors (without noise) are asymptotically stable. Next, we develop a distributed, sub-optimal filtering scheme based on minimizing an upper bound on a quadratic filtering cost. In the stationary case, we provide sufficient conditions under which this scheme converges, with the conditions
expressed in terms of the convergence properties of a set of coupled Riccati equations. We continue by presenting a connection between the consensus-based distributed linear filter and the optimal linear filter of an appropriately defined Markovian jump linear system. More specifically, we show that if the Markovian jump linear system is (mean square) detectable, then the stochastic process is detectable under the consensus-based distributed linear filtering scheme. We also show that the optimal gains of a linear filter for estimating the state of the appropriately defined Markovian jump linear system can be used to approximate the optimal gains of the consensus-based linear filter.
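The structure of a consensus-based distributed filter, neighbor averaging through the consensus weights followed by a local measurement correction, can be sketched for the simplest case of a static scalar state. The gain, weights, and noise levels below are illustrative, not the dissertation's optimized gains:

```python
import numpy as np

def consensus_filter(measure, W, gain=0.3, steps=100):
    """Consensus-based distributed linear filter (sketch): each sensor blends
    neighbor estimates through the consensus weights W, then corrects with
    its own noisy measurement of a static scalar state."""
    n = W.shape[0]
    xhat = np.zeros(n)
    for t in range(steps):
        z = measure(t)                    # one noisy measurement per sensor
        xhat = W @ xhat + gain * (z - xhat)
    return xhat

rng = np.random.default_rng(1)
W = np.full((4, 4), 0.25)                 # doubly stochastic, complete graph
true_state = 5.0
xhat = consensus_filter(lambda t: true_state + 0.5 * rng.standard_normal(4), W)
```

In the full setting of the dissertation, the scalar gain would be replaced by per-sensor gain matrices chosen to minimize the upper bound on the quadratic filtering cost.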