
    The latent state hazard model, with application to wind turbine reliability

    We present a new model for reliability analysis that is able to distinguish the latent internal vulnerability state of the equipment from the vulnerability caused by temporary external sources. Consider a wind farm where each turbine is running under the external effects of temperature, wind speed and direction, etc. The turbine might fail because of the external effects of a spike in temperature. If it does not fail during the temperature spike, it could still fail due to internal degradation, and the spike could cause (or be an indication of) this degradation. The ability to identify the underlying latent state can help better understand the effects of external sources and thus lead to more robust decision-making. We present an experimental study using SCADA sensor measurements from wind turbines in Italy. (Published at http://dx.doi.org/10.1214/15-AOAS859 in the Annals of Applied Statistics, http://www.imstat.org/aoas/, by the Institute of Mathematical Statistics, http://www.imstat.org.)
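
    To make the latent/external split concrete, here is a minimal simulation sketch, not the paper's actual formulation: it assumes an additive hazard in which one term grows with an accumulating latent degradation state and the other responds only to the instantaneous external covariate. All names and parameter values are illustrative.

```python
# Illustrative sketch only (not the authors' model): additive hazard with a
# latent internal degradation term and a transient external term. The names
# (gamma, beta, z) and the additive form are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)

def simulate_failure_time(x, dt=1.0, gamma=0.02, beta=0.8, base=1e-3):
    """x: external covariate per time step (e.g., temperature anomaly).
    The latent state z accumulates damage driven partly by external stress."""
    z = 0.0
    for t, xt in enumerate(x):
        z += gamma * max(xt, 0.0)                             # internal degradation grows with stress
        hazard = base * np.exp(z) + base * np.exp(beta * xt)  # internal + external hazard
        p_fail = 1.0 - np.exp(-hazard * dt)                   # failure probability in this step
        if rng.random() < p_fail:
            return t                                          # failed at step t
    return None                                               # survived the horizon

covariates = rng.normal(0.0, 1.0, size=5000)                  # synthetic external measurements
print(simulate_failure_time(covariates))
```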

    Remaining useful life estimation in heterogeneous fleets working under variable operating conditions

    The availability of condition monitoring data for large fleets of similar equipment motivates the development of data-driven prognostic approaches that capitalize on the information contained in such data to estimate equipment Remaining Useful Life (RUL). A main difficulty is that the fleet of equipment typically experiences different operating conditions, which influence both the condition monitoring data and the degradation processes that physically determine the RUL. We propose an approach for RUL estimation from heterogeneous fleet data based on three phases: first, the degradation levels (states) of a homogeneous discrete-time finite-state semi-Markov model are identified using an unsupervised ensemble clustering approach. Then, the parameters of the discrete Weibull distributions describing the transitions among the states, and their uncertainties, are inferred using the Maximum Likelihood Estimation (MLE) method and the Fisher Information Matrix (FIM), respectively. Finally, the inferred degradation model is used to estimate the RUL of fleet equipment by direct Monte Carlo (MC) simulation. The proposed approach is applied to two case studies regarding heterogeneous fleets of aluminium electrolytic capacitors and turbofan engines. Results show the effectiveness of the proposed approach in predicting the RUL and its superiority compared to a fuzzy similarity-based approach from the literature.
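
    A hedged sketch of the third phase (direct Monte Carlo RUL estimation) is given below. It assumes a left-to-right degradation structure and the Nakagawa-Osaki parameterization of the discrete Weibull sojourn times, P(T >= t) = q^(t^beta); the state ordering and the (q, beta) values are placeholders, not the parameters inferred in the paper.

```python
# Sketch of Monte Carlo RUL estimation for a left-to-right semi-Markov
# degradation model with discrete Weibull sojourn times. Parameter values and
# structure are illustrative assumptions, not the paper's fitted model.
import numpy as np

rng = np.random.default_rng(42)

def sample_discrete_weibull(q, beta, size=None):
    """Sample sojourn times with survival function P(T >= t) = q**(t**beta)."""
    lam = (-np.log(q)) ** (-1.0 / beta)          # equivalent continuous Weibull scale
    x = lam * rng.weibull(beta, size=size)       # continuous Weibull draw
    return np.floor(x).astype(int)               # flooring preserves the discrete survival

# Sojourn-time parameters (q, beta) for each transient degradation state,
# ordered from mildest to most severe; the state after the last one is failure.
sojourn_params = [(0.99, 1.2), (0.97, 1.1), (0.93, 1.0)]

def simulate_rul(current_state, n_runs=10_000):
    """Monte Carlo RUL samples, assuming the unit has just entered `current_state`."""
    ruls = np.zeros(n_runs, dtype=int)
    for s, (q, beta) in enumerate(sojourn_params):
        if s >= current_state:
            ruls += sample_discrete_weibull(q, beta, size=n_runs)
    return ruls

rul_samples = simulate_rul(current_state=1)
print("mean RUL:", rul_samples.mean(), "10th pct:", np.percentile(rul_samples, 10))
```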

    Particle Filters for Remaining Useful Life Estimation of Abatement Equipment used in Semiconductor Manufacturing

    Prognostics is the ability to predict the remaining useful life of a specific system or component, and represents a key enabler of any effective condition-based maintenance strategy. Among methods for performing prognostics, such as regression and artificial neural networks, particle filters are emerging as a technique with considerable potential. Particle filters employ both a state dynamic model and a measurement model, which are used together to predict the evolution of the state probability distribution function. The approach has similarities to Kalman filtering; however, particle filters do not require the state dynamic model to be linear or the noise to be Gaussian. The technique is applied to predict the degradation of thermal processing units used in the treatment of waste gases from semiconductor processing chambers. The performance of the technique demonstrates the potential of particle filters as a robust method for accurately predicting system failure. In addition to the use of particle filters, Gaussian Mixture Models (GMM) are employed to extract the signals associated with the different operating modes from a multi-modal signal generated by the operating characteristics of the thermal processing unit.
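
    The following is a minimal, self-contained sketch of a bootstrap particle filter for degradation tracking in the spirit described above; the linear-drift state model, Gaussian likelihood, failure threshold and parameter values are illustrative assumptions, not the models used for the thermal processing units.

```python
# Bootstrap particle filter sketch: predict with the state model, weight with
# the measurement model, resample, then extrapolate each particle to a failure
# threshold to obtain an RUL distribution. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(7)
N = 2000                                   # number of particles
particles = rng.normal(0.0, 0.1, size=N)   # initial degradation state
drift = rng.normal(0.01, 0.002, size=N)    # per-particle unknown drift rate
meas_std, proc_std, threshold = 0.05, 0.01, 1.0

def step(particles, drift, z):
    """One predict/update/resample cycle given measurement z."""
    particles = particles + drift + rng.normal(0.0, proc_std, size=N)   # state dynamic model
    w = np.exp(-0.5 * ((z - particles) / meas_std) ** 2)                # measurement likelihood
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)                                    # multinomial resampling
    return particles[idx], drift[idx]

# Synthetic measurements of a unit degrading at an unknown rate
measurements = np.cumsum(np.full(60, 0.012)) + rng.normal(0, meas_std, 60)
for z in measurements:
    particles, drift = step(particles, drift, z)

# RUL per particle: steps until the degradation threshold is crossed
rul = np.maximum(threshold - particles, 0.0) / np.maximum(drift, 1e-6)
print("median RUL estimate:", np.median(rul))
```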

    Prognostic Algorithms for Condition Monitoring and Remaining Useful Life Estimation

    To enable the benefits of a truly condition-based maintenance philosophy to be realised, robust, accurate and reliable algorithms, which provide maintenance personnel with the information needed to make informed maintenance decisions, will be key. This thesis develops such algorithms, with a focus on semiconductor manufacturing and wind turbines.

    An introduction to condition-based maintenance is presented, which reviews different types of maintenance philosophies and describes the potential benefits which a condition-based maintenance philosophy will deliver to operators of critical plant and machinery. The issues and challenges involved in developing condition-based maintenance solutions are discussed, and a review of previous approaches and techniques in fault diagnostics and prognostics is presented.

    The development of a condition monitoring system for dry vacuum pumps used in semiconductor manufacturing is presented. A notable feature is that upstream process measurements from the wafer processing chamber were incorporated in the development of a solution. In general, semiconductor manufacturers do not make such information available, and this study identifies the benefits of information sharing in the development of condition monitoring solutions within the semiconductor manufacturing domain. The developed solution provides maintenance personnel with the ability to identify, quantify, track and predict the remaining useful life of pumps suffering from degradation caused by pumping large volumes of corrosive fluorine gas.

    A comprehensive condition monitoring solution for thermal abatement systems is also presented. As part of this work, a multiple model particle filtering algorithm for prognostics is developed and tested. The capabilities of the proposed prognostic solution for addressing the uncertainty challenges in predicting the remaining useful life of abatement systems, subject to uncertain future operating loads and conditions, are demonstrated.

    Finally, a condition monitoring algorithm for the main bearing on large utility-scale wind turbines is developed. The developed solution exploits data collected by onboard supervisory control and data acquisition (SCADA) systems in wind turbines. As a result, the developed solution can be integrated into existing monitoring systems at no additional cost. The potential for applying the multiple model particle filtering algorithm to wind turbine prognostics is also demonstrated.
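
    As an illustration of the multiple model particle filtering idea mentioned above, the hedged sketch below augments each particle with a discrete model index so the filter jointly infers the continuous degradation state and the active degradation regime. The model set, switching probability and parameters are invented for illustration and are not taken from the thesis.

```python
# Multiple-model particle filter sketch: each particle carries (state, model
# index); regime switches are allowed with a small probability, and resampling
# concentrates particles on the regime that explains the data.
import numpy as np

rng = np.random.default_rng(1)
N = 2000
models = [            # candidate degradation regimes: (drift per step, process noise)
    (0.005, 0.005),   # slow, steady degradation
    (0.02, 0.01),     # accelerated degradation
]
x = rng.normal(0.0, 0.05, size=N)          # continuous degradation state
m = rng.integers(0, len(models), size=N)   # per-particle model index
switch_p, meas_std = 0.02, 0.05

def mm_step(x, m, z):
    """Predict with each particle's model, allow regime switches, weight, resample."""
    jump = rng.random(N) < switch_p
    m = np.where(jump, rng.integers(0, len(models), size=N), m)
    drift = np.array([models[k][0] for k in m])
    noise = np.array([models[k][1] for k in m])
    x = x + drift + rng.normal(0.0, noise)
    w = np.exp(-0.5 * ((z - x) / meas_std) ** 2)
    w /= w.sum()
    idx = rng.choice(N, size=N, p=w)
    return x[idx], m[idx]

for z in np.cumsum(np.full(40, 0.018)):    # synthetic measurements
    x, m = mm_step(x, m, z)
print("posterior P(accelerated regime):", (m == 1).mean())
```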

    Real-time Loss Estimation for Instrumented Buildings

    Motivation. A growing number of buildings have been instrumented to measure and record earthquake motions and to transmit these records to seismic-network data centers to be archived and disseminated for research purposes. At the same time, sensors are growing smaller, less expensive to install, and capable of sensing and transmitting other environmental parameters in addition to acceleration. Finally, recently developed performance-based earthquake engineering methodologies employ structural-response information to estimate probabilistic repair costs, repair durations, and other metrics of seismic performance. The opportunity therefore presents itself to combine these developments into the capability to estimate, automatically and in near-real-time, the probabilistic seismic performance of an instrumented building shortly after the cessation of strong motion. We refer to this opportunity as (near-) real-time loss estimation (RTLE).

    Methodology. This report presents a methodology for RTLE for instrumented buildings. Seismic performance is measured in terms of probabilistic repair cost, the precise location of likely physical damage, operability, and life-safety. The methodology uses the instrument recordings and a Bayesian state-estimation algorithm called a particle filter to estimate the probabilistic structural response of the system, in terms of member forces and deformations. The structural-response estimate is then used as input to component fragility functions to estimate the probabilistic damage state of structural and nonstructural components. The probabilistic damage state can be used to direct structural engineers to likely locations of physical damage, even if they are concealed behind architectural finishes. The damage state is used with construction cost-estimation principles to estimate probabilistic repair cost. It is also used as input to a quantified, fuzzy-set version of the FEMA-356 performance-level descriptions to estimate probabilistic safety and operability levels.

    CUREE demonstration building. The procedure for estimating damage locations, repair costs, and post-earthquake safety and operability is illustrated in parallel demonstrations by CUREE and Kajima research teams. The CUREE demonstration is performed using a real 1960s-era, 7-story, nonductile reinforced-concrete moment-frame building located in Van Nuys, California. The building is instrumented with 16 channels at five levels: ground level, floors 2, 3, and 6, and the roof. We used the records obtained after the 1994 Northridge earthquake to hindcast performance in that earthquake. The building is analyzed in its condition prior to the 1994 Northridge earthquake. It is found that, while hindcasting of the overall system performance level was excellent, prediction of detailed damage locations was poor, implying that either actual conditions differed substantially from those shown on the structural drawings, or inappropriate fragility functions were employed, or both. We also found that Bayesian updating of the structural model using observed structural response above the base of the building adds little information to the performance prediction. The reason is probably that structural uncertainties have only a secondary effect on performance uncertainty, compared with the uncertainty in assembly damageability as quantified by their fragility functions. The implication is that real-time loss estimation is not sensitive to structural uncertainties (saving costly multiple simulations of structural response), and that real-time loss estimation does not benefit significantly from installing measuring instruments other than those at the base of the building.

    Kajima demonstration building. The Kajima demonstration is performed using a real 1960s-era office building in Kobe, Japan. The building, a 7-story reinforced-concrete shearwall building, was not instrumented in the 1995 Kobe earthquake, so instrument recordings are simulated. The building is analyzed in its condition prior to the earthquake. It is found that, while hindcasting of the overall repair cost was excellent, prediction of detailed damage locations was poor, again implying either that as-built conditions differ substantially from those shown on the structural drawings, or that inappropriate fragility functions were used, or both. We find that the parameters of the detailed particle filter needed significant tuning, which would be impractical in actual application. Work is needed to prescribe values of these parameters in general.

    Opportunities for implementation and further research. Because much of the cost of applying this RTLE algorithm results from the cost of instrumentation and the effort of setting up a structural model, the readiest application would be to instrumented buildings whose structural models are already available, and to important facilities. It would be useful to study under what conditions RTLE would be economically justified. Two other interesting possibilities for further study are (1) to update performance estimates using readily observable damage; and (2) to quantify the value of information from expensive inspections, e.g., if one inspects a connection with a modeled 50% failure probability and finds that the connection is undamaged, is it still necessary to examine one with a 10% failure probability?
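
    A hedged sketch of the fragility-to-loss step is given below: samples of a structural-response measure (here an interstory drift demand, as might come from the particle-filter posterior) are pushed through lognormal component fragility curves to obtain damage-state probabilities and an expected repair cost. The medians, dispersions and repair costs are placeholders, not values from the report.

```python
# Fragility-to-loss sketch: probabilistic drift demand -> lognormal fragility
# curves -> damage-state probabilities -> expected repair cost. All numbers
# are illustrative placeholders.
import numpy as np
from scipy.stats import lognorm

rng = np.random.default_rng(3)
drift_samples = rng.lognormal(mean=np.log(0.012), sigma=0.3, size=10_000)  # posterior drift demand

# Fragility per damage state: (median drift capacity, lognormal dispersion, repair cost)
damage_states = [(0.007, 0.4, 20_000.0),   # DS1: light cracking
                 (0.015, 0.4, 90_000.0),   # DS2: severe cracking
                 (0.030, 0.4, 250_000.0)]  # DS3: near-replacement damage

def ds_probabilities(demand):
    """P(reaching at least DS_i) and P(exactly DS_i) for one component."""
    p_at_least = [lognorm(s=beta, scale=med).cdf(demand).mean()
                  for med, beta, _ in damage_states]
    p_exact = np.diff(p_at_least[::-1], prepend=0.0)[::-1]  # P(DS_i) - P(DS_{i+1})
    return p_at_least, p_exact

p_at_least, p_exact = ds_probabilities(drift_samples)
expected_cost = sum(p * cost for p, (_, _, cost) in zip(p_exact, damage_states))
print("P(at least DS):", np.round(p_at_least, 3), "E[repair cost]:", round(expected_cost))
```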

    A multi-scale, multi-wavelength source extraction method: getsources

    We present a multi-scale, multi-wavelength source extraction algorithm called getsources. Although it has been designed primarily for use in the far-infrared surveys of Galactic star-forming regions with Herschel, the method can be applied to many other astronomical images. Instead of the traditional approach of extracting sources in the observed images, the new method analyzes fine spatial decompositions of the original images across a wide range of scales and across all wavebands. It cleans those single-scale images of noise and background, and constructs wavelength-independent single-scale detection images that preserve information in both the spatial and wavelength dimensions. Sources are detected in the combined detection images by following the evolution of their segmentation masks across all spatial scales. Measurements of the source properties are done in the original background-subtracted images at each wavelength; the background is estimated by interpolation under the source footprints, and overlapping sources are deblended in an iterative procedure. In addition to the main catalog of sources, various catalogs and images are produced that aid the scientific exploitation of the extraction results. We illustrate the performance of getsources on Herschel images by extracting sources in sub-fields of the Aquila and Rosette star-forming regions. The source extraction code and validation images with a reference extraction catalog are freely available. (31 pages, 27 figures; to be published in Astronomy & Astrophysics.)
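
    The single-scale decomposition idea can be illustrated with the hedged sketch below, which splits a synthetic image into planes by differencing copies smoothed at successively larger scales and then crudely cleans each plane with a MAD-based noise threshold. The scale grid, Gaussian kernel and thresholding are illustrative choices, not the exact getsources scheme.

```python
# Single-scale decomposition sketch: compact sources and extended background
# separate onto different planes when an image is differenced across a ladder
# of smoothing scales. Parameters are illustrative only.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(5)

# Synthetic image: smooth background gradient + two compact "sources" + noise
y, x = np.mgrid[0:128, 0:128]
image = 0.02 * y
for cx, cy, amp, sig in [(40, 40, 5.0, 2.0), (90, 70, 3.0, 4.0)]:
    image += amp * np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sig ** 2))
image += rng.normal(0.0, 0.1, image.shape)

scales = [1, 2, 4, 8, 16]                                   # smoothing scales (pixels)
smoothed = [image] + [gaussian_filter(image, s) for s in scales]
single_scale = [smoothed[j] - smoothed[j + 1] for j in range(len(scales))]

# Crude per-scale cleaning: zero out pixels below a few times the robust noise level
cleaned = []
for plane in single_scale:
    sigma = 1.4826 * np.median(np.abs(plane - np.median(plane)))  # MAD noise estimate
    cleaned.append(np.where(plane > 3.0 * sigma, plane, 0.0))

print("peak signal per scale:", [round(float(p.max()), 2) for p in cleaned])
```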