1,010 research outputs found

    Depth estimation of inner wall defects by means of infrared thermography

    There are two common approaches to interpreting infrared thermography data: qualitative and quantitative. Under certain conditions the first approach is sufficient, but an accurate interpretation requires the second. This report proposes a method to quantitatively estimate the depth of a defect in the inner wall of a petrochemical furnace. The finite element method (FEM) is used to model the multilayer wall and to simulate the temperature distribution caused by the defect. Five informative parameters are proposed for depth estimation: the maximum temperature over the defect area (Tmax-def), the average temperature at the right edge of the defect (Tavg-right), the average temperature at the left edge of the defect (Tavg-left), the average temperature at the top edge of the defect (Tavg-top), and the average temperature over the sound area (Tavg-so). An artificial neural network (ANN) was trained on these parameters to estimate the defect depth. Two ANN architectures, a multilayer perceptron (MLP) and a radial basis function (RBF) network, were trained for various defect depths and used to estimate depths for both controlled and testing data. The results show 100% depth-estimation accuracy on the controlled data; on the testing data, accuracy was above 90% for the MLP network and above 80% for the RBF network. These results indicate that the proposed informative parameters are useful for estimating defect depth and that ANNs can be used for quantitative interpretation of thermography data.
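
    The abstract describes the pipeline only in prose; as an illustration, the minimal sketch below classifies defect depth from the five informative temperature parameters with an MLP. The synthetic feature values, depth classes, and scikit-learn configuration are assumptions made for the example, not the authors' FEM data or trained networks.

```python
# Minimal sketch (assumption): classify defect depth from the five informative
# temperature parameters with an MLP. All numbers are synthetic placeholders,
# not the paper's FEM simulation results.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
depth_classes_mm = np.array([2, 4, 6, 8])              # hypothetical defect depths

# Feature columns: Tmax_def, Tavg_right, Tavg_left, Tavg_top, Tavg_so (deg C)
y = rng.choice(depth_classes_mm, size=300)
contrast = 5.0 * y + rng.normal(0.0, 2.0, size=300)    # toy depth/contrast relation
T_so = rng.normal(440.0, 3.0, size=300)                # sound-area temperature
X = np.column_stack([
    T_so + contrast,                                   # Tmax_def
    T_so + 0.6 * contrast + rng.normal(0, 1, 300),     # Tavg_right
    T_so + 0.6 * contrast + rng.normal(0, 1, 300),     # Tavg_left
    T_so + 0.5 * contrast + rng.normal(0, 1, 300),     # Tavg_top
    T_so,                                              # Tavg_so
])

model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(16,), max_iter=3000,
                                    random_state=0))
model.fit(X, y)
print("training accuracy:", model.score(X, y))
```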

    Failure mode prediction and energy forecasting of PV plants to assist dynamic maintenance tasks by ANN based models

    In the field of renewable energy, reliability analysis techniques that combine the operating time of the system with observations of operational and environmental conditions are gaining importance over time. In this paper, reliability models are adapted to incorporate monitoring data on operating assets, as well as information on their environmental conditions, in their calculations. To that end, a logical decision tool based on two artificial neural network models is presented. This tool allows the reliability analysis of assets to be updated according to changes in operational and/or environmental conditions. The proposed tool could easily be automated within a supervisory control and data acquisition (SCADA) system, where reference values and the corresponding warnings and alarms could then be generated dynamically by the tool. Thanks to this capability, on-line diagnosis and/or prediction of potential asset degradation can certainly be improved. The reliability models in the tool are developed according to the amount of available failure data and are used for early detection of degradation in energy production due to functional failures of the power inverter and solar trackers. Another capability of the tool presented in the paper is to assess the economic risk associated with the system under existing conditions and over a given period of time. This information can then also be used to trigger preventive maintenance activities.
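
    As a rough illustration of how such a tool could sit inside a SCADA loop, the sketch below trains a neural network to predict the expected production for given ambient conditions and raises a warning or alarm when the measured output falls too far below that reference. The thresholds, features and single-network structure are assumptions made for the example; they do not reproduce the paper's two-model decision logic.

```python
# Sketch (assumption): an ANN predicts reference production from ambient
# conditions; deviations of measured output beyond thresholds raise alerts.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Synthetic history: [irradiance W/m^2, ambient temperature C] -> energy (kWh)
X_hist = np.column_stack([rng.uniform(100, 1000, 500), rng.uniform(5, 40, 500)])
y_hist = 0.8 * X_hist[:, 0] - 2.0 * (X_hist[:, 1] - 25).clip(min=0) \
         + rng.normal(0, 10, 500)

reference_model = make_pipeline(StandardScaler(),
                                MLPRegressor(hidden_layer_sizes=(32, 16),
                                             max_iter=5000, random_state=1))
reference_model.fit(X_hist, y_hist)

def check_production(conditions, measured_kwh,
                     warning_drop=0.10, alarm_drop=0.25):
    """Compare measured output with the ANN reference for the same conditions."""
    expected = float(reference_model.predict(np.atleast_2d(conditions))[0])
    drop = (expected - measured_kwh) / max(expected, 1e-9)
    if drop >= alarm_drop:
        return "ALARM", expected
    if drop >= warning_drop:
        return "WARNING", expected
    return "OK", expected

status, expected = check_production([850.0, 30.0], measured_kwh=520.0)
print(status, "- expected reference ~", round(expected, 1), "kWh")
```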

    Review and Comparison of Intelligent Optimization Modelling Techniques for Energy Forecasting and Condition-Based Maintenance in PV Plants

    Within the field of soft computing, intelligent optimization modelling techniques include various major techniques in artificial intelligence. These techniques aim to generate new business knowledge by transforming sets of "raw data" into business value. One of the principal applications of these techniques is the design of predictive analytics for the improvement of advanced condition-based maintenance (CBM) strategies and energy production forecasting. These advanced techniques can be used to transform control system data, operational data and maintenance event data into failure diagnostic and prognostic knowledge and, ultimately, to derive the expected energy generation. One area where these techniques can be applied with great potential impact is the legacy monitoring systems of solar PV energy generation plants. These systems produce a large amount of data over time, while at the same time demanding considerable effort to increase their performance through more accurate predictive analytics that reduce production losses, which have a direct impact on ROI. How to choose the most suitable techniques to apply is one of the problems to address. This paper presents a review and a comparative analysis of six intelligent optimization modelling techniques, applied to a PV plant case study with the energy production forecast as the decision variable. The proposed methodology not only aims to identify the most accurate solution but also validates the results by comparing the outputs of the different techniques.
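
    The abstract does not list the six techniques compared; purely as an illustration of the kind of comparison involved, the sketch below ranks a few common regressors on a synthetic production-forecasting target by cross-validated mean absolute error. The candidate models, features and metric are assumptions, not the paper's methodology or results.

```python
# Illustrative comparison (assumption): rank candidate models for PV energy
# forecasting by cross-validated mean absolute error on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(2)
X = np.column_stack([rng.uniform(100, 1000, 400),       # irradiance (W/m^2)
                     rng.uniform(5, 40, 400)])          # ambient temperature (C)
y = 0.8 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(0, 15, 400)  # synthetic production

candidates = {
    "linear": LinearRegression(),
    "random_forest": RandomForestRegressor(n_estimators=200, random_state=2),
    "svr": make_pipeline(StandardScaler(), SVR(C=10.0)),
    "mlp": make_pipeline(StandardScaler(),
                         MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000,
                                      random_state=2)),
}

for name, model in candidates.items():
    mae = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_absolute_error").mean()
    print(f"{name:14s} cross-validated MAE = {mae:.1f}")
```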

    Radial Basis Functions Network for Defect Sizing

    An important aspect of non-destructive testing is the interpretation and classification of signals obtained by NDT methods such as eddy current and ultrasound. These signals are typically complex, non-stationary waveforms, with signals corresponding to a particular class of defect in a specimen having similar form and shape. However, distortions and noise introduced by the measurement system make the manual classification of these signals a time-consuming and unreliable process, with the results affected by operator fatigue and measurement quality. The design of traditional classifiers for this task also poses many difficulties, due to the number of parameters that influence measurement and the limited understanding of the effect of these parameters on the signal. Recently, artificial neural networks have been applied to a variety of NDT problems, including signal classification, with encouraging results. Artificial neural networks consist of a dense interconnection of simple computational elements, whose interconnection strengths are determined using a predefined learning algorithm specific to the network. These networks do not require an explicit mathematical model of the data they have to process, and are robust even in the presence of noisy data and data generated by strongly non-linear processes [1]. An example of a neural network that has been used extensively in NDT applications is the multilayer perceptron. However, the error backpropagation algorithm used for training the multilayer perceptron has several disadvantages, such as long training times and susceptibility to local minima. This paper presents a novel approach to defect sizing that uses a radial basis function network. This network has the advantages of shorter training times and a parametric form that allows it to be optimized analytically. The application of such a network to the inversion of ultrasonic data to obtain flaw sizes is described. Results from the sizing of defects in aluminium blocks are presented.
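
    For readers unfamiliar with the architecture, the sketch below shows one common way to build such a network (an assumption about the general technique, not the authors' exact model): Gaussian hidden units whose centres come from k-means, with the linear output weights solved analytically by least squares, which is what keeps training fast compared with backpropagation.

```python
# Sketch of a radial basis function network (general technique, not the paper's
# exact model): Gaussian hidden units with k-means centres, linear output layer
# solved in closed form by least squares.
import numpy as np
from sklearn.cluster import KMeans

def rbf_design_matrix(X, centers, width):
    # Gaussian activation for every (sample, centre) pair
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_rbf(X, y, n_centers=10, width=1.0):
    centers = KMeans(n_clusters=n_centers, n_init=10,
                     random_state=0).fit(X).cluster_centers_
    Phi = rbf_design_matrix(X, centers, width)
    weights, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # analytic output weights
    return centers, weights

def predict_rbf(X, centers, weights, width=1.0):
    return rbf_design_matrix(X, centers, width) @ weights

# Toy example: map hypothetical ultrasonic echo features to a flaw size (mm)
rng = np.random.default_rng(3)
X = rng.normal(size=(120, 4))                  # placeholder echo features
y = 1.0 + X[:, 0] ** 2 + 0.5 * X[:, 1]         # placeholder flaw sizes
centers, weights = fit_rbf(X, y, n_centers=15, width=1.5)
rmse = np.sqrt(np.mean((predict_rbf(X, centers, weights, 1.5) - y) ** 2))
print("fit RMSE:", rmse)
```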

    Generalization from correlated sets of patterns in the perceptron

    Generalization is a central aspect of learning theory. Here, we propose a framework that explores an auxiliary, task-dependent notion of generalization and attempts to answer the following question quantitatively: given two sets of patterns with a given degree of dissimilarity, how easily will a network be able to "unify" their interpretation? This is quantified by the volume of the configurations of synaptic weights that classify the two sets in a similar manner. To show the applicability of our idea in a concrete setting, we compute this quantity for the perceptron, a simple binary classifier, using the classical statistical physics approach in the replica-symmetric ansatz. In this case, we show how an analytical expression measures the "distance-based capacity", the maximum load of patterns sustainable by the network at fixed dissimilarity between patterns and a fixed allowed number of errors. This curve indicates that generalization is possible at any distance, but with decreasing capacity. We propose that a distance-based definition of generalization may be useful in numerical experiments with real-world neural networks, and for exploring computationally sub-dominant sets of synaptic solutions.
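
    For orientation, the weight-space volume mentioned here belongs to the same family as the classical Gardner volume for a perceptron with N weights J classifying P patterns ξ^μ with labels σ^μ at stability margin κ; the expression below is the standard starting point of such replica calculations, and the paper's distance-based quantity can be read as a constrained variant of it (this framing is an interpretation, not a formula taken from the paper).

```latex
% Classical Gardner-type volume of compatible weights (standard form; the
% paper's distance-based quantity further constrains which weights count).
\[
  V \;=\; \int \mathrm{d}\mu(\mathbf{J})\,
  \prod_{\mu=1}^{P} \Theta\!\left( \frac{\sigma^{\mu}}{\sqrt{N}}
  \sum_{i=1}^{N} J_i \,\xi_i^{\mu} \;-\; \kappa \right),
  \qquad \alpha = \frac{P}{N}.
\]
```

    Here Θ is the Heaviside step function and α the storage load; the capacity is the largest load at which this volume typically remains non-vanishing.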

    Predicting Hospital Length of Stay in Intensive Care Unit

    In this thesis, we investigate the performance of a series of classification methods for the prediction of hospital Length of Stay (LoS) in the Intensive Care Unit (ICU). Predicting LoS for an inpatient is a challenging task but is essential for the operational success of a hospital. Since hospitals face severely limited resources, including beds to hold admitted patients, LoS prediction will assist hospital staff in better planning and management of hospital resources. The goal of this project is to create a machine learning model that predicts the length of stay for each patient at the time of admission. The MIMIC-III database has been used for this project because of the detailed information it contains about ICU stays. MIMIC is an openly available dataset developed by the MIT Lab for Computational Physiology, comprising de-identified health data associated with ~40,000 critical care patients at Beth Israel Deaconess Medical Center. It includes demographics, vital signs, laboratory tests, medications, and more. Different machine learning techniques/classifiers have been investigated in this thesis. We experimented with regression models as well as classification models with classes of varying granularity as the target for LoS prediction. It turned out that granular classes (in small units of days) work better than regression models that try to predict the exact duration in days and hours. The overall performance of our classifiers ranged from fair to very good and is discussed in the results. Secondly, we also experimented with building separate LoS prediction models for patients with different disease conditions and compared them to the joint model built for all patients.
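
    MIMIC-III itself cannot be redistributed, so the sketch below uses placeholder admission-time features; it only illustrates the design choice reported above, i.e. binning LoS into granular day classes and training a classifier instead of regressing exact durations. The bin edges, features and random-forest classifier are assumptions, not the thesis's final models or results.

```python
# Sketch (assumption): bin LoS into day-granularity classes and train a
# classifier rather than regressing the exact duration. Placeholder data only;
# real MIMIC-III admission features would replace X below.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)

# Hypothetical admission-time features: age, heart rate, number of diagnoses
X = np.column_stack([rng.integers(18, 90, 1000),
                     rng.normal(85, 15, 1000),
                     rng.integers(1, 12, 1000)])
los_days = np.clip(rng.gamma(shape=2.0, scale=2.0, size=1000), 0.2, 30)

# Granular classes, e.g. 0-1, 1-2, 2-3, 3-5, 5-10, >10 days
bins = [0, 1, 2, 3, 5, 10, np.inf]
y = np.digitize(los_days, bins) - 1

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=4)
clf = RandomForestClassifier(n_estimators=300, random_state=4).fit(X_tr, y_tr)
print("test accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```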
