26,468 research outputs found

    Pitting damage levels estimation for planetary gear sets based on model simulation and grey relational analysis

    Get PDF
    The planetary gearbox is a critical mechanism in helicopter transmission systems, and tooth failures in planetary gear sets pose a great risk to helicopter operations. This paper devises a gear pitting damage level estimation methodology that integrates a physical model for simulation signal generation, a three-step statistical algorithm for feature selection, and grey relational analysis for damage level estimation. The proposed method was first calibrated with fault-seeded test data and then validated with data from further tests on a planetary gear set. The estimation results coincide with the actual test records, demonstrating the effectiveness and accuracy of the method and offering model-based methods and feature selection and weighting methods a novel route to more accurate health monitoring and condition prediction.
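    The grey relational analysis step can be sketched in a few lines: each candidate damage level contributes a simulated feature sequence, and the level whose sequence is most grey-related to the measured features is chosen. This is a minimal pure-Python sketch; the function names, the distinguishing coefficient rho = 0.5, and the template dictionary are illustrative assumptions, not details from the paper.

```python
def grey_relational_grade(reference, comparison, rho=0.5):
    """Grey relational grade in (0, 1]; higher means the comparison
    sequence tracks the reference sequence more closely."""
    deltas = [abs(r - c) for r, c in zip(reference, comparison)]
    d_min, d_max = min(deltas), max(deltas)
    if d_max == 0:  # identical sequences: perfect relation
        return 1.0
    coeffs = [(d_min + rho * d_max) / (d + rho * d_max) for d in deltas]
    return sum(coeffs) / len(coeffs)

def estimate_damage_level(measured, simulated_templates):
    """Pick the damage level whose simulated feature sequence best
    matches the measured one (simulated_templates: level -> sequence)."""
    return max(simulated_templates,
               key=lambda lvl: grey_relational_grade(measured,
                                                     simulated_templates[lvl]))
```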

    Assessing the effects of power quality on partial discharge behaviour through machine learning

    Partial discharge (PD) is commonly used as an indicator of insulation health in high-voltage equipment, but research has indicated that power quality, particularly harmonics, can strongly influence the discharge behaviour and the corresponding pattern observed. Unacknowledged variation in the harmonics of the excitation voltage waveform can influence the insulation's degradation, leading to possible misinterpretation of diagnostic data and erroneous estimates of the insulation's ageing state, and thus to inappropriate asset management decisions. This paper reports on a suite of classifiers for identifying pertinent harmonic attributes from PD data and presents techniques for improving their accuracy. Aspects of PD field monitoring inform the design of a practical system for on-line monitoring of voltage harmonics, which yields a report on the harmonics experienced during the monitoring period.
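    As a rough illustration of how a classifier can map PD features to harmonic conditions, here is a minimal nearest-centroid sketch; the labels, two-dimensional feature layout, and Euclidean distance rule are assumptions for illustration, not the paper's actual classifier suite.

```python
import math

def train_centroids(samples):
    """samples: condition label -> list of PD feature vectors.
    Returns one mean feature vector (centroid) per condition."""
    centroids = {}
    for label, vecs in samples.items():
        n = len(vecs)
        centroids[label] = [sum(col) / n for col in zip(*vecs)]
    return centroids

def classify(features, centroids):
    """Assign the harmonic condition whose centroid is closest."""
    return min(centroids, key=lambda lbl: math.dist(features, centroids[lbl]))
```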

    Time-efficient fault detection and diagnosis system for analog circuits

    Time-efficient fault analysis and diagnosis of analog circuits are the most important prerequisites for online health monitoring of electronic equipment, which faces the continuing challenges of ultra-large-scale integration, component tolerances, and limited test points combined with multiple faults. This work reports an FPGA (field-programmable gate array)-based analog fault diagnostic system that applies two-dimensional information fusion, two-port network analysis, and interval mathematics. The proposed system has three advantages over traditional ones. First, it offers high processing speed and a compact circuit size, as the embedded algorithms execute in parallel on the FPGA. Second, the hardware structure is compatible with other diagnostic algorithms. Third, the equipped Ethernet interface enhances its flexibility for remote monitoring and control. Experimental results from two realistic example circuits indicate that the proposed methodology yields competitive performance in both diagnosis accuracy and time-effectiveness, achieving about 96% accuracy within 60 ms of computation time.
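    The interval-math ingredient can be illustrated with a toy tolerance check: a component is flagged when a measured parameter falls outside the interval implied by its nominal value and tolerance. This sketch is purely illustrative and far simpler than the paper's two-port network analysis; the component names and 5% tolerance are assumptions.

```python
def tolerance_interval(nominal, tol):
    """Interval of values consistent with a fault-free component."""
    return (nominal * (1 - tol), nominal * (1 + tol))

def diagnose(measurements, nominals, tol=0.05):
    """Return the names of components whose measured parameter
    leaves its tolerance interval."""
    faulty = []
    for name, measured in measurements.items():
        lo, hi = tolerance_interval(nominals[name], tol)
        if not (lo <= measured <= hi):
            faulty.append(name)
    return faulty
```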

    A Lightweight N-Cover Algorithm For Diagnostic Fail Data Minimization

    The increasing design complexity of modern ICs has made it extremely difficult and expensive to test them comprehensively. As the transistor count and density of circuits increase, a large volume of fail data is collected by the tester for a single failing IC. The diagnosis procedure analyzes this fail data to give valuable information about the possible defects that may have caused the circuit to fail. However, without any feedback from the diagnosis procedure, the tester may often collect fail data that is not useful for identifying the defects in the failing circuit. This not only consumes tester memory but also increases tester data logging time and diagnosis run time. In this work, we present an algorithm to minimize the amount of fail data used for high-quality diagnosis of failing ICs. The algorithm analyzes the outputs at which the tests failed and determines which failing tests can be eliminated from the fail data without compromising diagnosis accuracy. It is used as a preprocessing step between the tester data logs and the diagnosis procedure, and its performance was evaluated using fail data from industry-manufactured ICs. Experiments demonstrate that, on average, 43% of fail data was eliminated by our algorithm while maintaining an average diagnosis accuracy of 93%. With this reduction in fail data, diagnosis speed also increased by 46%.
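    An N-cover style reduction can be sketched as a greedy set-cover pass: keep failing tests until each failing output has been observed up to n times, then discard the rest. The data layout and the greedy choice rule below are assumptions for illustration, not the paper's exact algorithm.

```python
def n_cover(fail_data, n=2):
    """fail_data: test_id -> set of outputs at which that test failed.
    Greedily keep tests until every failing output is covered n times
    (or by all tests that observe it); return the kept test ids."""
    counts = {}
    for outputs in fail_data.values():
        for out in outputs:
            counts[out] = counts.get(out, 0) + 1
    remaining = {out: min(n, cnt) for out, cnt in counts.items()}
    kept, pool = [], dict(fail_data)
    while pool and any(v > 0 for v in remaining.values()):
        # Take the test that covers the most still-needed outputs.
        best = max(pool, key=lambda t: sum(1 for o in pool[t] if remaining[o] > 0))
        if sum(1 for o in pool[best] if remaining[o] > 0) == 0:
            break
        for o in pool[best]:
            if remaining[o] > 0:
                remaining[o] -= 1
        kept.append(best)
        del pool[best]
    return kept
```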

    What Causes My Test Alarm? Automatic Cause Analysis for Test Alarms in System and Integration Testing

    Full text link
    Driven by new software development processes and testing in clouds, system and integration testing nowadays tends to produce an enormous number of alarms. Such test alarms lay an almost unbearable burden on software testing engineers, who must manually analyze the causes of these alarms. The causes are critical because they decide which stakeholders are responsible for fixing the bugs detected during testing. In this paper, we present a novel approach that aims to relieve this burden by automating the procedure. Our approach, called Cause Analysis Model, exploits information retrieval techniques to efficiently infer test alarm causes from test logs. We have developed a prototype and evaluated our tool on two industrial datasets with more than 14,000 test alarms. Experiments on the two datasets show that our tool achieves an accuracy of 58.3% and 65.8%, respectively, outperforming the baseline algorithms by up to 13.3%. Our algorithm is also extremely efficient, spending about 0.1 s per cause analysis. Owing to these attractive experimental results, our industrial partner, a leading information and communication technology company, has deployed the tool; it achieves an average accuracy of 72% after two months of running, nearly three times more accurate than a previous strategy based on regular expressions.
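    An information-retrieval core of this kind can be sketched with TF-IDF vectors and cosine similarity over tokenized test logs, assigning a new alarm the cause of its most similar historical log. The smoothed IDF formula and the nearest-neighbour decision rule are illustrative assumptions, not necessarily what the Cause Analysis Model uses.

```python
import math
from collections import Counter

def tfidf(docs):
    """docs: list of token lists -> list of {term: tf-idf weight} dicts."""
    df = Counter(t for doc in docs for t in set(doc))
    n = len(docs)
    return [{t: cnt * math.log((1 + n) / (1 + df[t]))
             for t, cnt in Counter(doc).items()} for doc in docs]

def cosine(a, b):
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def infer_cause(new_log, history_logs, history_causes):
    """Label a new alarm with the cause of the most similar past log."""
    vecs = tfidf(history_logs + [new_log])
    query = vecs[-1]
    best = max(range(len(history_logs)), key=lambda i: cosine(query, vecs[i]))
    return history_causes[best]
```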

    Multivariate sample similarity measure for feature selection with a resemblance model

    Feature selection improves the classification performance of machine learning models. It also identifies the important features and eliminates those with little significance, and it reduces the dimensionality of the training and testing data. This study proposes a feature selection method based on a multivariate sample similarity measure, which selects the features that contribute most to a machine-learning model. The measure is evaluated on the University of California, Irvine heart disease dataset and compared with existing feature selection methods using metrics such as the minimum subset selected, accuracy, F1-score, and area under the curve (AUC). The results show that the proposed method identifies chest pain type, thallium scan results, and the number of major vessels seen on X-ray as key diagnostic features, distinguishing healthy subjects from heart disease patients with 99.6% accuracy.
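    A similarity-driven feature ranking can be illustrated by scoring each feature by how far apart the two classes sit relative to the pooled spread, then keeping the top-k features. This simple separability score is a stand-in for the paper's multivariate sample similarity measure, which is not specified in the abstract.

```python
import statistics

def separability(values_a, values_b):
    """Per-feature score: gap between class means over pooled spread."""
    spread = statistics.pstdev(values_a + values_b) or 1.0
    return abs(statistics.fmean(values_a) - statistics.fmean(values_b)) / spread

def select_features(X, y, k):
    """X: samples as equal-length feature lists; y: binary labels.
    Return the indices of the k most class-separating features."""
    scores = []
    for j in range(len(X[0])):
        a = [x[j] for x, lbl in zip(X, y) if lbl == 0]
        b = [x[j] for x, lbl in zip(X, y) if lbl == 1]
        scores.append((separability(a, b), j))
    return [j for _, j in sorted(scores, reverse=True)[:k]]
```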

    Structural health monitoring of offshore wind turbines: A review through the Statistical Pattern Recognition Paradigm

    Offshore wind has become the most profitable renewable energy source due to the remarkable development it has experienced in Europe over the last decade. In this paper, a review of Structural Health Monitoring Systems (SHMS) for offshore wind turbines (OWT) is carried out, treating the topic as a Statistical Pattern Recognition problem. Each stage of this paradigm is therefore reviewed with a focus on OWT applications. These stages are: Operational Evaluation; Data Acquisition, Normalization and Cleansing; Feature Extraction and Information Condensation; and Statistical Model Development. By optimizing each stage, SHMS can contribute to the development of efficient Condition-Based Maintenance strategies. Optimizing this strategy will help reduce the labor costs of OWT inspection, avoid unnecessary maintenance, identify design weaknesses before failure, and improve the availability of power production while preventing wind turbine overloading, thereby maximizing the return on investment. In the forthcoming years, a growing interest in SHM technologies for OWT is expected, further enhancing the potential of offshore wind farm deployments farther offshore. Increasing efficiency in operational management will contribute towards achieving the UK's 2020 and 2050 targets by ultimately reducing the Levelised Cost of Energy (LCOE).
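    The Statistical Model Development stage often reduces to novelty detection against a healthy baseline; a minimal control-chart-style sketch is shown below. The 3-sigma threshold is an illustrative convention, not a recommendation from the review.

```python
import statistics

def fit_baseline(healthy_features):
    """Learn the healthy state from features extracted under normal
    operation (Statistical Model Development, in its simplest form)."""
    return statistics.fmean(healthy_features), statistics.pstdev(healthy_features)

def damage_indicator(feature, baseline, threshold=3.0):
    """Flag a newly extracted feature that drifts beyond `threshold`
    standard deviations from the healthy baseline."""
    mean, std = baseline
    z = abs(feature - mean) / std if std else float("inf")
    return z > threshold
```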