
    Continuous maintenance and the future – Foundations and technological challenges

    High-value and long-life products require continuous maintenance throughout their life cycle to achieve the required performance at optimum through-life cost. This paper presents the foundations and technologies required to offer such a maintenance service. Component- and system-level degradation science, assessment and modelling, together with life cycle 'big data' analytics, are the two most important knowledge and skill bases required for continuous maintenance. Advanced computing and visualisation technologies will improve the efficiency of maintenance and reduce the through-life cost of the product. The future of continuous maintenance within the Industry 4.0 context is also discussed, identifying the role of IoT, standards and cyber security.

    Machine prognostics based on health state estimation using SVM

    The ability to accurately predict the remaining useful life of machine components is critical for continuous machine operation; it can also improve productivity and enhance system safety. In condition-based maintenance (CBM), effective diagnostics and prognostics are important aspects that give maintenance engineers sufficient time to schedule a repair and acquire replacement components before the components finally fail. All machine components have characteristic failure patterns and are subject to degradation processes in real environments. This paper describes a technique for accurate assessment of the remnant life of machines based on prior expert knowledge embedded in closed-loop prognostics systems. The technique uses Support Vector Machines (SVM) for fault classification and health evaluation across six stages of bearing degradation. To validate the feasibility of the proposed model, historical fault data from high-pressure liquefied natural gas (LNG) pumps were analysed to obtain their failure patterns. The results were very encouraging, and the predictions closely matched real life, particularly near the end of the bearings' life.
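
    As an aside for readers unfamiliar with the approach, the following minimal Python sketch shows how an SVM could classify bearing degradation stages from simple vibration features. The feature set (RMS, kurtosis, crest factor), the six-class labelling and the random placeholder data are illustrative assumptions, not the authors' actual pipeline or LNG pump data.

        # Hypothetical sketch: classifying bearing degradation stages with an SVM.
        # Features (RMS, kurtosis, crest factor) and the 6-stage labels are assumed,
        # not taken from the paper; the data below are random placeholders.
        import numpy as np
        from scipy.stats import kurtosis
        from sklearn.svm import SVC
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.model_selection import train_test_split

        def vibration_features(window):
            """Summarise one vibration window with a few standard indicators."""
            rms = np.sqrt(np.mean(window ** 2))
            crest = np.max(np.abs(window)) / (rms + 1e-12)
            return np.array([rms, kurtosis(window), crest])

        # Placeholder data: 600 windows of 2048 samples, labelled 0..5 (degradation stage).
        rng = np.random.default_rng(0)
        windows = rng.normal(size=(600, 2048))
        stages = rng.integers(0, 6, size=600)

        X = np.vstack([vibration_features(w) for w in windows])
        X_train, X_test, y_train, y_test = train_test_split(X, stages, test_size=0.3, random_state=0)

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
        clf.fit(X_train, y_train)
        print("stage-classification accuracy:", clf.score(X_test, y_test))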

    Machine Prognosis with Full Utilization of Truncated Lifetime Data

    Intelligent machine fault prognostics estimates how soon a failure will occur and how likely it is, with little human expert judgement. It minimizes production downtime, spares inventory and maintenance labour costs. Prognostic models, especially probabilistic methods, require numerous historical failure instances. In practice, however, industrial and military communities rarely allow their engineering assets to run to failure. It is only known that the machine component survived up to the time of repair or replacement; there is no information as to when the component would have failed if left undisturbed. Data of this sort are called truncated data. This paper proposes a novel model, the Intelligent Product Limit Estimator (iPLE), which utilizes truncated data to perform adaptive long-range prediction of a machine component's remaining lifetime. It takes advantage of statistical models' ability to provide a useful representation of survival probabilities, and of neural networks' ability to recognise nonlinear relationships between a machine component's future survival condition and a given series of prognostic data features. Progressive bearing degradation data were simulated and used to train and validate the proposed model. The results support our hypothesis that the iPLE can perform better than similar prognostic models that neglect truncated data.
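
    The "Product Limit" in iPLE refers to the Kaplan-Meier (product-limit) estimator, which handles exactly this kind of right-censored data. The minimal Python sketch below shows that statistical building block on made-up lifetimes and censoring flags; the neural-network part of the iPLE is not reproduced here.

        # Minimal sketch of the product-limit (Kaplan-Meier) survival estimate from
        # lifetimes where some units were removed from service before failing
        # (right-censored data). The hours and flags below are invented examples.
        import numpy as np

        def kaplan_meier(times, failed):
            """Return failure times and the product-limit survival estimate S(t).
            failed[i] is True if unit i failed at times[i], False if it was
            censored (repaired or replaced while still working)."""
            order = np.argsort(times)
            times, failed = times[order], failed[order]
            n_at_risk = len(times)
            event_times, surv, s = [], [], 1.0
            for t, f in zip(times, failed):
                if f:                       # only actual failures reduce S(t)
                    s *= 1.0 - 1.0 / n_at_risk
                    event_times.append(t)
                    surv.append(s)
                n_at_risk -= 1              # censored units just leave the risk set
            return np.array(event_times), np.array(surv)

        hours  = np.array([120.0, 340.0, 500.0, 650.0, 800.0, 950.0, 1100.0])
        failed = np.array([True, False, True, True, False, True, False])
        for t, s in zip(*kaplan_meier(hours, failed)):
            print(f"S({t:.0f} h) = {s:.2f}")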

    Security aspects in cloud based condition monitoring of machine tools

    In modern competitive environments, companies must have rapid production systems able to deliver parts that satisfy the highest quality standards. Companies also have an increased need for advanced machines equipped with the latest maintenance technologies to avoid any reduction or interruption of production. There is therefore a pressing need to monitor the health status of manufacturing equipment in real time and to develop diagnostic technologies for machine tools. This paper lays the foundation for the creation of a secure remote monitoring system for machine tools using a Cloud environment for communication between the customer and the maintenance service company. Cloud technology provides a convenient means of accessing maintenance data anywhere in the world through simple devices such as PCs, tablets or smartphones. In this context, the security aspects of a Cloud system for remote monitoring of machine tools become crucial and are thus the focus of this paper.

    A convolutional neural network aided physical model improvement for AC solenoid valves diagnosis

    This paper focuses on the development of a physics-based diagnostic tool for alternating current (AC) solenoid valves, which are categorized as critical components of many machines used in the process industry. Signal processing and machine learning based approaches have been proposed in the literature to diagnose the health state of solenoid valves; however, these approaches do not give a physical explanation of the failure modes. In this work, an approach capable of diagnosing failure modes while using a physically interpretable model is proposed. Feature attribution methods are applied to a convolutional neural network (CNN) trained on a large data set of current signals acquired from accelerated life tests of several AC solenoid valves. The results reveal important regions of interest in the current signals that guide the modelling of the main missing component of an existing physical model. Two model parameters, the shading ring and kinetic Coulomb forces, are then identified from current measurements along the lifetime of the valves. Consistent trends are found for both parameters, allowing the failure modes of the solenoid valves to be diagnosed. Future work will consist not only of diagnosing the failure modes, but also of predicting the remaining useful life.
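
    To make the attribution step more concrete, the hypothetical sketch below computes a gradient-based saliency map over a current waveform with a small 1-D CNN; the architecture, signal length, class count and random input are assumptions for illustration and do not reproduce the paper's network or attribution method.

        # Hypothetical sketch: gradient-based saliency on a 1-D CNN over a current
        # signal, highlighting which time steps most influence the predicted class.
        # The network, 3-class setup and random input are placeholder assumptions.
        import torch
        import torch.nn as nn

        class CurrentCNN(nn.Module):
            def __init__(self, n_classes=3):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
                    nn.MaxPool1d(4),
                    nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(),
                    nn.AdaptiveAvgPool1d(1),
                )
                self.classifier = nn.Linear(32, n_classes)

            def forward(self, x):
                return self.classifier(self.features(x).squeeze(-1))

        model = CurrentCNN().eval()
        signal = torch.randn(1, 1, 2000, requires_grad=True)   # one current waveform

        logits = model(signal)
        logits[0, logits.argmax()].backward()      # gradient of the winning class score
        saliency = signal.grad.abs().squeeze()     # |d score / d sample| per time step
        print("most influential sample index:", int(saliency.argmax()))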

    An evaluation of intrusive instrumental intelligibility metrics

    Instrumental intelligibility metrics are commonly used as an alternative to listening tests. This paper evaluates 12 monaural intrusive intelligibility metrics: SII, HEGP, CSII, HASPI, NCM, QSTI, STOI, ESTOI, MIKNN, SIMI, SIIB, and sEPSM^corr. In addition, this paper investigates the ability of intelligibility metrics to generalize to new types of distortions and analyzes why the top-performing metrics have high performance. The intelligibility data were obtained from 11 listening tests described in the literature. The stimuli included Dutch, Danish, and English speech that was distorted by additive noise, reverberation, competing talkers, pre-processing enhancement, and post-processing enhancement. SIIB and HASPI had the highest performance, achieving an average correlation with listening test scores of ρ = 0.92 and ρ = 0.89, respectively. The high performance of SIIB may, in part, be the result of SIIB's developers having access to all the intelligibility data considered in the evaluation. The results show that intelligibility metrics tend to perform poorly on data sets that were not used during their development. By modifying the original implementations of SIIB and STOI, the advantage of reducing statistical dependencies between input features is demonstrated. Additionally, the paper presents a new version of SIIB called SIIB^Gauss, which has similar performance to SIIB and HASPI but takes two orders of magnitude less time to compute. Comment: Published in IEEE/ACM Transactions on Audio, Speech, and Language Processing, 201
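
    For readers who want to reproduce the style of evaluation, the short sketch below correlates a metric's scores with listening-test scores using Pearson's ρ; the numbers are invented placeholders, not results for SIIB, HASPI or any data set used in the paper.

        # Illustrative only: correlating an instrumental metric with listening-test
        # intelligibility. Both score vectors are made-up placeholders.
        import numpy as np
        from scipy.stats import pearsonr

        metric_scores    = np.array([0.21, 0.35, 0.48, 0.55, 0.67, 0.80, 0.91])
        listening_scores = np.array([0.10, 0.28, 0.45, 0.60, 0.71, 0.85, 0.97])  # fraction of words correct

        rho, _ = pearsonr(metric_scores, listening_scores)
        print(f"Pearson correlation with listening test: rho = {rho:.2f}")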

    MLPerf Inference Benchmark

    Machine-learning (ML) hardware and software system demand is burgeoning. Driven by ML applications, the number of different ML inference systems has exploded. Over 100 organizations are building ML inference chips, and the systems that incorporate existing models span at least three orders of magnitude in power consumption and five orders of magnitude in performance; they range from embedded devices to data-center solutions. Fueling the hardware are a dozen or more software frameworks and libraries. The myriad combinations of ML hardware and ML software make assessing ML-system performance in an architecture-neutral, representative, and reproducible manner challenging. There is a clear need for industry-wide standard ML benchmarking and evaluation criteria. MLPerf Inference answers that call. In this paper, we present our benchmarking method for evaluating ML inference systems. Driven by more than 30 organizations as well as more than 200 ML engineers and practitioners, MLPerf prescribes a set of rules and best practices to ensure comparability across systems with wildly differing architectures. The first call for submissions garnered more than 600 reproducible inference-performance measurements from 14 organizations, representing over 30 systems that showcase a wide range of capabilities. The submissions attest to the benchmark's flexibility and adaptability. Comment: ISCA 202
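
    To illustrate the kind of measurement involved (though not MLPerf's actual LoadGen rules or scenarios), the sketch below times a placeholder inference function and reports throughput and latency statistics; run_inference, the sample list and the percentile choice are assumptions for illustration.

        # Illustrative sketch only: a tiny timing harness for an arbitrary inference
        # function. MLPerf Inference uses a far more rigorous load generator and rule
        # set; run_inference here is just a stand-in that sleeps for 2 ms.
        import time
        import statistics

        def run_inference(sample):
            time.sleep(0.002)          # stand-in for a real model call
            return sample

        def benchmark(fn, samples, warmup=10):
            for s in samples[:warmup]:             # warm-up runs, excluded from timing
                fn(s)
            latencies = []
            start = time.perf_counter()
            for s in samples:
                t0 = time.perf_counter()
                fn(s)
                latencies.append(time.perf_counter() - t0)
            total = time.perf_counter() - start
            latencies.sort()
            p90 = latencies[int(0.9 * len(latencies)) - 1]
            return {
                "throughput_qps": len(samples) / total,
                "mean_latency_ms": 1000 * statistics.mean(latencies),
                "p90_latency_ms": 1000 * p90,
            }

        print(benchmark(run_inference, list(range(200))))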