
    On cost-effective reuse of components in the design of complex reconfigurable systems

    Design strategies that reuse system components can reduce costs while maintaining or increasing dependability (we use the term dependability to tie together reliability and availability). D3H2 (aDaptive Dependable Design for systems with Homogeneous and Heterogeneous redundancies) is a methodology that supports the design of complex systems with a focus on reconfiguration and component reuse. D3H2 systematizes the identification of heterogeneous redundancies and optimizes the design of fault detection and reconfiguration mechanisms by enabling the analysis of design alternatives with respect to dependability and cost. In this paper, we extend D3H2 for application to repairable systems. The method gains analysis capabilities that allow dependability assessment of complex reconfigurable systems, covering scenarios with time-dependencies between failure events and the corresponding reconfiguration actions. Through application to a realistic railway case study, we demonstrate how D3H2 can support decisions about fault detection and reconfiguration that improve dependability while reducing costs.
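The dependability-versus-cost comparison that D3H2 automates can be sketched with a toy availability model. This is our own illustration, not the paper's method; the component availabilities and costs are invented:

```python
def parallel_availability(avails):
    """Availability of a redundant group: it fails only if every replica fails."""
    unavail = 1.0
    for a in avails:
        unavail *= (1.0 - a)
    return 1.0 - unavail

def evaluate(design):
    """Return (system availability, total cost) for a series of redundant groups."""
    avail, cost = 1.0, 0.0
    for group in design:
        avail *= parallel_availability([c["availability"] for c in group])
        cost += sum(c["cost"] for c in group)
    return avail, cost

# Two alternatives for one subsystem: add a dedicated spare (homogeneous
# redundancy) vs. reuse an existing component as backup (heterogeneous
# redundancy: slightly less available, but no extra hardware cost).
homogeneous   = [[{"availability": 0.99, "cost": 100.0},
                  {"availability": 0.99, "cost": 100.0}]]
heterogeneous = [[{"availability": 0.99, "cost": 100.0},
                  {"availability": 0.95, "cost": 0.0}]]

a_hom, c_hom = evaluate(homogeneous)
a_het, c_het = evaluate(heterogeneous)
```

Under these invented figures the heterogeneous design loses little availability (0.9995 vs. 0.9999) while halving hardware cost, which is the kind of trade-off the methodology exposes.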

    Reliable diagnostics using wireless sensor networks

    Monitoring activities in industry may require the use of wireless sensor networks, for instance because of difficult access or hostile environments. It is well known, however, that this type of network has various limitations, such as the limited energy available to each node. Once a sensor node exhausts its resources, it is dropped from the network and stops forwarding potentially relevant information towards the sink. The resulting broken links and data loss degrade the diagnostic accuracy at the sink. It is therefore important to keep the network's monitoring service running as long as possible by preserving the energy held by the nodes. Because packet transfer consumes more energy than any other activity in the network, various topologies are usually implemented in wireless sensor networks to increase the network lifetime. In this paper, we show that it is harder to perform a good diagnostic when data are gathered by a wireless sensor network instead of a wired one, due to broken links and data loss on the one hand, and to the deployed network topologies on the other. Three strategies for reducing packet transfers are considered: (1) sensor nodes send their data directly to the sink, (2) nodes are grouped into clusters, and each cluster head sends its cluster's average directly to the sink, and (3) averaged data are forwarded from cluster head to cluster head in a hop-by-hop mode, leading to an avalanche of averages. Their impact on diagnostic accuracy is then evaluated. We show that random forests are relevant for diagnostics when data are aggregated through the network and when sensors stop transmitting once their batteries are empty.
    This relevance is discussed qualitatively and evaluated numerically by comparing random forest performance to state-of-the-art PHM approaches, namely basic bagging of decision trees, support vector machines, multinomial naive Bayes, AdaBoost, and gradient boosting. Finally, a way to couple the two best methods, random forests and gradient boosting, is proposed: the best hyperparameters of the former are found by using the latter.
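The three packet-reduction strategies can be illustrated on synthetic readings. This sketch is ours, not the paper's code; the node values, cluster layout, and relay path are all invented:

```python
from statistics import mean

readings = {  # node id -> sensed value (synthetic)
    "n1": 10.0, "n2": 12.0, "n3": 11.0,   # cluster A
    "n4": 20.0, "n5": 22.0, "n6": 21.0,   # cluster B
}
clusters = {"A": ["n1", "n2", "n3"], "B": ["n4", "n5", "n6"]}

# Strategy 1: every node sends its raw value straight to the sink
# (most packets and energy, but the sink sees everything).
direct = list(readings.values())

# Strategy 2: each cluster head sends only its cluster's average to the sink.
cluster_avgs = {c: mean(readings[n] for n in nodes) for c, nodes in clusters.items()}

# Strategy 3: cluster heads relay hop by hop toward the sink, averaging at
# each hop, so the sink receives an average of averages.
relayed = cluster_avgs["B"]          # B is the farthest head
for head in ["A"]:                   # relay path toward the sink
    relayed = mean([cluster_avgs[head], relayed])
```

Each step down the list trades diagnostic detail for fewer transmissions: the sink goes from six raw values, to two cluster averages, to a single twice-averaged value.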

    Instrumentation, Control, and Intelligent Systems


    Pragmatic Evaluation of Health Monitoring & Analysis Models from an Empirical Perspective

    Designing health monitoring and analysis models requires implementing and deploying several linked modules that can perform real-time analysis and recommendation on patient datasets. These datasets include, but are not limited to, blood test results, computed tomography (CT) scans, MRI scans, PET scans, and other imaging tests. They are processed with a combination of signal processing and image processing methods: data collection, pre-processing, feature extraction and selection, classification, and context-specific post-processing. Researchers have put forward a variety of machine learning (ML) and deep learning (DL) techniques to carry out these tasks and achieve high-accuracy categorization of these datasets. However, these models differ in their internal operational features and in their quantitative and qualitative performance indicators, and they demonstrate various functional subtleties, contextual benefits, application-specific constraints, and deployment-specific future research directions. Because performance varies so widely, it is difficult for researchers to pinpoint models that perform well for their application-specific use cases. To reduce this uncertainty, this paper reviews several health monitoring and analysis models in terms of their internal operational features and performance measurements, so that readers can recognise models appropriate for their application-specific use cases. Convolutional Neural Networks (CNNs), Masked Region CNN (MRCNN), Recurrent NNs (RNNs), Q-Learning, and reinforcement learning models were shown to have greater analytical performance than other models, making them suitable for clinical use cases; however, their increased complexity and higher implementation costs lead to worse scaling performance.
    To analyse such scenarios, this paper compares the evaluated models in terms of accuracy, computational latency, deployment complexity, scalability, and deployment cost, helping users choose the best models for their performance-specific use cases. To make model selection even easier for real-time scenarios, the paper also reviews a new Health Monitoring Metric (HMM), which integrates many performance indicators to identify the best-performing models under various real-time patient settings.
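The abstract does not give the HMM formula, so the following is only a hypothetical sketch of such a composite metric: a weighted sum of normalized indicators in which latency, complexity, and cost count against a model. All weights and model scores here are invented:

```python
def hmm_score(model, weights):
    """Weighted composite: accuracy and scalability help, the rest hurt."""
    return (weights["accuracy"]    * model["accuracy"]
          + weights["scalability"] * model["scalability"]
          - weights["latency"]     * model["latency"]
          - weights["complexity"]  * model["complexity"]
          - weights["cost"]        * model["cost"])

# All indicators normalized to [0, 1]; weights and scores are illustrative.
weights = {"accuracy": 0.4, "scalability": 0.2,
           "latency": 0.15, "complexity": 0.15, "cost": 0.1}
models = {
    "CNN": {"accuracy": 0.95, "scalability": 0.50, "latency": 0.60, "complexity": 0.70, "cost": 0.60},
    "RNN": {"accuracy": 0.92, "scalability": 0.55, "latency": 0.70, "complexity": 0.65, "cost": 0.55},
    "SVM": {"accuracy": 0.85, "scalability": 0.80, "latency": 0.30, "complexity": 0.30, "cost": 0.30},
}
best = max(models, key=lambda name: hmm_score(models[name], weights))
```

With these invented numbers the lighter SVM outranks the more accurate CNN, showing how a composite metric can flip a ranking based on accuracy alone once deployment cost and scalability are weighed in.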

    A Literature Review of Fault Diagnosis Based on Ensemble Learning

    The accuracy of fault diagnosis is an important indicator of the reliability of key equipment systems. Ensemble learning combines different weak learners into a stronger learner and has achieved remarkable results in the field of fault diagnosis. This paper reviews recent research on ensemble learning from both technical and field-application perspectives. It surveys 209 papers from 87 journals indexed in the Web of Science and other academic resources, covering 78 different ensemble learning based fault diagnosis methods, 18 public datasets, and more than 20 different equipment systems. In detail, the paper summarizes the accuracy rates, fault classification types, fault datasets, data signals used, learners (traditional machine learning or deep learning-based), and ensemble methods (bagging, boosting, stacking, and other ensemble models) of these fault diagnosis models. Diagnostic accuracy is used as the main evaluation metric, supplemented by generalization and the ability to handle imbalanced data, to evaluate the performance of these ensemble learning methods. The discussion and evaluation of these methods provide valuable references for identifying and developing appropriate intelligent fault diagnosis models for various equipment. The paper also discusses the technical challenges, lessons learned from the review, and future development directions in the field of ensemble learning based fault diagnosis and intelligent maintenance.
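As a concrete illustration of the bagging family of ensemble methods the review covers, here is a minimal self-contained sketch: weak threshold "stumps" trained on bootstrap resamples of synthetic vibration data and combined by majority vote. The data and the learner are invented for illustration; real fault-diagnosis work uses trees, SVMs, or neural networks as base learners:

```python
import random
from collections import Counter

def fit_stump(data):
    """Pick the threshold on the single feature that best separates the classes."""
    best_stump, best_correct = None, 0
    for thr in sorted(set(x[0] for x, _ in data)):
        agree = sum((x[0] > thr) == y for x, y in data)
        correct = max(agree, len(data) - agree)      # allow flipped polarity
        if correct > best_correct:
            best_stump = (thr, agree >= len(data) - agree)
            best_correct = correct
    return best_stump

def stump_predict(stump, x):
    thr, sign = stump
    return (x[0] > thr) == sign

def bagged_fit(data, n_learners=15, seed=0):
    """Bagging: train each stump on a bootstrap resample of the data."""
    rng = random.Random(seed)
    return [fit_stump([rng.choice(data) for _ in data]) for _ in range(n_learners)]

def bagged_predict(ensemble, x):
    """Majority vote over the ensemble."""
    return Counter(stump_predict(s, x) for s in ensemble).most_common(1)[0][0]

# Synthetic "vibration amplitude" data: faulty samples (True) read higher.
data = [((a,), a > 5.0) for a in [1.0, 2.0, 3.0, 4.0, 6.0, 7.0, 8.0, 9.0]]
ensemble = bagged_fit(data)
```

Boosting and stacking differ in how the base learners are trained and combined (reweighting hard examples, or learning a meta-model over base predictions), but follow the same weak-learners-to-strong-learner principle.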

    Post-prognostics decision making in distributed MEMS-based systems

    This paper addresses the problem of using prognostics information of Micro-Electro-Mechanical Systems (MEMS) for post-prognostics decision making in distributed MEMS-based systems. A post-prognostics decision strategy is proposed and then implemented in a distributed MEMS-based conveying surface, designed to convey fragile and tiny micro-objects. The purpose is to use the prognostics results of the MEMS in use, in the form of Remaining Useful Life (RUL) estimates, to maintain good performance of the conveying surface for as long as possible. To that end, a distributed algorithm for decision making in dynamic conditions is proposed, together with a simulator for the decision process in the targeted system. Simulation results show the importance of the post-prognostics decision in optimizing the utilization of the system and improving its performance.
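The abstract does not detail the decision algorithm, but the core idea of spending RUL evenly across redundant units can be sketched with a hypothetical greedy policy (the RUL values and wear model are invented):

```python
def assign_tasks(ruls, n_needed):
    """Activate the n_needed actuator groups with the largest remaining RUL."""
    return sorted(ruls, key=ruls.get, reverse=True)[:n_needed]

def step(ruls, n_needed, wear=1.0):
    """One conveying step: each active actuator group loses `wear` units of RUL."""
    for unit in assign_tasks(ruls, n_needed):
        ruls[unit] -= wear

# Four actuator groups with invented RUL estimates (arbitrary units).
ruls = {"m1": 10.0, "m2": 7.0, "m3": 9.0, "m4": 4.0}
for _ in range(5):          # five conveying tasks, two active groups each
    step(ruls, n_needed=2)
```

Always drawing on the units with the most life left levels out the remaining RULs over time, so no single actuator group is worn out early and the surface keeps its full redundancy longer.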

    Condition-based maintenance implementation: A literature review

    Industrial companies are increasingly dependent on the availability and performance of their equipment to remain competitive. This circumstance demands accurate and timely maintenance actions in alignment with the organizational objectives. Condition-Based Maintenance (CBM) is a strategy that considers information about the equipment condition to recommend appropriate maintenance actions. The main purpose of CBM is to prevent functional failures or a significant performance decrease of the monitored equipment. CBM relies on a wide range of resources and techniques required to detect deviations from the normal operating conditions, diagnose incipient failures or predict the future condition of an asset. To obtain meaningful information for maintenance decision making, relevant data must be collected and properly analyzed. Recent advances in Big Data analytics and Internet of Things (IoT) enable real-time decision making based on abundant data acquired from several different sources. However, each appliance must be designed according to the equipment configuration and considering the nature of specific failure modes. CBM implementation is a complex matter, regardless of the equipment characteristics. Therefore, to ensure cost-effectiveness, it must be addressed in a systematic and organized manner, considering the technical and financial issues involved. This paper presents a literature review on approaches to support CBM implementation. Published studies and standards that provide guidelines to implement CBM are analyzed and compared. For each existing approach, the steps recommended to implement CBM are listed and the main gaps are identified. 
    Based on the literature, factors that can affect the effective implementation of CBM are also highlighted and discussed. This work is supported by: European Structural and Investment Funds in the FEDER component, through the Operational Competitiveness and Internationalization Programme (COMPETE 2020) [Project nº 39479; Funding Reference: POCI-01-0247-FEDER-39479].
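The detection step that CBM builds on can be sketched as a simple control-band check: flag a deviation when a monitored signal leaves the normal operating band learned from a healthy baseline. The thresholds and data here are illustrative, not taken from any reviewed approach:

```python
from statistics import mean, stdev

def normal_band(baseline, k=3.0):
    """Normal operating band: mean +/- k standard deviations of healthy data."""
    m, s = mean(baseline), stdev(baseline)
    return m - k * s, m + k * s

def detect(readings, band):
    """Indices of readings that deviate from the normal operating band."""
    lo, hi = band
    return [i for i, v in enumerate(readings) if not lo <= v <= hi]

baseline = [10.0, 10.2, 9.8, 10.1, 9.9, 10.0, 10.1, 9.9]   # healthy history
band = normal_band(baseline)
alarms = detect([10.0, 10.1, 13.5, 9.9], band)
```

In a real CBM deployment, this threshold would be tailored to the equipment configuration and its specific failure modes, and detection would feed diagnosis and prognosis stages rather than raising a raw alarm.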