5 research outputs found

    Bearing Fault Diagnosis with a Feature Fusion Method Based on an Ensemble Convolutional Neural Network and Deep Neural Network

    No full text
    Rolling bearings are core components of rotating machinery. Their health directly affects the performance, stability and life of rotating machinery. To prevent possible damage, it is necessary to monitor the condition of rolling bearings for fault diagnosis. With the rapid development of intelligent fault diagnosis technology, various deep learning methods have been applied to fault diagnosis in recent years. Convolutional neural networks (CNNs) have shown high performance in feature extraction. However, the pooling operation of a CNN can discard much valuable information, and the relationship between the whole and its parts may be ignored. In this study, we proposed CNNEPDNN, a novel bearing fault diagnosis model based on an ensemble of a deep neural network (DNN) and a CNN. We first trained the CNNEPDNN model, with each of its local networks trained on a different training dataset. The CNN used vibration sensor signals as input, whereas the DNN used nine time-domain statistical features derived from the bearing vibration signals as input. Each local network of CNNEPDNN extracted different features from its own training dataset, so we fused features with different discriminative power for fault recognition. CNNEPDNN was tested under 10 fault conditions using the bearing data from the Bearing Data Center of Case Western Reserve University (CWRU). To evaluate the proposed model, four aspects were analyzed: convergence speed of the training loss function, test accuracy, F-score, and the feature clustering result from t-distributed stochastic neighbor embedding (t-SNE) visualization. The training loss of the proposed model converged more quickly than that of the local models under different loads, its test accuracy was better than that of the CNN, DNN and BPNN models, its F-score was higher than that of the CNN model, and its feature clustering effect was better than that of the CNN model.
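
    A minimal PyTorch sketch of the two-branch feature-fusion idea described in this abstract is shown below. It is not the authors' exact CNNEPDNN architecture: the layer sizes, the 1024-sample signal window and the fusion head are assumptions for illustration only.

    ```python
    # Hedged sketch: a CNN branch over raw vibration windows and a DNN branch over
    # nine time-domain statistics, fused by concatenation for 10-class fault diagnosis.
    # Layer sizes and the 1024-sample window length are assumptions, not the paper's values.
    import torch
    import torch.nn as nn

    class CNNBranch(nn.Module):
        """1-D CNN over raw vibration windows of shape (batch, 1, 1024)."""
        def __init__(self):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(1, 16, kernel_size=64, stride=8, padding=28), nn.ReLU(),
                nn.MaxPool1d(2),
                nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool1d(8),
            )

        def forward(self, x):
            return self.features(x).flatten(1)      # (batch, 256)

    class DNNBranch(nn.Module):
        """MLP over the nine time-domain statistical features (RMS, kurtosis, ...)."""
        def __init__(self):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(9, 64), nn.ReLU(),
                nn.Linear(64, 32), nn.ReLU(),
            )

        def forward(self, s):
            return self.mlp(s)                      # (batch, 32)

    class FusionClassifier(nn.Module):
        """Concatenates both feature vectors and classifies the 10 fault conditions."""
        def __init__(self, num_classes=10):
            super().__init__()
            self.cnn, self.dnn = CNNBranch(), DNNBranch()
            self.head = nn.Linear(256 + 32, num_classes)

        def forward(self, x, s):
            fused = torch.cat([self.cnn(x), self.dnn(s)], dim=1)
            return self.head(fused)

    model = FusionClassifier()
    logits = model(torch.randn(4, 1, 1024), torch.randn(4, 9))   # -> (4, 10)
    ```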

    2D LiDAR and depth camera: analysis and data fusion for object differentiation

    Get PDF
    The aim of this project is to offer an artificial vision system for the detection and identification of objects using a lidar sensor and a depth camera, based on the fusion of data from both sensors within the ROS programming environment. Prior to the development of the practical part, a theoretical framework is presented that introduces artificial vision and surveys the sensors most commonly used in this field, focusing on lidar, depth cameras, radar and the most common combinations of these three. This is followed by an introduction to sensor fusion and its classifications, methods and algorithms. Furthermore, the technologies used are analysed in depth: the technical characteristics and operation of the "RPLIDAR A1M8" lidar and the "Intel RealSense Depth Camera D435" are described, as are the general concepts, specific tools and pre-designed packages of the "ROS" software, such as "YOLO" or "obstacle-detector-fusion". After carrying out three different trials and addressing the problems that arose during their development, it is concluded that the optimal method to achieve the proposed objectives consists of adapting and combining a number of pre-designed packages. The final result faithfully meets expectations, although it shows weaknesses in accuracy and speed that are explained by the limitations of the available equipment, especially in the YOLO detection stage.
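
    A minimal ROS (rospy) sketch of the synchronisation step that such a fusion pipeline relies on is given below: each 2D laser scan is paired with the closest-in-time depth image so that detections (e.g. from YOLO) and scan points can be associated per object. The topic names and the 50 ms time slop are assumptions, not the exact configuration used in this project.

    ```python
    #!/usr/bin/env python
    # Hedged sketch: approximate time synchronisation of the RPLIDAR A1M8 scan and the
    # RealSense D435 depth stream. Topic names and the slop value are assumptions.
    import rospy
    import message_filters
    from sensor_msgs.msg import LaserScan, Image

    def fused_callback(scan, depth_image):
        # Both messages arrive with timestamps within the allowed slop; here one would
        # project detected bounding boxes into the scan frame and compare ranges
        # to differentiate objects.
        rospy.loginfo("scan stamp %.3f / depth stamp %.3f",
                      scan.header.stamp.to_sec(), depth_image.header.stamp.to_sec())

    def main():
        rospy.init_node("lidar_depth_fusion")
        scan_sub = message_filters.Subscriber("/scan", LaserScan)
        depth_sub = message_filters.Subscriber("/camera/depth/image_rect_raw", Image)
        sync = message_filters.ApproximateTimeSynchronizer(
            [scan_sub, depth_sub], queue_size=10, slop=0.05)
        sync.registerCallback(fused_callback)
        rospy.spin()

    if __name__ == "__main__":
        main()
    ```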

    Artificial Intelligence Supported EV Electric Powertrain for Safety Improvement

    Get PDF
    As an environmentally friendly transport option, electric vehicles (EVs) are characterized by low fossil-fuel consumption and low pollutant emissions. As the market share of EVs grows, the safety and reliability of the powertrain system is directly related to the safety of human life. Reliability problems of EV powertrains may occur in any power electronic (PE) component or mechanical part, and may be either sudden or cumulative. Faults in different locations and of different severities continuously threaten the lives of drivers and pedestrians, with potentially irreparable consequences. Therefore, monitoring and predicting the real-time health status of the EV powertrain is a high-priority, arduous and challenging task. The purpose of this study is to develop effective AI-supported safety improvement techniques for EV powertrains. First, a literature review is carried out to illustrate up-to-date AI applications for condition monitoring and fault detection of EV powertrains, in which recent case studies of conventional and AI-based methods in EV applications are compared and analysed. On this basis, the study then focuses on the theories and techniques concerning this topic in order to tackle the different challenges encountered in practical applications. In detail, first, to diagnose the bearing system in the early fault period, a novel inferable deep distilled attention network is designed to detect multiple bearing faults. Second, a deep learning and simulation-driven approach that combines a domain-adversarial neural network with a lumped-parameter thermal network (LPTN) is proposed to estimate the permanent magnet temperature of the IPMSM. Finally, to ensure the safe use of IGBT modules, a deep learning-based enhancement of the IGBT module double pulse test (DPT) efficiency is proposed and achieved via multimodal fusion networks and graph convolution networks.
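
    A minimal PyTorch sketch of the gradient-reversal mechanism at the core of a domain-adversarial neural network (DANN), the kind of component the temperature-estimation approach above builds on, is shown below. The toy feature extractor, temperature regressor and domain classifier are assumptions for illustration and do not reproduce the study's models.

    ```python
    # Hedged sketch: gradient-reversal layer plus a tiny DANN-style regressor that
    # predicts magnet temperature while a domain head (simulation vs. measurement)
    # is trained adversarially. All layer sizes are illustrative assumptions.
    import torch
    import torch.nn as nn

    class GradReverse(torch.autograd.Function):
        """Identity in the forward pass; multiplies the gradient by -lambda backward."""
        @staticmethod
        def forward(ctx, x, lamb):
            ctx.lamb = lamb
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lamb * grad_output, None

    class DANNRegressor(nn.Module):
        def __init__(self, in_dim=16, lamb=1.0):
            super().__init__()
            self.lamb = lamb
            self.feature = nn.Sequential(nn.Linear(in_dim, 32), nn.ReLU())
            self.temp_head = nn.Linear(32, 1)     # magnet temperature estimate
            self.domain_head = nn.Linear(32, 2)   # source (simulation) vs. target (measurement)

        def forward(self, x):
            f = self.feature(x)
            temp = self.temp_head(f)
            dom = self.domain_head(GradReverse.apply(f, self.lamb))
            return temp, dom

    temp, dom = DANNRegressor()(torch.randn(8, 16))   # shapes: (8, 1), (8, 2)
    ```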