469 research outputs found

    A Novel Data Augmentation Convolutional Neural Network for Detecting Malaria Parasite in Blood Smear Images

    Get PDF
    Malaria is a potentially fatal disease caused by the Plasmodium parasite. Identifying Plasmodium parasites in blood smear images can help diagnose malaria rapidly and precisely. According to the World Health Organization (WHO), there were 241 million malaria cases and 627,000 malaria deaths worldwide in 2020, with 95% of the cases and 96% of the deaths occurring in Africa; children under five years of age accounted for an estimated 80% of all malaria deaths in the region. To address the menace of malaria, this paper proposes a novel deep learning model, called a data augmentation convolutional neural network (DACNN), trained by reinforcement learning to tackle this problem. The performance of the proposed DACNN model is compared with CNN and directed acyclic graph convolutional neural network (DAGCNN) models. Results show that DACNN outperforms previous studies in image processing and classification, achieving 94.79% classification accuracy on a balanced-class dataset of malaria blood sample images obtained from Kaggle. The proposed model can serve as an effective tool for detecting malaria parasites in blood smear images.
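
    A minimal sketch of the general idea (not the authors' DACNN): a small convolutional classifier for parasitized/uninfected blood smear crops, fed through a conventional data augmentation pipeline. The abstract does not specify the DACNN layers or the reinforcement-learning-driven augmentation policy, so the layer sizes and transforms below are assumptions for illustration only.

    # Illustrative only: standard augmentation + small CNN for binary smear classification.
    import torch
    import torch.nn as nn
    from torchvision import transforms

    augment = transforms.Compose([                 # assumed, conventional augmentation pipeline
        transforms.RandomRotation(20),
        transforms.RandomHorizontalFlip(),
        transforms.ColorJitter(brightness=0.2, contrast=0.2),
        transforms.ToTensor(),
    ])

    class SmallCNN(nn.Module):
        def __init__(self, num_classes: int = 2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            )
            self.classifier = nn.Linear(128, num_classes)

        def forward(self, x):
            return self.classifier(self.features(x).flatten(1))

    model = SmallCNN()
    logits = model(torch.randn(4, 3, 128, 128))    # a batch of 4 augmented 128x128 crops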

    A Dual-Stream architecture based on Neural Turing Machine and Attention for the Remaining Useful Life Estimation problem

    Get PDF
    Reliably estimating the Remaining Useful Life (RUL) of a mechanical component is a fundamental task in the field of Prognostics and Health Management (PHM). In recent years, the greater availability of high-quality sensors and the ease of data gathering have given rise to data-driven deep learning models for this task, which has recently seen the introduction of “dual-stream” architectures. In this paper we propose a dual-stream architecture that addresses the RUL estimation problem through the exploitation of a Neural Turing Machine (NTM) and a Multi-Head Attention (MHA) mechanism. The NTM is a content-based memory addressing system that gives each of the streams the ability to access and interact with the memory, and it acts as a fusion technique. The MHA is an attention mechanism added as a means for our architecture to identify the relations between different sensor data and thereby reveal hidden patterns among them. To evaluate the performance of our model, we consider the C-MAPSS dataset, a benchmark dataset published by NASA consisting of several time series related to the life of turbofan engines. We show that our approach achieves the best prediction score (which measures the safety of the predictions) in the available literature on two of the C-MAPSS subdatasets.
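
    A minimal sketch of the attention side of such a model, under stated assumptions: multi-head self-attention over a window of multivariate sensor readings followed by a regression head that outputs a scalar RUL. The Neural Turing Machine memory and the exact dual-stream fusion are omitted, and the dimensions (14 sensors, 30-cycle windows) are only typical C-MAPSS-style assumptions.

    # Sketch, not the authors' architecture: self-attention over sensor windows -> RUL.
    import torch
    import torch.nn as nn

    class AttentionRUL(nn.Module):
        def __init__(self, n_sensors: int = 14, d_model: int = 64, n_heads: int = 4):
            super().__init__()
            self.proj = nn.Linear(n_sensors, d_model)          # embed each timestep
            self.mha = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.head = nn.Sequential(nn.Linear(d_model, 32), nn.ReLU(), nn.Linear(32, 1))

        def forward(self, x):                  # x: (batch, window, n_sensors)
            h = self.proj(x)
            h, _ = self.mha(h, h, h)           # self-attention relates sensors across time
            return self.head(h.mean(dim=1))    # pooled representation -> scalar RUL

    model = AttentionRUL()
    rul = model(torch.randn(8, 30, 14))        # e.g. 30-cycle windows of 14 sensor channels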

    Fully-Connected Spatial-Temporal Graph for Multivariate Time Series Data

    Full text link
    Multivariate Time-Series (MTS) data is crucial in various application fields. With its sequential and multi-source (multiple sensors) properties, MTS data inherently exhibits Spatial-Temporal (ST) dependencies, involving temporal correlations between timestamps and spatial correlations between sensors at each timestamp. To effectively leverage this information, Graph Neural Network-based methods (GNNs) have been widely adopted. However, existing approaches capture spatial dependency and temporal dependency separately and fail to capture the correlations between Different sEnsors at Different Timestamps (DEDT). Overlooking such correlations hinders the comprehensive modelling of ST dependencies within MTS data, restricting existing GNNs from learning effective representations. To address this limitation, we propose a novel method called Fully-Connected Spatial-Temporal Graph Neural Network (FC-STGNN), comprising two key components: FC graph construction and FC graph convolution. For graph construction, we design a decay graph that connects sensors across all timestamps based on their temporal distances, enabling us to fully model the ST dependencies by considering the correlations between DEDT. Further, we devise FC graph convolution with a moving-pooling GNN layer to effectively capture the ST dependencies and learn effective representations. Extensive experiments show the effectiveness of FC-STGNN on multiple MTS datasets compared to state-of-the-art methods.
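
    A sketch of the decay-graph construction described above, under assumptions: every sensor at every timestamp becomes a node, and edge weights decay with the temporal distance between nodes. The exponential decay used here is an assumed example; the paper's exact weighting function and the moving-pooling convolution are not reproduced.

    # Illustrative decay adjacency over a fully connected spatial-temporal graph.
    import torch

    def decay_adjacency(n_sensors: int, n_timestamps: int, alpha: float = 0.5) -> torch.Tensor:
        """Dense adjacency of shape (N, N), where N = n_sensors * n_timestamps."""
        t = torch.arange(n_timestamps).repeat_interleave(n_sensors)   # timestamp of each node
        dist = (t.unsqueeze(0) - t.unsqueeze(1)).abs().float()        # pairwise temporal distance
        return torch.exp(-alpha * dist)                               # closer in time -> stronger edge

    A = decay_adjacency(n_sensors=5, n_timestamps=4)   # (20, 20) spatial-temporal graph
    X = torch.randn(20, 16)                            # assumed 16-dim node features
    H = A @ X                                          # one plain graph-convolution-style aggregation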

    A deep attention based approach for predictive maintenance applications in IoT scenarios

    Get PDF
    Purpose: The recent innovations of Industry 4.0 have made it possible to easily collect data related to a production environment. In this context, information about industrial equipment, gathered by appropriate sensors, can be profitably used to support predictive maintenance (PdM) through data-driven analytics based on artificial intelligence (AI) techniques. Although deep learning (DL) approaches have proven to be quite effective solutions to the problem, an open research challenge remains: the design of PdM methods that are computationally efficient and, most importantly, applicable in real-world internet of things (IoT) scenarios, where they must be executable directly on the devices' limited hardware. Design/methodology/approach: In this paper, the authors propose a DL approach to the PdM task based on a specific, highly efficient architecture. The major novelty of the proposed framework is the use of a multi-head attention (MHA) mechanism to obtain both accurate remaining useful life (RUL) estimation and low model storage requirements, providing the basis for a possible implementation directly on the equipment hardware. Findings: The experimental results on the NASA dataset show that the authors' approach outperforms the majority of the most widespread state-of-the-art techniques in terms of effectiveness and efficiency. Research limitations/implications: A comparison of the spatial and temporal complexity with a typical long short-term memory (LSTM) model and with state-of-the-art approaches was also carried out on the NASA dataset. While the authors' approach achieves effectiveness comparable to other approaches, it has a significantly smaller number of parameters, a smaller storage volume and a lower training time. Practical implications: The proposed approach aims to find a compromise between effectiveness and efficiency, which is crucial in the industrial domain, where it is important to maximize the link between the performance attained and the resources allocated. The overall accuracy is also on par with the finest methods described in the literature. Originality/value: The proposed approach satisfies the requirements of modern embedded AI applications (reliability, low power consumption, etc.), finding a compromise between efficiency and effectiveness.
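
    An illustrative way to see the storage argument made above: compare the parameter count of a small attention-based RUL regressor with that of an LSTM baseline. Both toy models below are assumptions for illustration, not the authors' exact architectures or their reported figures.

    # Illustrative parameter-count comparison: attention-based regressor vs. LSTM baseline.
    import torch.nn as nn

    class TinyAttentionRegressor(nn.Module):
        def __init__(self, n_sensors: int = 14, d_model: int = 32, n_heads: int = 2):
            super().__init__()
            self.embed = nn.Linear(n_sensors, d_model)
            self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.out = nn.Linear(d_model, 1)

        def forward(self, x):                  # x: (batch, window, n_sensors)
            h = self.embed(x)
            h, _ = self.attn(h, h, h)
            return self.out(h.mean(1))

    class LSTMRegressor(nn.Module):
        def __init__(self, n_sensors: int = 14, hidden: int = 64):
            super().__init__()
            self.lstm = nn.LSTM(n_sensors, hidden, num_layers=2, batch_first=True)
            self.out = nn.Linear(hidden, 1)

        def forward(self, x):
            h, _ = self.lstm(x)
            return self.out(h[:, -1])          # regress from the last hidden state

    def n_params(m: nn.Module) -> int:
        return sum(p.numel() for p in m.parameters())

    # In this assumed configuration the attention model has far fewer parameters,
    # which is the kind of memory-footprint advantage relevant for IoT deployment.
    print(n_params(TinyAttentionRegressor()), n_params(LSTMRegressor()))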