10 research outputs found

    A Different Traditional Approach for Automatic Comparative Machine Learning in Multimodality Covid-19 Severity Recognition

    In March 2020, the World Health Organization declared a new infectious pandemic, “novel coronavirus disease” or “Covid-19”; the coronavirus family was first identified around the start of World War II (1939), and the current outbreak spread from the city of Wuhan, China (2019). The severity of the outbreak affected the health of many people worldwide. This spurred unimodal artificial intelligence approaches to coronavirus diagnosis, which, used alone, produced a significant percentage of false-negative results. In this paper, we combined 2500 Covid-19 multimodal data records under an Early Fusion Type-I (EFT1) architecture as a severity recognition model for the classification task. We designed and implemented two one-step systems: automatic comparative machine learning (AutoCML) and automatic comparative machine learning based on important feature selection (AutoIFSCML). We evaluated both using our proposed assessment method, the “Descended Composite Scores Average (DCSA)”. In AutoCML, Extreme Gradient Boosting (DCSA=0.998), and in AutoIFSCML, Random Forest (DCSA=0.960), demonstrated the best performance for multimodal Covid-19 severity recognition, while the internal important-feature-selection system (AutoIFS) selected 70% of the characteristics with high DCSA to enter the AutoCML system. The DCSA-based systems can help deploy fine-tuned machine learning models in medical processes by comparing the capacities and performance of the models across all methods. In addition, ensemble learning performed well among the evaluated traditional models in both systems
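    The abstract does not define how DCSA is computed, but a descending composite ranking over per-model metrics can be sketched as follows; the metric names and values here are illustrative assumptions, not the paper's evaluation protocol.

```python
# Hypothetical sketch of a "Descended Composite Scores Average"-style ranking:
# average several normalized metrics per model into one composite score, then
# sort the models in descending order of that score.

def composite_score(metrics):
    """Average a model's normalized metric values into one composite score."""
    return sum(metrics.values()) / len(metrics)

def rank_models(results):
    """Return (model, score) pairs sorted by descending composite score."""
    scored = {name: composite_score(m) for name, m in results.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

# Illustrative numbers only (loosely echoing the reported DCSA ordering).
results = {
    "XGBoost":      {"accuracy": 0.998, "f1": 0.998, "auc": 0.999},
    "RandomForest": {"accuracy": 0.960, "f1": 0.955, "auc": 0.970},
    "LogisticReg":  {"accuracy": 0.910, "f1": 0.905, "auc": 0.930},
}
ranking = rank_models(results)  # best model first
```

A comparative system like AutoCML would loop such a ranking over many candidate estimators and keep the top entry.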

    Automatic Detection of Driver Fatigue Based on EEG Signals Using a Developed Deep Neural Network

    In recent years, detecting driver fatigue has been a significant practical necessity and research issue. Although several investigations have examined driver fatigue, there are relatively few standard datasets for identifying it. Earlier investigations used conventional methods relying on handcrafted features to assess driver fatigue; such approaches require prior knowledge for feature extraction, which can increase computational complexity. The current work proposes a driver fatigue detection system, a fundamental necessity for minimizing road accidents. Data from 11 people were gathered for this purpose, resulting in a comprehensive dataset, which was prepared in accordance with previously published criteria. A deep convolutional neural network–long short-term memory (CNN–LSTM) network is designed and developed to extract characteristics from raw EEG data corresponding to the six active areas A, B, C, D, E (based on a single channel), and F. The study’s findings reveal that the proposed deep CNN–LSTM network can learn features hierarchically from raw EEG data and attain higher accuracy than previous comparable approaches for two-stage driver fatigue categorization. Because of its precision and high speed, the suggested approach may be used to construct automatic fatigue detection systems
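    The abstract does not give the exact layer configuration, but the way a CNN front-end shrinks a raw EEG window before the LSTM can be sketched with the standard 1-D convolution/pooling output-length formulas; the window length and kernel sizes below are illustrative assumptions.

```python
# Back-of-the-envelope sketch (not the paper's configuration) of how a stack
# of 1-D convolution + pooling layers shortens a raw single-channel EEG
# window before it is fed, time step by time step, into an LSTM.

def conv1d_len(n, kernel, stride=1, padding=0):
    """Output length of a 1-D convolution over an n-sample signal."""
    return (n + 2 * padding - kernel) // stride + 1

def pool1d_len(n, kernel, stride=None):
    """Output length of 1-D max pooling (stride defaults to kernel size)."""
    stride = stride or kernel
    return (n - kernel) // stride + 1

n = 3000  # e.g. a few seconds of EEG at a typical sampling rate (assumption)
for kernel in (64, 32, 16):      # three conv blocks, each followed by pool-2
    n = conv1d_len(n, kernel)
    n = pool1d_len(n, 2)
# n is now the sequence length seen by the LSTM, with the conv channels
# serving as the per-step feature vector.
```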

    Boosting Iris Recognition by Margin-Based Loss Functions

    In recent years, the topic of contactless biometric identification has gained considerable traction due to the COVID-19 pandemic. One of the most well-known identification technologies is iris recognition. Determining the classification threshold for large datasets of iris images remains challenging; to address this, it is essential to extract more discriminative features from iris images. Choosing an appropriate loss function to enhance discrimination power is one of the most significant factors in deep learning networks. This paper proposes a novel iris identification framework that integrates the lightweight MobileNet architecture with customized ArcFace and Triplet loss functions. Combining the two loss functions improves compactness within a class and discrepancy between classes. To reduce preprocessing, the normalization step is omitted and segmented iris images are used directly. Compared with the original SoftMax loss, the EER for the combined ArcFace and Triplet loss decreases from 1.11% to 0.45%, and the TPR increases from 99.77% to 100%. On CASIA-Iris-Thousand, the EER decreased from 4.8% to 1.87%, while the TPR improved from 97.42% to 99.66%. Experiments demonstrate that the proposed approach, with a customized loss combining ArcFace and Triplet, can significantly improve on the state of the art and achieve outstanding results
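    The two loss terms have standard scalar forms: ArcFace adds an angular margin to the target-class cosine before scaling, and the triplet loss hinges on embedding distances. A minimal sketch follows; the margin, scale, and weighting values are common defaults, not necessarily the paper's settings.

```python
import math

# Sketch of combining an ArcFace-style margin penalty with a triplet loss.

def arcface_logit(cos_theta, margin=0.5, scale=64.0):
    """Scaled target-class logit after adding the angular margin:
    s * cos(theta + m), with theta = arccos(cos_theta)."""
    theta = math.acos(max(-1.0, min(1.0, cos_theta)))
    return scale * math.cos(theta + margin)

def triplet_loss(d_ap, d_an, margin=0.2):
    """Hinge on anchor-positive vs anchor-negative embedding distances."""
    return max(d_ap - d_an + margin, 0.0)

def combined_loss(arcface_ce, d_ap, d_an, weight=1.0):
    """Weighted sum of the ArcFace cross-entropy term (computed elsewhere
    from the margin-adjusted logits) and the triplet term."""
    return arcface_ce + weight * triplet_loss(d_ap, d_an)
```

In training, the margin-adjusted logit replaces the target class's plain cosine logit inside the usual softmax cross-entropy, which is what pushes same-class embeddings closer together on the hypersphere.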

    The Design of a Photonic Crystal Fiber for Hydrogen Cyanide Gas Detection

    Hydrogen cyanide is a dangerous, potentially fatal gas and one of the causes of air pollution in the environment; even a small concentration causes poisoning and eventually death. In this paper, a new photonic crystal fiber (PCF) is designed that offers high sensitivity and low confinement loss at the absorption wavelength of hydrogen cyanide gas. The proposed structure consists of circular layers located around the core, which is itself composed of circular microstructures. The finite element method (FEM) is used to simulate the results. According to the results, the PCF achieves a high relative sensitivity of 65.13% and a low confinement loss of 1.5 × 10⁻³ dB/m at a wavelength of 1.533 ”m. The impact of increasing the concentration of hydrogen cyanide gas on the relative sensitivity and confinement loss is also investigated. The high sensitivity and low confinement loss of the designed PCF show that this optical structure could be a good candidate for detecting this gas in industrial and medical environments
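    The two figures of merit quoted above have standard expressions in the PCF gas-sensing literature; a sketch of those formulas follows, with illustrative input values (the effective-index numbers are assumptions, not the paper's simulated results).

```python
import math

# Standard PCF gas-sensing figures of merit (sketch).

def relative_sensitivity(n_gas, n_eff_real, power_fraction):
    """r = (n_gas / Re(n_eff)) * f, where f is the fraction of optical
    power confined in the gas-filled holes and n_gas ~ 1 for a gas."""
    return (n_gas / n_eff_real) * power_fraction

def confinement_loss_db_per_m(wavelength_m, n_eff_imag):
    """L_c = 8.686 * (2*pi / lambda) * Im(n_eff), in dB/m for lambda in m."""
    return 8.686 * (2 * math.pi / wavelength_m) * n_eff_imag

# Illustrative inputs at the HCN absorption wavelength of 1.533 um.
r = relative_sensitivity(n_gas=1.0, n_eff_real=1.27, power_fraction=0.83)
lc = confinement_loss_db_per_m(1.533e-6, 4e-11)  # Im(n_eff) is an assumption
```

With these placeholder inputs the formulas land in the same order of magnitude as the reported 65.13% sensitivity and ~10⁻³ dB/m loss, which is the sanity check such simulations usually report.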

    Automatically Identified EEG Signals of Movement Intention Based on CNN Network (End-To-End)

    Movement-based brain–computer interfaces (BCIs) rely significantly on the automatic identification of movement intent, and they allow patients with motor disorders to communicate with external devices. The extraction and selection of discriminative features, which often increases computational complexity, is one of the main issues in automatically detecting movement intention. This research introduces a novel method for automatically categorizing two-class and three-class movement-intention situations using EEG data. In the proposed technique, the raw EEG input is applied directly to a convolutional neural network (CNN) without feature extraction or selection, an end-to-end approach that previous research has found challenging. The suggested network design includes ten convolutional layers followed by two fully connected layers. Owing to its high accuracy, the suggested approach could be employed in BCI applications
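    For such a network, switching between the two-class and three-class settings only changes the width of the final fully connected layer; the decision step is a softmax over the class logits. A minimal sketch (logit values are illustrative):

```python
import math

# Sketch of the classification head: one logit per movement-intention class,
# softmax for probabilities, argmax for the predicted class.

def softmax(logits):
    m = max(logits)                        # subtract max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict(logits):
    """Index of the most probable movement-intention class."""
    probs = softmax(logits)
    return probs.index(max(probs))

two_class = predict([2.1, -0.3])           # e.g. movement vs. rest
three_class = predict([0.2, 1.7, -1.0])    # e.g. left hand / right hand / rest
```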

    Visual Saliency and Image Reconstruction from EEG Signals via an Effective Geometric Deep Network-Based Generative Adversarial Network

    Understanding how the brain perceives input from the outside world is one of the great goals of neuroscience. Neural decoding helps us model the connection between brain activity and visual stimuli, and images can be reconstructed from brain activity through this modelling. Recent studies have shown that brain activity is influenced by visual saliency, the important parts of an image stimulus. In this paper, a deep model is proposed to reconstruct image stimuli from electroencephalogram (EEG) recordings via visual saliency. To this end, the proposed geometric deep network-based generative adversarial network (GDN-GAN) is trained to map EEG signals to the visual saliency map corresponding to each image. The first part of the proposed GDN-GAN consists of Chebyshev graph convolutional layers, and the input of the GDN part is the functional connectivity-based graph representation of the EEG channels. The output of the GDN is fed into the GAN part of the network to reconstruct the image saliency. The proposed GDN-GAN is trained using the Google Colaboratory Pro platform, and the saliency metrics validate the viability and efficiency of the proposed saliency reconstruction network. The weights of the trained network are then used as initial weights to reconstruct the grayscale image stimuli, so the proposed network realizes image reconstruction from EEG signals
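    A Chebyshev graph convolution filters a graph signal as y = ÎŁ_k Ξ_k T_k(L̃)x, where T₀ = I, T₁ = L̃, and T_k = 2L̃T_{k-1} − T_{k-2} for a scaled graph Laplacian L̃. The sketch below applies this recurrence on a tiny 3-node graph; the graph and coefficients are illustrative, not the paper's EEG connectivity graph.

```python
# Pure-Python sketch of Chebyshev graph-convolution filtering, the building
# block of the GDN part. Matrices are plain nested lists for clarity.

def mat_vec(m, v):
    """Multiply a square matrix (list of rows) by a vector."""
    return [sum(m[i][j] * v[j] for j in range(len(v))) for i in range(len(m))]

def cheb_filter(l_scaled, x, thetas):
    """Apply a K-order Chebyshev filter with coefficients `thetas` to the
    graph signal x, using the recurrence T_k = 2*L~*T_{k-1} - T_{k-2}."""
    t_prev, t_curr = list(x), mat_vec(l_scaled, x)   # T_0 x and T_1 x
    terms = [t_prev, t_curr]
    for _ in range(2, len(thetas)):
        t_next = [2 * a - b for a, b in zip(mat_vec(l_scaled, t_curr), t_prev)]
        terms.append(t_next)
        t_prev, t_curr = t_curr, t_next
    return [sum(th * t[i] for th, t in zip(thetas, terms)) for i in range(len(x))]

# Illustrative 3-node scaled Laplacian and channel signal.
L = [[1.0, -0.5, -0.5],
     [-0.5, 1.0, -0.5],
     [-0.5, -0.5, 1.0]]
y = cheb_filter(L, [1, 2, 3], [0.5, 0.3])
```

In a GDN layer, each output feature would be such a filtered combination of the EEG-channel signals, with the Ξ coefficients learned during training.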

    A Customized Efficient Deep Learning Model for the Diagnosis of Acute Leukemia Cells Based on Lymphocyte and Monocyte Images

    The production of blood cells is affected by leukemia, a type of bone marrow or blood cancer. In this disease, the deoxyribonucleic acid (DNA) of immature cells, particularly white blood cells, is damaged in various ways. When a radiologist diagnoses acute leukemia cells, the diagnosis is time consuming and its accuracy needs improvement. For this purpose, much research has been conducted on the automatic diagnosis of acute leukemia; however, these studies suffer from low detection speed and accuracy. Machine learning and artificial intelligence techniques now play an essential role in the medical sciences, particularly in detecting and classifying leukemic cells, and they assist doctors in detecting diseases earlier, reducing their workload and the possibility of errors. This research aims to design a deep learning model with a customized architecture for detecting acute leukemia using images of lymphocytes and monocytes. The study presents a novel dataset containing images of Acute Lymphoblastic Leukemia (ALL) and Acute Myeloid Leukemia (AML), created with the assistance of various experts to help the scientific community incorporate machine learning techniques into medical research. The scale of the dataset is increased with a Generative Adversarial Network (GAN). The proposed CNN model, based on the Tversky loss function, includes six convolution layers, four dense layers, and a Softmax activation function for classifying acute leukemia images. The proposed model achieved a 99% accuracy rate in diagnosing acute leukemia types, including ALL and AML. Compared to previous research, the proposed network provides promising performance in terms of speed and accuracy, and based on the results, it can be used to assist doctors and specialists in practical applications
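    The Tversky loss mentioned above has a standard scalar form: loss = 1 − TP / (TP + α·FP + β·FN), which generalizes the Dice loss by weighting false positives and false negatives asymmetrically. A minimal sketch over flat binary vectors follows; the α/ÎČ values are common defaults, not necessarily the paper's.

```python
# Sketch of the Tversky loss for a binary prediction/target pair.
# alpha weights false positives, beta weights false negatives; alpha=beta=0.5
# recovers the Dice loss. eps avoids division by zero on empty masks.

def tversky_loss(pred, target, alpha=0.7, beta=0.3, eps=1e-7):
    tp = sum(p * t for p, t in zip(pred, target))
    fp = sum(p * (1 - t) for p, t in zip(pred, target))
    fn = sum((1 - p) * t for p, t in zip(pred, target))
    return 1.0 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)
```

In practice the same expression is applied to the network's soft (probabilistic) outputs so it stays differentiable during training.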

    Concurrent Learning Approach for Estimation of Pelvic Tilt from Anterior–Posterior Radiograph

    Accurate and reliable estimation of the pelvic tilt is one of the essential pre-planning factors for total hip arthroplasty to prevent common post-operative complications such as implant impingement and dislocation. Inspired by the latest advances in deep learning-based systems, this paper presents an innovative and accurate method for estimating the functional pelvic tilt (PT) from a standing anterior–posterior (AP) radiograph. We introduce an encoder–decoder-style network based on a concurrent learning approach called VGG-UNET (VGG embedded in U-NET), in which the deep fully convolutional VGG network forms the encoder of the U-NET image segmentation network. In the bottleneck of the VGG-UNET, in addition to the decoder path, another path of lightweight convolutional and fully connected layers combines all feature maps extracted from the final convolution layer of VGG to regress PT. In the test phase, we exclude the decoder path and consider only the single target task, i.e., PT estimation. The mean absolute errors obtained using VGG-UNET, VGG, and Mask R-CNN are 3.04 ± 2.49, 3.92 ± 2.92, and 4.97 ± 3.87 degrees, respectively; VGG-UNET thus yields the most accurate prediction with the lowest standard deviation (STD). Our experimental results demonstrate that the proposed multi-task network significantly improves performance compared to the best reported results based on cascaded networks
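    The train-time/test-time asymmetry described above (segmentation decoder as an auxiliary path, dropped at inference) can be sketched structurally as follows; every internal computation here is a stand-in placeholder, not the real VGG-UNET.

```python
# Structural sketch of the concurrent-learning idea: during training the
# network returns both a segmentation output (decoder path) and the pelvic
# tilt (regression path); at test time only PT is returned.

class ConcurrentPTNet:
    def __init__(self):
        self.training = True

    def encode(self, image):
        return sum(image) / len(image)      # placeholder for VGG features

    def decode(self, features):
        return [features] * 4               # placeholder segmentation output

    def regress_pt(self, features):
        return 0.1 * features               # placeholder PT regression head

    def forward(self, image):
        features = self.encode(image)
        pt = self.regress_pt(features)
        if self.training:                   # decoder path used only in training
            return self.decode(features), pt
        return pt

net = ConcurrentPTNet()
seg, pt_train = net.forward([1.0, 2.0, 3.0])
net.training = False
pt_test = net.forward([1.0, 2.0, 3.0])      # decoder excluded at test time
```

The design choice is that the auxiliary segmentation loss regularizes the shared encoder during training while costing nothing at inference.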