    Enhancing land cover classification in remote sensing imagery using an optimal deep learning model

    The land cover classification process, accomplished through Remote Sensing Imagery (RSI), exploits advanced Machine Learning (ML) approaches to classify the types of land cover within a geographical area captured by remote sensing. The model distinguishes land cover under different classes, such as agricultural fields, water bodies, urban areas, and forests, based on the patterns present in the images. The application of Deep Learning (DL)-based land cover classification to RSI improves both the accuracy and the efficiency of land cover mapping. By leveraging Deep Neural Networks (DNNs), namely Convolutional Neural Networks (CNNs) or Recurrent Neural Networks (RNNs), the technology can autonomously learn the spatial and spectral features inherent to RSI. The current study presents an Improved Sand Cat Swarm Optimization with Deep Learning-based Land Cover Classification (ISCSODL-LCC) approach for RSIs. The main objective of the proposed method is to efficiently classify the dissimilar land cover types within a geographical area pictured by remote sensing. The ISCSODL-LCC technique employs the Squeeze-Excitation ResNet (SE-ResNet) model for feature extraction and the Stacked Gated Recurrent Unit (SGRU) mechanism for land cover classification. Since manual hyperparameter tuning is an error-prone and laborious task, hyperparameter selection is accomplished with the help of the Reptile Search Algorithm (RSA). A simulation analysis was conducted on the ISCSODL-LCC model using two benchmark datasets, and the results established the superior performance of the proposed model over other techniques, with maximum accuracy values of 97.92% and 99.14% on the Indian Pines and Pavia University datasets, respectively.
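
    To make the two learning components concrete, below is a minimal PyTorch sketch of a Squeeze-and-Excitation block feeding a stacked (two-layer) GRU classifier. The patch size, band count (e.g., after a PCA-style reduction), hidden sizes, and class count are illustrative assumptions rather than the paper's configuration, and the RSA hyperparameter search is omitted.

```python
# Minimal sketch of the two learning components named in the abstract:
# a Squeeze-and-Excitation (SE) block and a stacked GRU classifier.
# All sizes are illustrative assumptions, not the paper's settings.
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-Excitation: reweights channels by global context."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: global average
        self.fc = nn.Sequential(                     # excitation: channel gates
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                  # rescale feature maps

class SEConvSGRU(nn.Module):
    """Conv + SE feature extractor feeding a 2-layer (stacked) GRU."""
    def __init__(self, bands: int = 30, n_classes: int = 16):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(bands, 64, 3, padding=1), nn.ReLU(inplace=True),
            SEBlock(64),
            nn.AdaptiveAvgPool2d(4),                  # -> (B, 64, 4, 4)
        )
        self.gru = nn.GRU(64, 128, num_layers=2, batch_first=True)
        self.head = nn.Linear(128, n_classes)

    def forward(self, x):
        f = self.features(x)                          # (B, 64, 4, 4)
        seq = f.flatten(2).transpose(1, 2)            # 16-step sequence of 64-d vectors
        _, h = self.gru(seq)
        return self.head(h[-1])                       # last layer's final state

logits = SEConvSGRU()(torch.randn(2, 30, 9, 9))       # e.g. 9x9 image patches
print(logits.shape)                                   # torch.Size([2, 16])
```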

    A CAD System for the Early Detection of Lung Nodules Using Computed Tomography Scan Images

    In this paper, a computer-aided detection system is developed to detect lung nodules, one of the most important indicators of lung cancer, at an early stage using Computed Tomography (CT) scan images. The developed system consists of four stages. First, the raw CT lung images were preprocessed to enhance image contrast and eliminate noise. Second, the lungs and pulmonary nodule candidates (nodules and blood vessels) were automatically segmented using a two-level thresholding technique and morphological operations. Third, a feature fusion technique was used to extract the main features, fusing four feature extraction methods: first- and second-order statistical features, value histogram features, histogram of oriented gradients features, and gray level co-occurrence matrix texture features based on wavelet coefficients. The fourth stage is the classifier. Three classifiers were used and their performance was compared in order to obtain the highest classification accuracy: a multi-layer feed-forward neural network, a radial basis function neural network, and a support vector machine. The performance of the proposed system was assessed using three quantitative parameters: classification accuracy rate, sensitivity, and specificity. Forty standard CT images containing 320 regions of interest, obtained from the Early Lung Cancer Action Project (ELCAP) association, were used to test and evaluate the developed system. The results show that the fused feature vector produced by a genetic algorithm as the feature selection technique, combined with the support vector machine classifier, gives the highest classification accuracy rate, sensitivity, and specificity, at 99.6%, 100%, and 99.2%, respectively.
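
    As an illustration of one stage of this pipeline, the sketch below computes gray level co-occurrence matrix (GLCM) texture features from CT regions of interest and trains a support vector machine on them, using scikit-image (>= 0.19 API) and scikit-learn. The window size, GLCM parameters, and synthetic data are assumptions for demonstration; the genetic-algorithm feature selection and the other three feature families are omitted.

```python
# Hedged sketch of one stage of the described pipeline: GLCM texture
# features from CT regions of interest, fed to an SVM classifier.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def glcm_features(roi: np.ndarray) -> np.ndarray:
    """Contrast/homogeneity/energy/correlation over 4 directions."""
    glcm = graycomatrix(roi, distances=[1],
                        angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "homogeneity", "energy", "correlation"]
    return np.hstack([graycoprops(glcm, p).ravel() for p in props])

rng = np.random.default_rng(0)
rois = rng.integers(0, 256, size=(40, 32, 32), dtype=np.uint8)  # stand-in ROIs
y = rng.integers(0, 2, size=40)                                  # nodule / not
X = np.vstack([glcm_features(r) for r in rois])                  # (40, 16)

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
print("train accuracy:", clf.score(X, y))
```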

    Chaotic Equilibrium Optimizer-Based Green Communication With Deep Learning Enabled Load Prediction in Internet of Things Environment

    Currently, there is an emerging requirement for applications related to the Internet of Things (IoT). Though the potential of IoT applications is huge, there are recurring limitations, namely energy optimization, heterogeneity of devices, memory, security, privacy, and load balancing (LB), that must be solved. Such constraints must be optimized to enhance the network's efficiency. Hence, the core objective of this study is to formulate an intelligent cluster head (CH) selection method to establish green communication in IoT. To this end, the study develops a chaotic equilibrium optimizer-based green communication with deep learning-enabled load prediction (CEOGC-DLLP) technique for the IoT environment. The presented CEOGC-DLLP technique accomplishes green communication via clustering and future load prediction. To do so, the presented model derives the CEOGC technique with a fitness function encompassing multiple parameters. In addition, the CEOGC-DLLP technique uses the deep belief network (DBN) model for the load prediction process, which helps to balance the load among the IoT devices for effective green communication. The experimental assessment of the CEOGC-DLLP technique was performed and the outcomes were investigated under different aspects. The comparison study demonstrates the supremacy of the CEOGC-DLLP method over existing techniques, with a maximum throughput of 64662 packets and a minimum MSE of 0.2956.
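
    The abstract names two ingredients that can be sketched compactly: chaotic initialization for the optimizer and a multi-term cluster-head fitness function. The sketch below uses a logistic map for the former and an assumed weighted combination of residual energy, distance to the base station, and load for the latter; the weights and component terms are illustrative guesses, not the paper's formulation, and the DBN load predictor is omitted.

```python
# Illustrative sketch: chaotic (logistic-map) population initialization
# and a weighted cluster-head fitness function. All terms and weights
# are assumptions for demonstration.
import numpy as np

def chaotic_init(n_agents: int, dim: int, x0: float = 0.7) -> np.ndarray:
    """Logistic map x_{k+1} = 4 x_k (1 - x_k): a chaotic sequence in (0, 1)."""
    seq = np.empty((n_agents, dim))
    x = x0
    for i in range(n_agents):
        for j in range(dim):
            x = 4.0 * x * (1.0 - x)
            seq[i, j] = x
    return seq

def ch_fitness(energy, dist_to_bs, load, w=(0.5, 0.3, 0.2)):
    """Higher residual energy, shorter BS distance, lower load -> fitter CH."""
    e = energy / energy.max()
    d = dist_to_bs / dist_to_bs.max()
    l = load / load.max()
    return w[0] * e + w[1] * (1 - d) + w[2] * (1 - l)

rng = np.random.default_rng(1)
pos = chaotic_init(n_agents=20, dim=2)      # candidate CH positions in [0,1]^2
energy = rng.uniform(0.2, 1.0, 20)          # residual node energy (normalized)
dist = np.linalg.norm(pos - 0.5, axis=1)    # distance to a central base station
load = rng.uniform(0.0, 1.0, 20)            # queued traffic (normalized)
best = np.argmax(ch_fitness(energy, dist, load))
print("selected cluster head:", best)
```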

    Wavelet Mutation with Aquila Optimization-Based Routing Protocol for Energy-Aware Wireless Communication

    Wireless sensor networks (WSNs) have been developed recently to support several applications, including environmental monitoring, traffic control, smart battlefields, home automation, etc. WSNs include numerous sensors that can be dispersed around a specific node to carry out the computing process. In WSNs, routing is a very significant task that must be managed prudently. The main purpose of a routing algorithm is to send data between sensor nodes (SNs) and base stations (BSs) to accomplish communication. A good routing protocol should be adaptive and scalable to variations in network topology; a scalable protocol has to perform well when the workload increases or the network grows larger. The main complexities in routing involve security, energy consumption, scalability, connectivity, node deployment, and coverage. This article introduces a wavelet mutation with Aquila optimization-based routing (WMAO-EAR) protocol for wireless communication. The presented WMAO-EAR technique aims to accomplish an energy-aware routing process in WSNs. To do this, the WMAO-EAR technique first derives the WMAO algorithm by integrating wavelet mutation with the Aquila optimization (AO) algorithm. A fitness function is derived using distinct constraints, such as delay, energy, distance, and security. By setting a mutation probability P, every individual emerging from the exploitation and exploration phases may undergo mutation through the wavelet mutation process. To demonstrate the enhanced performance of the WMAO-EAR technique, a comprehensive simulation analysis was conducted; the experimental outcomes establish the superiority of the WMAO-EAR method over other recent approaches.
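
    The wavelet mutation step can be sketched directly: with mutation probability P, each decision variable is perturbed toward a bound by an amount drawn from a dilated Morlet mother wavelet, so the mutation magnitude shrinks as the dilation parameter grows. The bounds, dilation value, and sampling range below are illustrative assumptions, and the surrounding Aquila optimization loop is omitted.

```python
# Sketch of a wavelet mutation step: with probability p, each gene is
# pushed toward a bound by a Morlet-wavelet-scaled amount. Parameters
# are illustrative assumptions.
import numpy as np

def morlet(phi: float, a: float) -> float:
    """Dilated Morlet mother wavelet: (1/sqrt(a)) e^{-(phi/a)^2/2} cos(5 phi/a)."""
    x = phi / a
    return (1.0 / np.sqrt(a)) * np.exp(-x * x / 2.0) * np.cos(5.0 * x)

def wavelet_mutate(x, lo, hi, p=0.2, a=10.0, rng=None):
    rng = rng if rng is not None else np.random.default_rng()
    x = x.copy()
    for i in range(x.size):
        if rng.random() < p:
            s = morlet(rng.uniform(-2.5 * a, 2.5 * a), a)
            # positive s pushes toward the upper bound, negative toward lower
            x[i] += s * (hi[i] - x[i]) if s > 0 else s * (x[i] - lo[i])
    return np.clip(x, lo, hi)

lo, hi = np.zeros(5), np.ones(5)
print(wavelet_mutate(np.full(5, 0.5), lo, hi, rng=np.random.default_rng(2)))
```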

    A novel automated Parkinson’s disease identification approach using deep learning and EEG

    The neurological ailment known as Parkinson's disease (PD) affects people throughout the globe. This neurodegenerative disorder primarily affects people in middle to late life. Motor symptoms such as tremors, muscle rigidity, and slow, clumsy movement are common in patients with this disorder. Genetic and environmental variables play significant roles in the development of PD, yet despite much investigation, the root cause of this neurodegenerative disease is still unidentified. Clinical diagnosis relies heavily on promptly detecting such irregularities to slow or stop the progression of the illness. Because of its direct correlation with brain activity, electroencephalography (EEG) is an essential PD diagnostic technique: EEG data are biomarkers of changes in brain activity. However, these signals are non-linear, non-stationary, and complicated, making analysis difficult. Traditional machine-learning approaches often require a lengthy manual process spanning stages such as signal decomposition, feature extraction, and classification. To overcome these obstacles, we present a novel deep-learning model for the automated identification of PD. The Gabor transform, a standard method in EEG signal processing, was used to turn the raw EEG recordings into spectrograms. We propose a densely linked bidirectional long short-term memory (DLBLSTM) network, which represents each layer as the sum of its hidden state and the hidden states of all layers above it, then recursively transmits that representation to all layers below it. The proposed deep learning model was trained using these spectrograms as input. Using a robust sixfold cross-validation scheme, the proposed model showed excellent performance, with a classification accuracy of 99.6%. The results indicate that the suggested algorithm can automatically identify PD.
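
    A hedged sketch of the front end and classifier family described above: a Gabor transform (a short-time Fourier transform with a Gaussian window) converts one EEG channel into a spectrogram, which feeds a bidirectional LSTM. The densely linked connectivity of the DLBLSTM is omitted here; this is a plain two-layer BiLSTM, and the sampling rate, window parameters, and model sizes are assumptions.

```python
# Gabor-transform front end (Gaussian-window STFT) plus a bidirectional
# LSTM classifier. Signal and model sizes are illustrative assumptions.
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import stft

fs = 256                                            # assumed sampling rate (Hz)
eeg = np.random.randn(4 * fs)                       # 4 s of one synthetic channel
_, _, Z = stft(eeg, fs=fs, window=("gaussian", 16), nperseg=128, noverlap=96)
spec = np.log1p(np.abs(Z)).T                        # (time_frames, freq_bins)

class BiLSTMClassifier(nn.Module):
    def __init__(self, n_freq: int, n_classes: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(n_freq, 64, num_layers=2,
                            bidirectional=True, batch_first=True)
        self.head = nn.Linear(2 * 64, n_classes)    # concat of both directions

    def forward(self, x):                            # x: (B, T, n_freq)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])                 # last time step

model = BiLSTMClassifier(n_freq=spec.shape[1])
logits = model(torch.tensor(spec, dtype=torch.float32).unsqueeze(0))
print(logits.shape)                                  # torch.Size([1, 2])
```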

    An Effective Approach to Detect and Identify Brain Tumors Using Transfer Learning

    Brain tumors are considered among the most serious, prominent, and life-threatening diseases globally, causing thousands of deaths every year because of the rapid growth of tumor cells. Therefore, timely analysis and automatic detection of brain tumors are required to save lives. Recently, deep transfer learning (TL) approaches have been widely used to detect and classify the three most prominent types of brain tumor: glioma, meningioma, and pituitary. For this purpose, we employ state-of-the-art pre-trained TL techniques to identify and detect these tumor types. The aim is to compare the performance of nine pre-trained TL classifiers, i.e., Inceptionresnetv2, Inceptionv3, Xception, Resnet18, Resnet50, Resnet101, Shufflenet, Densenet201, and Mobilenetv2, in automatically identifying and detecting brain tumors using a fine-grained classification approach. The TL algorithms are evaluated on a baseline brain tumor classification (MRI) dataset, which is freely available on Kaggle, and all deep learning (DL) models are fine-tuned with their default values. The fine-grained classification experiment demonstrates that the Inceptionresnetv2 TL algorithm performs best, achieving the highest accuracy in detecting and classifying glioma, meningioma, and pituitary brain tumors. We achieve 98.91% accuracy, 98.28% precision, 99.75% recall, and 99% F-measure with the Inceptionresnetv2 TL algorithm, which outperforms the other DL algorithms. Additionally, to validate the performance of the TL classifiers, we compare the efficacy of the Inceptionresnetv2 TL algorithm with hybrid approaches in which convolutional neural networks (CNNs) are used for deep feature extraction and a Support Vector Machine (SVM) for classification. These experiments likewise show that TL algorithms, and Inceptionresnetv2 in particular, outperform the state-of-the-art DL algorithms in classifying brain MRI images into glioma, meningioma, and pituitary classes. The hybrid DL approaches used in the experiments are Mobilenetv2, Densenet201, Squeezenet, Alexnet, Googlenet, Inceptionv3, Resnet50, Resnet18, Resnet101, Xception, Inceptionresnetv2, VGG19, and Shufflenet.
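
    A minimal transfer-learning sketch in the spirit of this experiment: load an ImageNet-pretrained backbone, freeze its features, and retrain only a new head for the three tumor classes. torchvision does not ship InceptionResNetV2, so ResNet50 (also in the paper's model list) stands in, and the batch below is synthetic.

```python
# Transfer-learning sketch: freeze a pretrained backbone, retrain the head.
# Backbone choice, batch, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for p in model.parameters():                     # freeze pretrained features
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 3)    # glioma/meningioma/pituitary

optim = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(4, 3, 224, 224)                  # stand-in MRI batch
y = torch.randint(0, 3, (4,))
optim.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optim.step()                                     # one fine-tuning step
print("loss =", float(loss))
```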