11 research outputs found

    Enhancing land cover classification in remote sensing imagery using an optimal deep learning model

    Get PDF
    The land cover classification process, accomplished through Remote Sensing Imagery (RSI), exploits advanced Machine Learning (ML) approaches to classify different types of land cover within a geographical area captured by remote sensing. The model distinguishes various types of land cover under different classes, such as agricultural fields, water bodies, urban areas and forests, based on the patterns present in these images. The application of Deep Learning (DL)-based land cover classification techniques to RSI revolutionizes the accuracy and efficiency of land cover mapping. By leveraging the abilities of Deep Neural Networks (DNNs), namely Convolutional Neural Networks (CNNs) or Recurrent Neural Networks (RNNs), the technology can autonomously learn the spatial and spectral features inherent to RSI. The current study presents an Improved Sand Cat Swarm Optimization with Deep Learning-based Land Cover Classification (ISCSODL-LCC) approach for RSIs. The main objective of the proposed method is to efficiently classify the dissimilar land cover types within the geographical area pictured by remote sensing models. The ISCSODL-LCC technique employs the Squeeze-Excitation ResNet (SE-ResNet) model for feature extraction and the Stacked Gated Recurrent Unit (SGRU) mechanism for land cover classification. Since manual hyperparameter tuning is an error-prone and laborious task, the hyperparameter selection is accomplished with the help of the Reptile Search Algorithm (RSA). The simulation analysis was conducted on the ISCSODL-LCC model using two benchmark datasets, and the results established the superior performance of the proposed model over other techniques, with maximum accuracy values of 97.92% and 99.14% on the Indian Pines and Pavia University datasets, respectively. (AIMS Mathematics, Volume 9, Issue 1, 140–159.)
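The channel recalibration behind the SE-ResNet extractor can be sketched briefly. This is not the paper's implementation, only a minimal NumPy illustration of the squeeze-excitation idea; the bottleneck weights `w1` and `w2` are hypothetical placeholders for learned parameters.

```python
import numpy as np

def squeeze_excitation(feature_maps, w1, w2):
    """Illustrative squeeze-and-excitation channel recalibration."""
    # Squeeze: global average pool each channel map to a scalar descriptor
    z = feature_maps.mean(axis=(1, 2))              # (C,)
    # Excitation: bottleneck MLP ending in a per-channel sigmoid gate
    s = np.maximum(z @ w1, 0.0)                     # (C // r,)
    gates = 1.0 / (1.0 + np.exp(-(s @ w2)))         # (C,)
    # Scale: reweight each channel by its learned gate
    return feature_maps * gates[:, None, None]
```

With zero weights every gate is sigmoid(0) = 0.5, so the block halves each channel; trained weights instead emphasise informative channels and suppress the rest.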

    Reinforced concrete bridge damage detection using arithmetic optimization algorithm with deep feature fusion

    Get PDF
    Inspection of Reinforced Concrete (RC) bridges is critical to ensure their safety and to conduct essential maintenance works. Early defect detection is vital to maintaining the stability of concrete bridges. Current bridge maintenance protocols rely mainly upon manual visual inspection, which is subjective, unreliable and labour-intensive. On the contrary, computer vision techniques based on deep learning methods are regarded as the latest approach to structural damage detection due to their end-to-end training without the need for feature engineering. The classification process assists the authorities and engineers in understanding the safety level of the bridge, thus making informed decisions regarding rehabilitation or replacement, and prioritising the repair and maintenance efforts. In this background, the current study develops an RC Bridge Damage Detection using an Arithmetic Optimization Algorithm with Deep Feature Fusion (RCBDD-AOADFF) method. The purpose of the proposed RCBDD-AOADFF technique is to identify and classify different kinds of defects in RC bridges. In the presented RCBDD-AOADFF technique, the feature fusion process is performed using the Darknet-19 and NASNet-Mobile models. For the damage classification process, the attention-based Long Short-Term Memory (ALSTM) model is used. To enhance the classification results of the ALSTM model, the AOA is applied for the hyperparameter selection process. The performance of the RCBDD-AOADFF method was validated using the RC bridge damage dataset. The extensive analysis outcomes revealed the potential of the RCBDD-AOADFF technique in the RC bridge damage detection process.
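At its simplest, fusing descriptors from two backbones such as Darknet-19 and NASNet-Mobile is a normalise-and-concatenate operation. The abstract does not specify the fusion rule, so the following is only an illustrative sketch assuming both descriptors are flat vectors:

```python
import numpy as np

def fuse_features(feat_a, feat_b):
    # L2-normalise each backbone's descriptor so neither scale dominates,
    # then concatenate into one fused vector for the downstream classifier
    a = feat_a / (np.linalg.norm(feat_a) + 1e-12)
    b = feat_b / (np.linalg.norm(feat_b) + 1e-12)
    return np.concatenate([a, b])
```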

    Modified arithmetic optimization algorithm with Deep Learning based data analytics for depression detection

    Get PDF
    Depression detection is the procedure of recognizing individuals exhibiting symptoms of depression, a mental illness characterized by hopelessness, persistent feelings of sadness, and loss of interest in day-to-day activities. Depression detection in Social Networking Sites (SNS) is a challenging task due to the huge volume of data and its complicated variations. However, it is feasible to detect depression in individuals by examining the user-generated content utilizing Deep Learning (DL), Machine Learning (ML) and Natural Language Processing (NLP) approaches. These techniques demonstrate optimum outcomes in the early and accurate detection of depression, which in turn can support enhancing treatment outcomes and avoiding further depression-related complications. In order to provide more insights, both ML and DL approaches possibly offer unique features. These features support the evaluation of unique patterns that are hidden in online interactions and help expose the mental state of SNS users. In the current study, we develop the Modified Arithmetic Optimization Algorithm with Deep Learning for Depression Detection in Twitter Data (MAOADL-DDTD) technique. The presented MAOADL-DDTD technique focuses on the identification and classification of depression sentiments in Twitter data. In the presented MAOADL-DDTD technique, the noise in the tweets is pre-processed in different ways. In addition to this, the GloVe word embedding technique is used to extract features from the preprocessed data. For depression detection, the Sparse Autoencoder (SAE) model is applied. The MAOA is used for optimum hyperparameter tuning of the SAE approach so as to optimize the performance of the SAE model, which helps in accomplishing better detection performance. The MAOADL-DDTD algorithm is simulated using the benchmark database and experimentally validated. The experimental values of the MAOADL-DDTD methodology establish its promising performance over other recent state-of-the-art approaches.
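A sparse autoencoder's objective combines reconstruction error with a KL-divergence sparsity penalty on the hidden activations. The following is a minimal NumPy sketch under common SAE conventions (sigmoid encoder, linear decoder); `rho` and `beta` are the usual sparsity target and penalty weight, not values from the paper:

```python
import numpy as np

def sae_loss(x, w_enc, w_dec, rho=0.05, beta=3.0):
    # Encoder with sigmoid activations, linear decoder
    h = 1.0 / (1.0 + np.exp(-(x @ w_enc)))      # (N, hidden)
    x_hat = h @ w_dec                           # (N, features)
    # Sparsity penalty: KL divergence between the target activation rho
    # and the observed mean activation of each hidden unit
    rho_hat = h.mean(axis=0)
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    recon = np.mean((x - x_hat) ** 2)
    return x_hat, recon + beta * kl
```

Minimising this loss forces most hidden units to stay near-inactive on average, which is what makes the learned codes sparse.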

    Endoscopic Image Analysis for Gastrointestinal Tract Disease Diagnosis Using Nature Inspired Algorithm With Deep Learning Approach

    No full text
    Endoscopic image analysis has played a pivotal role in the diagnosis and management of gastrointestinal (GI) tract diseases. Gastrointestinal endoscopy is a medical procedure where a flexible tube with an endoscope (camera) is inserted into the GI tract to visualize the inner lining of the colon, esophagus, stomach, and small intestine. The videos and images attained during endoscopy provide valuable data for detecting and monitoring a large number of GI diseases. Computer-assisted automated diagnosis techniques help to achieve accurate diagnoses and provide the patient with relevant medical care. Machine learning (ML) and deep learning (DL) methods have been applied to endoscopic images for classifying diseases and providing diagnostic support. Convolutional Neural Networks (CNNs) and other DL algorithms can learn to discriminate between various kinds of GI lesions based on visual properties. This study presents an Endoscopic Image Analysis for Gastrointestinal Tract Disease Diagnosis using a Nature-Inspired Algorithm with Deep Learning (EIAGTD-NIADL) technique. The EIAGTD-NIADL technique intends to examine the endoscopic images using a nature-inspired algorithm with a DL model for gastrointestinal tract disease detection and classification. To pre-process the input endoscopic images, the EIAGTD-NIADL technique uses a bilateral filtering (BF) approach. For feature extraction, the EIAGTD-NIADL technique applies an improved ShuffleNet model. To improve the efficacy of the improved ShuffleNet model, the EIAGTD-NIADL technique uses an improved spotted hyena optimizer (ISHO) algorithm. Finally, the classification process is performed by the use of the stacked long short-term memory (SLSTM) method. The experimental outcomes of the EIAGTD-NIADL system were confirmed on benchmark medical image datasets. The obtained outcomes demonstrate the promising results of the EIAGTD-NIADL approach over other models.
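Bilateral filtering, the pre-processing step above, smooths an image while preserving edges by weighting each neighbour with both a spatial and a range (intensity) Gaussian. A brute-force grayscale sketch for illustration only; production code would use an optimised library routine:

```python
import numpy as np

def bilateral_filter(img, radius=1, sigma_s=1.0, sigma_r=0.1):
    # img: 2-D float array. Each output pixel is a weighted average of its
    # neighbourhood; weights fall off with spatial distance (sigma_s) and
    # with intensity difference (sigma_r), so sharp edges are preserved.
    h, w = img.shape
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            acc, norm = 0.0, 0.0
            for di in range(-radius, radius + 1):
                for dj in range(-radius, radius + 1):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w:
                        gs = np.exp(-(di * di + dj * dj) / (2 * sigma_s ** 2))
                        gr = np.exp(-(img[ni, nj] - img[i, j]) ** 2
                                    / (2 * sigma_r ** 2))
                        acc += gs * gr * img[ni, nj]
                        norm += gs * gr
            out[i, j] = acc / norm
    return out
```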

    Modeling of Botnet Detection Using Chaotic Binary Pelican Optimization Algorithm With Deep Learning on Internet of Things Environment

    No full text
    Nowadays, a vast number of Internet of Things (IoT) devices are interconnected to networks, and with technological improvement, cyberattacks and security threats, for example botnets, are rapidly evolving and emerging as high-risk attacks. A botnet is a network of compromised devices controlled by cyber attackers, frequently employed to perform different cyberattacks. Such attacks hinder IoT evolution by disrupting services and networks for IoT devices. Detecting botnets in an IoT environment involves finding abnormal patterns or behaviors that might indicate the existence of these malicious networks. Several researchers have proposed deep learning (DL) and machine learning (ML) approaches for identifying and categorizing botnet attacks in the IoT platform. Therefore, this study introduces a Botnet Detection using the Chaotic Binary Pelican Optimization Algorithm with Deep Learning (BNT-CBPOADL) technique in the IoT environment. The main aim of the BNT-CBPOADL method lies in the correct detection and categorization of botnet attacks in the IoT environment. In the BNT-CBPOADL method, Z-score normalization is applied for pre-processing. Besides, the CBPOA technique is derived for feature selection. The convolutional variational autoencoder (CVAE) method is applied for botnet detection. At last, the arithmetic optimization algorithm (AOA) is employed for the optimal hyperparameter tuning of the CVAE algorithm. The experimental evaluation of the BNT-CBPOADL technique was conducted on the Bot-IoT dataset. The experimental outcomes confirmed the superiority of the BNT-CBPOADL method over other existing techniques, with a maximum accuracy of 99.50%.
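The Z-score normalization used for pre-processing standardises each feature to zero mean and unit variance, e.g.:

```python
import numpy as np

def zscore(X):
    # Column-wise standardisation: subtract each feature's mean and
    # divide by its standard deviation
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    sigma = np.where(sigma == 0, 1.0, sigma)   # guard constant features
    return (X - mu) / sigma
```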

    Improved Coyote Optimization Algorithm and Deep Learning Driven Activity Recognition in Healthcare

    No full text
    Healthcare is an area of concern where the application of human-centred design practices and principles can enormously affect well-being and patient care. The provision of high-quality healthcare services requires a deep understanding of patients’ needs, experiences, and preferences. Human activity recognition (HAR) is paramount in healthcare monitoring by using machine learning (ML), sensor data, and artificial intelligence (AI) to track and discern individuals’ behaviours and physical movements. This technology allows healthcare professionals to remotely monitor patients, thereby ensuring they adhere to prescribed rehabilitation or exercise routines, and identify falls or anomalies, improving overall care and safety of the patient. HAR for healthcare monitoring, driven by deep learning (DL) algorithms, leverages neural networks and large quantities of sensor information to autonomously and accurately detect and track patients’ behaviours and physical activities. DL-based HAR provides a cutting-edge solution for healthcare professionals to provide precise and more proactive interventions, reducing the burden on healthcare systems and improving patient well-being while increasing the overall quality of care. Therefore, the study presents an improved coyote optimization algorithm with a deep learning-assisted HAR (ICOADL-HAR) approach for healthcare monitoring. The purpose of the ICOADL-HAR technique is to analyze the sensor information of the patients to determine the different kinds of activities. In the primary stage, the ICOADL-HAR model allows a data normalization process using the Z-score approach. For activity recognition, the ICOADL-HAR technique employs an attention-based long short-term memory (ALSTM) model. Finally, the hyperparameter tuning of the ALSTM model is performed using the ICOA. The simulation validation of the ICOADL-HAR model takes place using benchmark HAR datasets. The wide-ranging comparison analysis highlighted the improved recognition rate of the ICOADL-HAR method compared to other existing HAR approaches in terms of various measures.
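The attention mechanism in an ALSTM typically scores each time step's hidden state and pools them into a single context vector for classification. A minimal sketch of that pooling, where the scoring vector `w` stands in for the learned attention parameters:

```python
import numpy as np

def attention_pool(hidden_states, w):
    # hidden_states: (T, d) outputs of a recurrent encoder; w: (d,) scorer
    scores = hidden_states @ w
    scores = scores - scores.max()              # numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()   # softmax over time
    context = alpha @ hidden_states             # weighted sum of time steps
    return context, alpha
```

With an untrained (zero) scorer the weights are uniform and the context vector is just the mean hidden state; training shifts weight toward the most informative time steps.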

    Exploiting Hyperspectral Imaging and Optimal Deep Learning for Crop Type Detection and Classification

    No full text
    Hyperspectral imaging (HSI) plays a major role in agricultural remote sensing applications. Its data unit is the hyperspectral cube, which contains spatial data in 2D and spectral band data of all the pixels in 3D. The classification accuracy of HSI is significantly enhanced by deploying either spatial or spectral features. HSIs have been developed as a significant approach to achieve growth data monitoring and distinguish crop classes for precision agriculture, based on the reasonable spectral response to the crop features. The latest developments in deep learning (DL) and computer vision (CV) approaches permit the effectual detection and classification of distinct crop varieties on HSIs. At the same time, the hyperparameter tuning process plays a vital role in accomplishing effectual classification performance. The study introduces a dandelion optimizer with deep transfer learning-based crop type detection and classification (DODTL-CTDC) technique on HSI. The DODTL-CTDC technique makes use of the Xception model for the extraction of features from the HSI. In addition, the hyperparameter selection of the Xception model takes place using the DO algorithm. Moreover, the convolutional autoencoder (CAE) model is applied for the classification of crops into distinct classes. Furthermore, an arithmetic optimization algorithm (AOA) is employed for the optimal hyperparameter selection of the CAE model. The performance analysis of the DODTL-CTDC technique is assessed on the benchmark dataset. The experimental outcomes demonstrate the superiority of the DODTL-CTDC method in the crop classification process.
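The transfer-learning pattern underlying the Xception feature extractor can be summarised as: a frozen backbone maps each image to a descriptor, and a lightweight head classifies it. A schematic sketch only; the `backbone` callable is a stand-in, not the actual Xception network:

```python
import numpy as np

def extract_then_classify(images, backbone, head_weights):
    # Frozen backbone as feature extractor, linear head as classifier
    feats = np.stack([backbone(img) for img in images])   # (N, d)
    logits = feats @ head_weights                         # (N, classes)
    return logits.argmax(axis=1)
```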

    An Automated Glowworm Swarm Optimization with an Inception-Based Deep Convolutional Neural Network for COVID-19 Diagnosis and Classification

    No full text
    Recently, the COVID-19 pandemic has had a major impact on the day-to-day life of people all over the globe, and it demands various kinds of screening tests to detect the coronavirus. Conversely, the development of deep learning (DL) models combined with radiological images is useful for accurate detection and classification. DL models are full of hyperparameters, and identifying the optimal parameter configuration in such a high-dimensional space is not a trivial challenge. Since the procedure of setting the hyperparameters requires expertise and extensive trial and error, metaheuristic algorithms can be employed. With this motivation, this paper presents an automated glowworm swarm optimization (GSO) with an inception-based deep convolutional neural network (IDCNN) for COVID-19 diagnosis and classification, called the GSO-IDCNN model. The presented model involves a Gaussian smoothening filter (GSF) to remove the noise from the radiological images. Additionally, the IDCNN-based feature extractor is utilized, which makes use of the Inception v4 model. To further enhance the performance of the IDCNN technique, the hyperparameters are optimally tuned using the GSO algorithm. Lastly, an adaptive neuro-fuzzy classifier (ANFC) is used for classifying the existence of COVID-19. The design of the GSO algorithm with the ANFC model for COVID-19 diagnosis shows the novelty of the work. For experimental validation, a series of simulations were performed on benchmark radiological imaging databases to highlight the superior outcome of the GSO-IDCNN technique. The experimental values pointed out that the GSO-IDCNN methodology has demonstrated a proficient outcome by offering a maximal sensitivity of 0.9422, specificity of 0.9466, precision of 0.9494, accuracy of 0.9429, and F1-score of 0.9394.
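The role GSO plays here, searching the hyperparameter space for the best-scoring configuration, can be illustrated with a deterministic grid-search stand-in; the swarm dynamics themselves are omitted:

```python
import itertools

def tune_hyperparameters(evaluate, space):
    # Deterministic stand-in for a swarm optimiser: score every
    # configuration in a small grid and keep the best one
    best_cfg, best_score = None, float("-inf")
    keys = sorted(space)
    for values in itertools.product(*(space[k] for k in keys)):
        cfg = dict(zip(keys, values))
        score = evaluate(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score
```

A metaheuristic such as GSO replaces the exhaustive loop with a guided population search, which matters once the space is too large to enumerate.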

    A Bayesian Dynamic Inference Approach Based on Extracted Gray Level Co-Occurrence (GLCM) Features for the Dynamical Analysis of Congestive Heart Failure

    No full text
    The adaptability of the heart to external and internal stimuli is reflected by heart rate variability (HRV). Reduced HRV can be a predictor of post-infarction mortality. In this study, we propose an automated system to predict and diagnose congestive heart failure using short-term heart rate variability analysis. Based on the nonlinear, nonstationary, and highly complex dynamics of congestive heart failure, we extracted multimodal features to capture the temporal, spectral, and complex dynamics. Recently, the Bayesian inference approach has been recognized as an attractive option for the deeper analysis of static features, in order to perform a comprehensive analysis of extracted nodes (features). We computed the gray-level co-occurrence matrix (GLCM) features from congestive heart failure signals and then ranked them based on ROC methods. This study focused on utilizing the dissimilarity feature, which is ranked as highly important, as a target node for the empirical analysis of dynamic profiling and optimization, in order to explain the nonlinear dynamics of GLCM features extracted from heart failure signals and to distinguish congestive heart failure (CHF) from normal sinus rhythm (NSR). We applied Bayesian inference and Pearson’s correlation (PC). The association, in terms of node force and mapping, was computed. The higher-ranking target node was used to compute the posterior probability, total effect, arc contribution, network profile, and compression. The highest value of ROC was obtained for dissimilarity, at 0.3589. Based on the information-gain algorithm, the highest strength of the relationship was obtained between the nodes “dissimilarity” and “cluster performance” (1.0146), relative to mutual information (81.33%). Moreover, the highest relative binary significance was yielded for dissimilarity for 1/3rd (80.19%), 2/3rd (74.95%) and 3/3rd (100%). The results revealed that the proposed methodology can provide further in-depth insights for the early diagnosis and prognosis of congestive heart failure.
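The GLCM dissimilarity feature used above as the target node is computed from the normalised co-occurrence matrix of grey-level pairs. A minimal sketch for a 2-D integer-quantised signal representation:

```python
import numpy as np

def glcm(image, dx=1, dy=0, levels=4):
    # Normalised co-occurrence matrix of grey-level pairs at offset (dy, dx)
    P = np.zeros((levels, levels))
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            ni, nj = i + dy, j + dx
            if 0 <= ni < h and 0 <= nj < w:
                P[image[i, j], image[ni, nj]] += 1
    return P / P.sum()

def dissimilarity(P):
    # Sum of |i - j| weighted by co-occurrence probability: 0 for a
    # perfectly uniform region, larger for strong local contrast
    i, j = np.indices(P.shape)
    return float(np.sum(P * np.abs(i - j)))
```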
