18 research outputs found

    Enhancing land cover classification in remote sensing imagery using an optimal deep learning model

    The land cover classification process, accomplished through Remote Sensing Imagery (RSI), exploits advanced Machine Learning (ML) approaches to classify the different types of land cover within a geographical area captured by remote sensing. The model distinguishes land cover classes such as agricultural fields, water bodies, urban areas, and forests based on the patterns present in these images. The application of Deep Learning (DL)-based land cover classification in RSI revolutionizes the accuracy and efficiency of land cover mapping. By leveraging the abilities of Deep Neural Networks (DNNs), namely Convolutional Neural Networks (CNNs) or Recurrent Neural Networks (RNNs), the technology can autonomously learn the spatial and spectral features inherent to RSI. The current study presents an Improved Sand Cat Swarm Optimization with Deep Learning-based Land Cover Classification (ISCSODL-LCC) approach for RSIs. The main objective of the proposed method is to efficiently classify the different land cover types within a geographical area pictured by remote sensing. The ISCSODL-LCC technique employs the Squeeze-Excitation ResNet (SE-ResNet) model for feature extraction and the Stacked Gated Recurrent Unit (SGRU) mechanism for land cover classification. Since manual hyperparameter tuning is an error-prone and laborious task, hyperparameter selection is accomplished with the help of the Reptile Search Algorithm (RSA). A simulation analysis was conducted on the ISCSODL-LCC model using two benchmark datasets, and the results established the superior performance of the proposed model over other techniques, with maximum accuracy values of 97.92% and 99.14% on the Indian Pines and Pavia University datasets, respectively.
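
    A minimal PyTorch sketch (not the authors' ISCSODL-LCC code) of the two network stages named above: a squeeze-and-excitation block reweights CNN feature maps, whose spatial positions are then fed as a sequence to a stacked GRU classifier. Channel counts, the 16 land-cover classes, and the toy input size are illustrative assumptions, and the RSA hyperparameter search is omitted.

import torch
import torch.nn as nn

class SEBlock(nn.Module):
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global spatial average per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # excitation: reweight channels

class SEFeaturesToSGRU(nn.Module):
    def __init__(self, in_channels=3, channels=64, hidden=128, num_classes=16):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_channels, channels, 3, padding=1), nn.ReLU(), SEBlock(channels))
        self.gru = nn.GRU(channels, hidden, num_layers=2, batch_first=True)  # stacked (2-layer) GRU
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, x):
        f = self.backbone(x)                # (B, C, H, W) feature maps
        seq = f.flatten(2).transpose(1, 2)  # spatial positions as a sequence: (B, H*W, C)
        _, h = self.gru(seq)
        return self.head(h[-1])             # land-cover class logits

logits = SEFeaturesToSGRU()(torch.randn(2, 3, 32, 32))  # two toy 32x32 RGB patches
print(logits.shape)  # torch.Size([2, 16])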

    A CAD System for the Early Detection of Lung Nodules Using Computed Tomography Scan Images

    In this paper, a computer-aided detection system is developed to detect lung nodules at an early stage using Computed Tomography (CT) scan images, lung nodules being one of the most important indicators for predicting lung cancer. The developed system consists of four stages. First, the raw CT lung images are preprocessed to enhance image contrast and eliminate noise. Second, the lungs and pulmonary nodule candidates (nodules, blood vessels) are automatically segmented using a two-level thresholding technique and morphological operations. Third, a feature fusion technique that combines four feature extraction methods: first- and second-order statistical features, value histogram features, histogram of oriented gradients features, and gray level co-occurrence matrix texture features based on wavelet coefficients, is used to extract the main features. The fourth stage is the classifier. Three classifiers were used and their performance compared in order to obtain the highest classification accuracy: a multi-layer feed-forward neural network, a radial basis function neural network, and a support vector machine. The performance of the proposed system was assessed using three quantitative parameters: the classification accuracy rate, sensitivity, and specificity. Forty standard CT images containing 320 regions of interest, obtained from an Early Lung Cancer Action Project association, were used to test and evaluate the developed system. The results show that the fused feature vector obtained with a genetic algorithm as the feature selection technique, together with the support vector machine classifier, gives the highest classification accuracy rate, sensitivity, and specificity, of 99.6%, 100%, and 99.2%, respectively.
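
    A hedged sketch of the feature-fusion stage, assuming scikit-image, PyWavelets, and scikit-learn as stand-in libraries (the paper does not specify an implementation): first-order statistics, an intensity histogram, HOG descriptors, and GLCM texture features computed on a wavelet approximation band are concatenated and passed to an SVM. The genetic-algorithm feature selection step and the segmentation stage are omitted.

import numpy as np
import pywt
from skimage.feature import hog, graycomatrix, graycoprops
from sklearn.svm import SVC

def fused_features(roi):
    """roi: 2-D grayscale region of interest with values in [0, 1]."""
    stats = [roi.mean(), roi.std(), roi.min(), roi.max()]        # first-order statistics
    hist, _ = np.histogram(roi, bins=16, range=(0, 1), density=True)
    hog_vec = hog(roi, pixels_per_cell=(8, 8), cells_per_block=(1, 1))
    approx, _ = pywt.dwt2(roi, "db1")                            # wavelet approximation band
    levels = np.clip(approx / approx.max() * 15, 0, 15).astype(np.uint8)
    glcm = graycomatrix(levels, distances=[1], angles=[0], levels=16, normed=True)
    texture = [graycoprops(glcm, p)[0, 0] for p in ("contrast", "homogeneity", "energy")]
    return np.concatenate([stats, hist, hog_vec, texture])

# Toy usage with random patches standing in for CT regions of interest:
rng = np.random.default_rng(0)
X = np.stack([fused_features(rng.random((32, 32))) for _ in range(20)])
y = rng.integers(0, 2, size=20)                                  # nodule / non-nodule labels
clf = SVC(kernel="rbf").fit(X, y)
print(clf.predict(X[:3]))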

    Biomedical Image Analysis for Colon and Lung Cancer Detection Using Tuna Swarm Algorithm With Deep Learning Model

    The domain of Artificial Intelligence (AI) has made important strides recently, leading to developments in several fields, including biomedical diagnostics and research. The application of AI-based systems in biomedical analytics has opened up novel avenues for progress in disease analysis, drug discovery, and treatment. Cancer is the second leading cause of death worldwide; around one in every six people dies from it. Among the many kinds of cancer, colon and lung cancers are among the most frequent and deadliest. Early detection on both fronts significantly reduces the probability of mortality. Deep Learning (DL) and Machine Learning (ML) systems are exploited to speed up such cancer detection, permitting researchers to analyze a large number of patients in less time and at minimal cost. This study develops a new Biomedical Image Analysis for Colon and Lung Cancer Detection using Tuna Swarm Algorithm with Deep Learning (BICLCD-TSADL) model. The presented BICLCD-TSADL technique examines biomedical images for the identification and classification of colon and lung cancer. To accomplish this, the BICLCD-TSADL technique applies Gabor filtering (GF) to preprocess the input images. In addition, the BICLCD-TSADL technique employs a GhostNet feature extractor to create a collection of feature vectors, and AFAO is executed to adjust the hyperparameters of the GhostNet model. Furthermore, the TSA with an echo state network (ESN) classifier is utilized for detecting lung and colon cancer. To demonstrate the improved outcomes of the BICLCD-TSADL system, an extensive experimental analysis is carried out. The comprehensive comparative analysis highlights the greater efficiency of the BICLCD-TSADL technique over other approaches, with a maximum accuracy of 99.33%.
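
    Since the abstract names an echo state network (ESN) classifier, here is a minimal, self-contained ESN sketch (not the BICLCD-TSADL implementation): a fixed random reservoir is driven by a feature sequence and only a ridge-regularized linear readout is trained. Reservoir size, the toy feature sequences, and the two-class labels are assumptions; the Gabor/GhostNet feature pipeline and the tuna swarm tuning are omitted.

import numpy as np

rng = np.random.default_rng(0)

class ESNClassifier:
    def __init__(self, n_in, n_res=200, spectral_radius=0.9):
        self.W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
        W = rng.uniform(-0.5, 0.5, (n_res, n_res))
        self.W = W * (spectral_radius / np.abs(np.linalg.eigvals(W)).max())  # scale reservoir
        self.n_res = n_res

    def _state(self, seq):
        x = np.zeros(self.n_res)
        for u in seq:                       # drive the fixed reservoir with the feature sequence
            x = np.tanh(self.W_in @ u + self.W @ x)
        return x                            # final reservoir state used as the embedding

    def fit(self, seqs, labels, ridge=1e-2):
        X = np.stack([self._state(s) for s in seqs])
        Y = np.eye(labels.max() + 1)[labels]                     # one-hot targets
        self.W_out = np.linalg.solve(X.T @ X + ridge * np.eye(self.n_res), X.T @ Y)
        return self

    def predict(self, seqs):
        X = np.stack([self._state(s) for s in seqs])
        return (X @ self.W_out).argmax(axis=1)

# Toy usage: 30 feature sequences (e.g., patch-wise descriptors) with binary labels.
seqs = [rng.normal(size=(10, 8)) for _ in range(30)]
labels = rng.integers(0, 2, size=30)
print(ESNClassifier(n_in=8).fit(seqs, labels).predict(seqs[:5]))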

    Advancing retinoblastoma detection based on binary arithmetic optimization and integrated features

    Retinoblastoma, the most prevalent pediatric intraocular malignancy, can cause vision loss in children worldwide, while adults may develop uveal melanoma, a hazardous tumor that can expand swiftly and destroy the eye and surrounding tissue. Thus, early retinoblastoma screening in children is essential. The main contribution of this work is the isolation of retinal tumor cells; tumors are also staged and subtyped. The methods let ophthalmologists discover and forecast retinoblastoma malignancy early, and the approach may prevent blindness in infants and adults. Experts in ophthalmology now have more tools at their disposal thanks to the revolution in deep learning techniques. The suggested approach has three stages: pre-processing, segmentation, and classification. The tumor is isolated and labeled on the base image using various image processing techniques; median filtering is first used to smooth the images. The unique selling point of the suggested method is the incorporation of fused features, obtained by combining features produced by deep learning (DL) models such as EfficientNet and CNNs with those obtained by more conventional handcrafted feature extraction methods. Feature selection (FS) is carried out to further enhance the performance of the suggested system. Here, we present BAOA-S and BAOA-V, two binary variations of the recently introduced Arithmetic Optimization Algorithm (AOA), to perform feature selection. The malignancy and the tumor cells are categorized once they have been segmented. The proposed optimization method tunes the algorithm's parameters, making it well suited to multimodal pictures taken with varying disease configurations. The proposed system raises the method's accuracy, sensitivity, and specificity to 100%, 99%, and 99%, respectively, making it an effective and viable alternative to existing solutions.
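
    A simplified wrapper-style sketch of the binary feature-selection step, in the spirit of (but not identical to) the BAOA-S/BAOA-V search described above: candidate binary masks over a stand-in fused feature matrix are scored by cross-validated accuracy, and bit flips that improve the score are kept. The data, classifier, and search budget are assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
X = rng.normal(size=(120, 40))            # stand-in for the fused deep + handcrafted features
y = rng.integers(0, 2, size=120)          # stand-in tumour / non-tumour labels

def fitness(mask):
    if not mask.any():
        return 0.0
    return cross_val_score(LogisticRegression(max_iter=500), X[:, mask], y, cv=3).mean()

best = rng.random(X.shape[1]) < 0.5       # random initial binary mask
best_fit = fitness(best)
for _ in range(50):                       # flip a few bits per iteration, keep improvements
    cand = best.copy()
    flips = rng.integers(0, X.shape[1], size=3)
    cand[flips] = ~cand[flips]
    f = fitness(cand)
    if f > best_fit:
        best, best_fit = cand, f
print(f"selected {best.sum()} of {X.shape[1]} features, CV accuracy {best_fit:.3f}")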

    Chaotic Equilibrium Optimizer-Based Green Communication With Deep Learning Enabled Load Prediction in Internet of Things Environment

    Currently, there is an emerging requirement for applications related to the Internet of Things (IoT). Although the potential of IoT applications is huge, there are recurring limitations, namely energy optimization, heterogeneity of devices, memory, security, privacy, and load balancing (LB), that should be solved. Such constraints must be optimized to enhance the network's efficiency. Hence, the core objective of this study is to formulate an intelligent cluster head (CH) selection method to establish green communication in IoT. Therefore, this study develops a chaotic equilibrium optimizer-based green communication with deep learning-enabled load prediction (CEOGC-DLLP) scheme for the IoT environment. The study recognizes the emerging need for IoT applications and acknowledges the critical challenges, such as energy optimization, device heterogeneity, memory constraints, security, privacy, and load balancing, which are essential to enhancing the efficiency of IoT networks. The presented CEOGC-DLLP technique mainly accomplishes green communication via clustering and future load prediction processes. To do so, the presented CEOGC-DLLP model derives the CEOGC technique with a fitness function encompassing multiple parameters. In addition, the presented CEOGC-DLLP technique uses the deep belief network (DBN) model for the load prediction process, which helps to balance the load among the IoT devices for effective green communication. The experimental assessment of the CEOGC-DLLP technique is performed and the outcomes are investigated under different aspects. The comparison study demonstrates the supremacy of the CEOGC-DLLP method over existing techniques, with a maximum throughput of 64662 packets and a minimum MSE of 0.2956.
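
    A toy sketch of the cluster-head selection idea, assuming a simple weighted fitness over residual energy, distance to the base station, and node degree; the paper's chaotic equilibrium optimizer and its exact fitness terms are not reproduced here, and the DBN load-prediction stage is omitted. All weights, node counts, and positions are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(2)
n_nodes = 50
pos = rng.uniform(0, 100, size=(n_nodes, 2))      # node coordinates in a 100 m x 100 m field
energy = rng.uniform(0.2, 1.0, size=n_nodes)      # residual energy per node (J)
bs = np.array([50.0, 120.0])                      # base-station position

dist_bs = np.linalg.norm(pos - bs, axis=1)
degree = (np.linalg.norm(pos[:, None] - pos[None, :], axis=2) < 20).sum(axis=1) - 1

def minmax(v):
    return (v - v.min()) / (v.max() - v.min() + 1e-12)

# Higher fitness for nodes with more residual energy, a shorter BS distance, and more neighbours.
fitness = 0.5 * minmax(energy) + 0.3 * (1 - minmax(dist_bs)) + 0.2 * minmax(degree)
cluster_heads = np.argsort(fitness)[-5:]          # pick the five best-scoring nodes as CHs
print("selected cluster heads:", cluster_heads)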

    Wavelet Mutation with Aquila Optimization-Based Routing Protocol for Energy-Aware Wireless Communication

    Wireless sensor networks (WSNs) have been developed recently to support several applications, including environmental monitoring, traffic control, smart battlefields, home automation, etc. WSNs include numerous sensors that can be dispersed around a specific node to achieve the computing process. In WSNs, routing becomes a very significant task that should be managed prudently. The main purpose of a routing algorithm is to send data between sensor nodes (SNs) and base stations (BS) to accomplish communication. A good routing protocol should be adaptive and scalable to variations in network topologies; a scalable protocol has to perform well when the workload increases or the network grows larger. Many complexities in routing involve security, energy consumption, scalability, connectivity, node deployment, and coverage. This article introduces a wavelet mutation with Aquila optimization-based routing (WMAO-EAR) protocol for wireless communication. The presented WMAO-EAR technique aims to accomplish an energy-aware routing process in WSNs. To do this, the WMAO-EAR technique first derives the WMAO algorithm by integrating wavelet mutation with the Aquila optimization (AO) algorithm. A fitness function is derived using distinct constraints, such as delay, energy, distance, and security. By setting a mutation probability P, every individual has a probability of mutation after the exploitation and exploration phases, carried out using the wavelet mutation process. To demonstrate the enhanced performance of the WMAO-EAR technique, a comprehensive simulation analysis is conducted. The experimental outcomes establish the superiority of the WMAO-EAR method over other recent approaches.
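
    A hedged sketch of a wavelet-mutation operator of the kind the protocol's name refers to, assuming the common Morlet-shaped form used in the wavelet-mutation literature; the WMAO-EAR specifics (the Aquila optimization loop and the delay/energy/distance/security fitness) are not reproduced. Each gene is perturbed with probability P by a wavelet-shaped factor whose dilation grows with the iteration count, so perturbations shrink as the search matures; constants and bounds are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(3)

def wavelet_mutation(x, lower, upper, t, t_max, p=0.2, shape=2.0):
    a = np.exp(shape * np.log(t_max) * (t / t_max))      # dilation grows with iteration t
    phi = rng.uniform(-2.5 * a, 2.5 * a, size=x.shape)
    sigma = np.exp(-(phi / a) ** 2 / 2) * np.cos(5 * phi / a) / np.sqrt(a)  # Morlet-shaped factor
    mutated = np.where(sigma > 0, x + sigma * (upper - x), x + sigma * (x - lower))
    mask = rng.random(x.shape) < p                       # mutate each gene with probability p
    return np.clip(np.where(mask, mutated, x), lower, upper)

# Toy usage: a 10-dimensional candidate (e.g., encoded next-hop weights) bounded in [0, 1].
x = rng.random(10)
print(wavelet_mutation(x, lower=0.0, upper=1.0, t=10, t_max=100))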

    Minimizing energy consumption for NOMA multi-drone communications in automotive-industry 5.0

    The forthcoming era of the automotive industry, known as Automotive-Industry 5.0, will leverage the latest advancements in 6G communications technology to enable reliable, computationally advanced, and energy-efficient exchange of data between diverse onboard sensors, drones, and other vehicles. We propose a non-orthogonal multiple access (NOMA) multi-drone communications network in order to address the requirements of massive connectivity, diverse quality of service (QoS), ultra-reliability, and low latency in upcoming sixth-generation (6G) drone communications. Through the use of a power optimization framework, one of our goals is to evaluate the energy efficiency of the system. In particular, we define a non-convex power optimization problem while considering the possibility of imperfect successive interference cancellation (SIC) detection. The goal is to reduce the total energy consumption of NOMA drone communications while guaranteeing a minimum rate for wireless devices. We use a method based on iterative sequential quadratic programming (SQP) to obtain the best possible solution to the non-convex optimization problem. The proposed optimization framework is compared with the standard OMA framework, the Karush–Kuhn–Tucker (KKT)-based NOMA framework, and the average-power NOMA framework. The results of the Monte Carlo simulation demonstrate the accuracy of our derivations. The presented results also demonstrate that the proposed optimization framework is superior to previous benchmark frameworks in terms of achievable system energy efficiency.
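
    To make the power-minimisation idea concrete, here is a deliberately simplified two-user downlink illustration solved with SciPy's SLSQP (a sequential quadratic programming routine), not the paper's multi-drone framework: the channel gains, noise power, imperfect-SIC residual factor, and rate target are all illustrative assumptions.

import numpy as np
from scipy.optimize import minimize

g = np.array([0.3, 1.0])     # channel gains: user 1 (weak), user 2 (strong)
n0 = 0.05                    # noise power
eps = 0.05                   # residual interference factor from imperfect SIC
r_min = 1.0                  # minimum rate per user (bits/s/Hz)

def rates(p):
    r1 = np.log2(1 + p[0] * g[0] / (p[1] * g[0] + n0))        # user 1 treats user 2 as interference
    r2 = np.log2(1 + p[1] * g[1] / (eps * p[0] * g[1] + n0))  # user 2 decodes after imperfect SIC
    return np.array([r1, r2])

res = minimize(
    fun=lambda p: p.sum(),                                    # total transmit power
    x0=np.array([1.0, 1.0]),
    bounds=[(1e-6, 10.0)] * 2,
    constraints=[{"type": "ineq", "fun": lambda p: rates(p) - r_min}],
    method="SLSQP",
)
print("optimal powers:", res.x, "achieved rates:", rates(res.x))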

    Explainable Artificial Intelligence Enabled TeleOphthalmology for Diabetic Retinopathy Grading and Classification

    Telehealth connects patients to vital healthcare services via remote monitoring, wireless communications, videoconferencing, and electronic consults. By increasing access to specialists and physicians, telehealth helps ensure that patients receive the proper care at the right time and in the right place. Teleophthalmology is a branch of telemedicine that provides eye-care services using digital medical equipment and telecommunication technologies. Multimedia computing with Explainable Artificial Intelligence (XAI) for telehealth has the potential to revolutionize various aspects of our society, but several technical challenges should be resolved before this potential can be realized. Advances in artificial intelligence methods and tools reduce waste and wait times, provide service efficiency and better insights, and increase speed, accuracy, and productivity in medicine and telehealth. Therefore, this study develops an XAI-enabled teleophthalmology model for diabetic retinopathy grading and classification (XAITO-DRGC). The proposed XAITO-DRGC model utilizes OphthoAI IoMT headsets to enable remote monitoring of diabetic retinopathy (DR). To accomplish this, the XAITO-DRGC model applies median filtering (MF) and contrast enhancement as a pre-processing step. In addition, the XAITO-DRGC model applies U-Net-based image segmentation and a SqueezeNet-based feature extractor. Moreover, the Archimedes optimization algorithm (AOA) with a bidirectional gated recurrent convolutional unit (BGRCU) is exploited for DR detection and classification. The experimental validation of the XAITO-DRGC method is performed using a benchmark dataset and the outcomes are assessed under distinct aspects. Extensive comparison studies demonstrate the improvements of the XAITO-DRGC model over recent approaches.
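
    A hedged sketch of the pre-processing and feature-extraction stages named above (median filtering, contrast enhancement, SqueezeNet features), not the XAITO-DRGC implementation: SciPy and scikit-image for the filtering and contrast step and a torchvision SqueezeNet backbone are stand-in choices, and the U-Net segmentation and BGRCU classifier are omitted.

import numpy as np
import torch
from scipy.ndimage import median_filter
from skimage.exposure import equalize_adapthist
from torchvision.models import squeezenet1_1

def preprocess(fundus):
    """fundus: H x W x 3 float image with values in [0, 1]."""
    smoothed = median_filter(fundus, size=(3, 3, 1))                  # median filtering per channel
    return np.stack([equalize_adapthist(smoothed[..., c]) for c in range(3)], axis=-1)

backbone = squeezenet1_1(weights=None).features.eval()                # convolutional stages only

with torch.no_grad():
    img = preprocess(np.random.rand(224, 224, 3))                     # stand-in retinal image
    x = torch.from_numpy(img).permute(2, 0, 1).unsqueeze(0).float()   # (1, 3, 224, 224)
    feats = backbone(x)                                               # (1, 512, 13, 13) feature maps
    vector = feats.mean(dim=(2, 3))                                   # pooled descriptor for a classifier
print(vector.shape)  # torch.Size([1, 512])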