Reinforced concrete bridge damage detection using arithmetic optimization algorithm with deep feature fusion
Inspection of Reinforced Concrete (RC) bridges is critical to ensuring their safety and carrying out essential maintenance work. Early defect detection is vital to maintaining the stability of concrete bridges. Current bridge maintenance protocols rely mainly on manual visual inspection, which is subjective, unreliable, and labour-intensive. In contrast, computer vision based on deep learning is regarded as the state of the art for structural damage detection because it can be trained end-to-end without feature engineering. The classification process helps authorities and engineers understand the safety level of a bridge, make informed decisions about rehabilitation or replacement, and prioritise repair and maintenance efforts. Against this background, the current study develops an RC Bridge Damage Detection using an Arithmetic Optimization Algorithm with Deep Feature Fusion (RCBDD-AOADFF) method. The purpose of the proposed RCBDD-AOADFF technique is to identify and classify different kinds of defects in RC bridges. In the presented RCBDD-AOADFF technique, feature fusion is performed using the Darknet-19 and NASNet-Mobile models. For the damage classification process, an attention-based Long Short-Term Memory (ALSTM) model is used. To enhance the classification results of the ALSTM model, the AOA is applied for hyperparameter selection. The performance of the RCBDD-AOADFF method was validated on an RC bridge damage dataset. The extensive analysis revealed the potential of the RCBDD-AOADFF technique for RC bridge damage detection.
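The abstract does not specify how the two backbones' outputs are combined; a common feature-fusion strategy, sketched below on hypothetical toy vectors, is simple concatenation of the per-image embeddings (here stand-ins for Darknet-19 and NASNet-Mobile features, with illustrative dimensions):

```python
# Minimal sketch of a concatenation-based feature-fusion step.
# The feature vectors are hypothetical placeholders; in the real pipeline
# they would be embeddings produced by the two pretrained backbones.

def fuse_features(feats_a, feats_b):
    """Concatenate two per-image feature vectors into one fused descriptor."""
    return list(feats_a) + list(feats_b)

darknet_feats = [0.12, 0.87, 0.33]  # toy stand-in for a Darknet-19 embedding
nasnet_feats = [0.45, 0.09]         # toy stand-in for a NASNet-Mobile embedding
fused = fuse_features(darknet_feats, nasnet_feats)
# the fused descriptor has len(feats_a) + len(feats_b) dimensions
```

The fused descriptor would then be fed to the downstream ALSTM classifier.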
Short-Term Load Forecasting in Smart Grids Using Hybrid Deep Learning
Load forecasting in Smart Grids (SG) is a major module of current energy management systems; it plays a vital role in optimizing resource allocation, improving grid stability, and assisting the integration of renewable energy sources (RES). It involves predicting electricity consumption patterns over certain time intervals. Load forecasting remains a challenging task because load data exhibits changing patterns driven by factors such as weather and shifts in energy usage behaviour. The advent of advanced data analytics and machine learning (ML) approaches, particularly deep learning (DL), has greatly enhanced load forecasting accuracy. Deep neural networks (DNNs) such as Long Short-Term Memory (LSTM) and Convolutional Neural Networks (CNN) have gained popularity for their capability to capture difficult temporal dependencies in load data. This study designs a Short-Term Load Forecasting scheme using a Hybrid Deep Learning and Beluga Whale Optimization (LFS-HDLBWO) approach. The major intention of the LFS-HDLBWO technique is to predict the load in the SG environment. To accomplish this, the LFS-HDLBWO technique initially uses a Z-score normalization approach for scaling the input dataset. Besides, the LFS-HDLBWO technique makes use of a convolutional bidirectional long short-term memory with autoencoder (CBLSTM-AE) model for load prediction. Finally, the BWO algorithm is used for optimal hyperparameter selection of the CBLSTM-AE algorithm, which helps to enhance the overall prediction results. A wide-ranging experimental analysis was made to illustrate the improved predictive results of the LFS-HDLBWO method. The obtained values demonstrate the outstanding performance of the LFS-HDLBWO system over other existing DL algorithms, with minimum average error rates of 3.43 and 2.26 on the FE and Dayton grid datasets, respectively.
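The Z-score scaling step mentioned in the abstract is standard: each value is shifted by the series mean and divided by its standard deviation, giving zero mean and unit variance. A minimal sketch on hypothetical hourly load values:

```python
from statistics import mean, pstdev

def z_score(series):
    """Scale a load series to zero mean and unit variance (Z-score)."""
    mu, sigma = mean(series), pstdev(series)
    return [(x - mu) / sigma for x in series]

# Hypothetical hourly load readings (MW); real inputs would come from the grid dataset
loads = [310.0, 295.5, 330.2, 301.8, 318.4]
scaled = z_score(loads)
```

After scaling, the series is suitable as input to the CBLSTM-AE model regardless of the original magnitude of the load values.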
Intelligent model for the detection and classification of encrypted network traffic in cloud infrastructure
This article explores detecting and categorizing network traffic data using machine-learning (ML) methods, specifically focusing on the Domain Name System (DNS) protocol. DNS has long been susceptible to various security flaws that have frequently been exploited over time, making DNS abuse a major concern in cybersecurity. Despite the advanced attack tactics employed by attackers to steal data in real time, ensuring security and privacy for DNS queries and answers remains challenging. The evolving landscape of internet services has allowed attackers to launch cyber-attacks on computer networks. However, implementing Secure Socket Layer (SSL)-encrypted Hyper Text Transfer Protocol (HTTP) transmission, known as HTTPS, has significantly reduced DNS-based assaults. To further enhance security and mitigate threats like man-in-the-middle attacks, the security community has developed DNS over HTTPS (DoH), which aims to prevent the eavesdropping and tampering of DNS data during communication. This study employs an ML-based classification approach on a traffic-analysis dataset. The AdaBoost model effectively classified malicious and non-DoH traffic, with accuracies of 75% and 73% for DoH traffic. The support vector classifier with a Radial Basis Function kernel (SVC-RBF) achieved 76% accuracy in distinguishing malicious from non-DoH traffic. The quadratic discriminant analysis (QDA) model achieved 99% accuracy in classifying malicious traffic and 98% in classifying non-DoH traffic.
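The abstract reports per-model accuracies but does not spell out the metric; the underlying computation is the fraction of flows whose predicted class matches the ground-truth label. A minimal sketch on hypothetical flow labels (the class names and predictions below are illustrative, not from the paper's dataset):

```python
def accuracy(y_true, y_pred):
    """Fraction of traffic flows whose predicted class matches the label."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

# Hypothetical labels and predictions for six flows
labels = ["doh", "malicious", "non-doh", "doh", "malicious", "non-doh"]
preds  = ["doh", "malicious", "doh",     "doh", "non-doh",   "non-doh"]
acc = accuracy(labels, preds)  # 4 of 6 flows classified correctly
```

The same metric, computed per class, yields figures like the 75%/73% AdaBoost results quoted above.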
An Optimized Location-Based System for the Improvement of E-Commerce Systems
Wireless technology plays an essential role in every field of life, especially in e-commerce applications, owing to its high data rate, mobility-aware communication, support for real-time processing, security measures, and reliability. Improving e-commerce applications with wireless technology requires a more efficient, secure, and resource-friendly mobility solution for the IoT in heterogeneous environments. In such an environment, Proxy Mobile IPv6 (PMIPv6) provides a resource-efficient and cost-effective mobility solution for large-scale IoT in terms of reducing handover latency and required signaling. This paper achieves an energy-efficient 5G-enabled IoT that supports improved mobility management in e-commerce through reduced signaling cost, lower handover latency, and efficient buffering. To integrate mobile and dynamic protocols, optimized location-based PMIPv6 extensions are proposed that effectively utilize the information of network entities and provide efficient mobility management within a massive IoT environment. Location information, along with RSS measurements, is important in e-commerce to improve the prediction accuracy of the handover moment and eliminate the problem of early handover initiation. For cost analysis of the existing and proposed protocol extensions, mathematical models are derived and implemented in Java.
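The abstract derives mathematical cost models but does not reproduce them; analytical PMIPv6 studies commonly express signaling cost as handover frequency times messages per handover times per-message cost. The sketch below is a generic model of that shape with entirely hypothetical parameter values, not the paper's actual derivation:

```python
def signaling_cost(handover_rate, msgs_per_handover, cost_per_msg):
    """Total signaling cost per unit time:
    handovers/s x control messages per handover x unit transmission cost."""
    return handover_rate * msgs_per_handover * cost_per_msg

# Hypothetical parameters: 0.2 handovers/s, unit message cost 1.5.
# A location-based extension that predicts the handover moment could cut
# the number of control messages exchanged per handover (6 -> 4 here).
base = signaling_cost(0.2, 6, 1.5)       # baseline PMIPv6
optimized = signaling_cost(0.2, 4, 1.5)  # fewer messages after prediction
```

Such a model makes the claimed signaling-cost reduction directly comparable between the existing and proposed extensions.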
A deep learning framework for the early detection of multi-retinal diseases.
Retinal images make a pivotal contribution to the diagnosis of various ocular conditions by ophthalmologists. Extensive research has been conducted to enable early detection and timely treatment using deep learning algorithms on retinal fundus images. Deep learning models can process images rapidly and deliver outcomes instantly, facilitating quick diagnosis and treatment planning. Our research aims to provide a non-invasive method for early detection and timely treatment of eye disease using a Convolutional Neural Network (CNN). We used the Retinal Fundus Multi-disease Image Dataset (RFMiD), which contains various categories of fundus images representing different eye diseases, including Media Haze (MH), Optic Disc Cupping (ODC), Diabetic Retinopathy (DR), and healthy images (WNL). Several pre-processing techniques were applied to improve the model's performance, such as data augmentation, cropping, resizing, dataset splitting, converting images to arrays, and one-hot encoding. The CNNs extract pertinent features from the input color fundus images, and these extracted features are employed to make predictive diagnostic decisions. In this article, three CNN models were used to perform experiments. The models' performance is assessed using statistical metrics such as accuracy, F1 score, recall, and precision. Based on the results, the developed framework demonstrates promising performance, with accuracy rates of up to 89.81% for validation and 88.72% for testing using a 12-layer CNN with data augmentation. The 20-layer CNN attains 90.34% validation accuracy and 89.59% testing accuracy with augmented data; although its accuracy is higher, this model shows overfitting. These accuracy rates suggest that the deep learning model has learned to distinguish effectively between the different eye disease categories and healthy images. This study's contribution lies in providing a reliable and efficient diagnostic system for the simultaneous detection of multiple eye diseases through the analysis of color fundus images.
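One of the pre-processing steps listed above, one-hot encoding, turns each class label into a binary indicator vector so the CNN's softmax output can be compared against it. A minimal sketch using the class abbreviations named in the abstract (the class ordering is an assumption):

```python
def one_hot(label, classes):
    """Encode a disease label as a one-hot vector over the class list."""
    return [1 if c == label else 0 for c in classes]

# Class abbreviations from the abstract; their order here is illustrative
classes = ["WNL", "MH", "ODC", "DR"]
vec = one_hot("DR", classes)  # a single 1 in the position of the DR class
```

Each training image's label is encoded this way before being paired with the image array for supervised training.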
GenVis: Visualizing Genre Detection in Movie Trailers for Enhanced Understanding
Automatic movie genre detection is vital for improving content recommendations, user experiences, and organization. Multi-label genre detection assigns multiple labels to a movie and recognizes a movie's diverse themes. Although many existing methods generate multiple genre labels for movies, they do not provide comprehensive analysis or visual depiction. This work introduces GenVis, a visualization system that provides a better understanding of the multi-label genres extracted from movie trailers. The system first uses text and visual features to classify trailers and assign multiple genre labels with probabilities. GenVis then provides four visualization views: a video view for trailer observation, an overall genre view for insights into genre distribution, a genre timeline view for temporal genre evolution, and a genre flow summary for more focused genre analysis. The system allows users to pause frames, sort results, and process multiple videos. The multi-label classification is rigorously evaluated using MSE, cross-entropy loss, precision, recall, and F1-score metrics, achieving high accuracy and demonstrating strong genre correlations, with notable precision in classifying and distinguishing movie genres. Additionally, a user evaluation demonstrated the effectiveness and intuitive usability of GenVis, with a high overall rating of 4.25 out of 5.0.
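The abstract says the classifier assigns multiple genre labels with probabilities but not how probabilities become labels; a common multi-label convention, sketched below on hypothetical trailer probabilities, is to keep every genre whose probability clears a threshold (the 0.5 cutoff and the genre names are assumptions for illustration):

```python
def assign_genres(probs, threshold=0.5):
    """Turn per-genre probabilities into a multi-label genre assignment."""
    return sorted(g for g, p in probs.items() if p >= threshold)

# Hypothetical per-genre probabilities for one trailer
probs = {"action": 0.82, "comedy": 0.15, "drama": 0.64, "horror": 0.05}
genres = assign_genres(probs)  # every genre at or above the 0.5 threshold
```

The retained probabilities, not just the labels, are what the timeline and genre-flow views visualize over the course of the trailer.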
Improved Coyote Optimization Algorithm and Deep Learning Driven Activity Recognition in Healthcare
Healthcare is an area where the application of human-centred design practices and principles can enormously affect well-being and patient care. The provision of high-quality healthcare services requires a deep understanding of patients' needs, experiences, and preferences. Human activity recognition (HAR) is paramount in healthcare monitoring, using machine learning (ML), sensor data, and artificial intelligence (AI) to track and discern individuals' behaviours and physical movements. This technology allows healthcare professionals to monitor patients remotely, ensure they adhere to prescribed rehabilitation or exercise routines, and identify falls or anomalies, improving overall patient care and safety. HAR for healthcare monitoring, driven by deep learning (DL) algorithms, leverages neural networks and large quantities of sensor information to detect and track patients' behaviours and physical activities autonomously and accurately. DL-based HAR provides a cutting-edge solution that lets healthcare professionals deliver precise and more proactive interventions, reducing the burden on healthcare systems and improving patient well-being while increasing the overall quality of care. Therefore, this study presents an improved coyote optimization algorithm with a deep learning-assisted HAR (ICOADL-HAR) approach for healthcare monitoring. The purpose of the ICOADL-HAR technique is to analyze patients' sensor information to determine the different kinds of activities. In the primary stage, the ICOADL-HAR model applies a data normalization process using the Z-score approach. For activity recognition, the ICOADL-HAR technique employs an attention-based long short-term memory (ALSTM) model. Finally, the hyperparameters of the ALSTM model are tuned using the ICOA. The simulation validation of the ICOADL-HAR model takes place using benchmark HAR datasets. The wide-ranging comparison analysis highlighted the improved recognition rate of the ICOADL-HAR method compared to other existing HAR approaches in terms of various measures.
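The attention mechanism in an ALSTM typically scores each LSTM hidden state, softmaxes the scores into weights, and returns the weighted average as a context vector for classification. A minimal numeric sketch with hypothetical hidden states and scores (a real ALSTM would learn the scores from the states themselves):

```python
from math import exp

def attention_pool(hidden_states, scores):
    """Softmax the attention scores, then weight-average the hidden states."""
    exps = [exp(s) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(hidden_states[0])
    return [sum(w * h[d] for w, h in zip(weights, hidden_states))
            for d in range(dim)]

# Three hypothetical 2-D LSTM hidden states and their attention scores
states = [[0.2, 0.5], [0.9, 0.1], [0.4, 0.4]]
context = attention_pool(states, scores=[1.0, 2.0, 0.5])
```

With equal scores the pooling reduces to a plain average; higher scores let the model emphasise the timesteps most indicative of an activity.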
Enhanced Pelican Optimization Algorithm with Deep Learning-Driven Mitotic Nuclei Classification on Breast Histopathology Images
Breast cancer (BC) is a prevalent disease worldwide, and accurate diagnosis is vital for successful treatment. Histopathological (HI) inspection, particularly the detection of mitotic nuclei, plays a pivotal role in the prognosis and diagnosis of BC. It involves the detection and classification of mitotic nuclei within breast tissue samples. Conventionally, the detection of mitotic nuclei has been a subjective task that is time-consuming for pathologists to perform manually. Automatic classification using computer algorithms, especially deep learning (DL) algorithms, has been developed as a beneficial alternative. DL, and CNNs in particular, have shown outstanding performance in different image classification tasks, including mitotic nuclei classification. CNNs can learn intricate hierarchical features from HI images, making them suitable for detecting subtle patterns related to mitotic nuclei. In this article, we present an Enhanced Pelican Optimization Algorithm with Deep Learning-Driven Mitotic Nuclei Classification (EPOADL-MNC) technique on breast HI. The developed EPOADL-MNC system examines histopathology images to classify mitotic and non-mitotic cells. In the presented EPOADL-MNC technique, the ShuffleNet model is employed for feature extraction. In the hyperparameter tuning procedure, the EPOADL-MNC algorithm uses the EPOA to adjust the hyperparameters of the ShuffleNet model. Finally, an adaptive neuro-fuzzy inference system (ANFIS) is used for the classification and detection of mitotic cell nuclei in histopathology images. A series of simulations took place to validate the improved detection performance of the EPOADL-MNC technique. The comprehensive outcomes highlighted the better results of the EPOADL-MNC algorithm compared to existing DL techniques, with a maximum accuracy of 97.83%.
Gastrointestinal Cancer Detection and Classification Using African Vulture Optimization Algorithm With Transfer Learning
Gastrointestinal (GI) cancer comprises esophageal, gastric, colon, and rectal tumors. The diagnosis of GI cancer often relies on medical imaging modalities such as magnetic resonance imaging (MRI), histopathological slides, endoscopy, and computed tomography (CT) scans, which provide particular details about the size, location, and characteristics of tumors. Although many prognostic and predictive biomarkers exist, the high death rate among GI cancer patients shows that improved analysis toward more personalized therapy strategies could lead to better prognoses and fewer side effects. Gastrointestinal cancer classification is a challenging but vital area of research and application within medical imaging and machine learning. Artificial intelligence (AI)-based diagnostic support systems, especially convolutional neural network (CNN)-based image examination tools, have enormous potential in medical computer vision. This study presents a GI Cancer Detection and Classification utilizing the African Vulture Optimization Algorithm with Transfer Learning (GICDC-AVOADL) methodology. The major aim of the GICDC-AVOADL model is to examine GI images for the identification of cancer. To achieve this, the GICDC-AVOADL method makes use of an improved EfficientNet-B5 model to learn features from the input images. Furthermore, AVOA is exploited for optimum hyperparameter selection of the improved EfficientNet-B5 model. The GICDC-AVOADL technique applies a dilated convolutional autoencoder (DCAE) for GI cancer detection and classification. A complete set of simulations was conducted to examine the enhanced GI cancer detection performance of the GICDC-AVOADL technique. The extensive results inferred the superior performance of the GICDC-AVOADL algorithm over current models.
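AVOA, like the other metaheuristics in this collection, searches a hyperparameter space for the setting that minimises a validation objective. The sketch below illustrates only the outer selection loop, using exhaustive grid search as a deliberately simplified stand-in for the population-based AVOA, with a hypothetical search space and a toy loss surrogate:

```python
from itertools import product

def grid_search(objective, space):
    """Simplified, exhaustive stand-in for metaheuristic hyperparameter
    selection: score every combination and keep the best one."""
    keys = list(space)
    best_params, best_score = None, float("inf")
    for combo in product(*(space[k] for k in keys)):
        params = dict(zip(keys, combo))
        score = objective(params)  # in practice: validation loss of the model
        if score < best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Hypothetical search space and a toy validation-loss surrogate
space = {"lr": [1e-4, 1e-3, 1e-2], "batch": [16, 32, 64]}
loss = lambda p: abs(p["lr"] - 1e-3) + abs(p["batch"] - 32) / 100
best, score = grid_search(loss, space)
```

A real AVOA run replaces the exhaustive loop with guided sampling, which matters when each objective evaluation means training a network.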
Transition-aware human activity recognition using an ensemble deep learning framework
Understanding human activities in daily life is of utmost importance, especially in the context of personalized and adaptive ubiquitous learning. Although existing HAR systems perform well in identifying activities based on their inter-spatial and temporal relationships, they fall short in accurately detecting postural transitions, which not only enhances the activity recognition rate and reduces the error rate but also motivates the exploration and development of hybrid models. It is in this context that we propose an ensemble of a 1D-CNN and an LSTM for the task of postural transition recognition, facilitated by wireless computing and wearable sensors. The pursuit of ubiquitous learning will ultimately lead to the creation of adaptive devices enabled by various data analysis and relation learning techniques, and our approach is one of the methods that can be incorporated to enable seamless learning and acquire correlations with adaptive learning techniques. The experimental results on testing datasets, including the newly produced HAPT (Human Activities and Postural Transitions) dataset, show better classification accuracy than existing state-of-the-art HAR approaches (97.84% for transitional activities and 99.04% for dynamic human activities), indicating the capability of the model in ubiquitous learning scenarios and personalized and adaptive human learning environments.
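The abstract does not state how the 1D-CNN and LSTM outputs are fused; one common ensemble strategy, sketched below on hypothetical softmax outputs and illustrative activity labels, is to average the two models' class-probability vectors and predict the argmax:

```python
def ensemble_average(probs_cnn, probs_lstm):
    """Average the class-probability vectors of the two base models."""
    return [(a + b) / 2 for a, b in zip(probs_cnn, probs_lstm)]

def predict(probs, classes):
    """Pick the class with the highest averaged probability."""
    return classes[probs.index(max(probs))]

classes = ["walking", "sitting", "sit-to-stand"]  # illustrative labels
cnn_out = [0.55, 0.10, 0.35]   # hypothetical 1D-CNN softmax output
lstm_out = [0.20, 0.15, 0.65]  # hypothetical LSTM softmax output
fused = ensemble_average(cnn_out, lstm_out)
activity = predict(fused, classes)
```

Averaging lets the temporally oriented LSTM override the CNN on transition-heavy windows, and vice versa for stable dynamic activities.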